CN111930844B - Financial prediction system based on block chain and artificial intelligence - Google Patents

Financial prediction system based on block chain and artificial intelligence Download PDF

Info

Publication number
CN111930844B
Authority
CN
China
Prior art keywords
financial
data
particle
value
iteration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010800385.5A
Other languages
Chinese (zh)
Other versions
CN111930844A (en)
Inventor
刘星 (Liu Xing)
罗忠明 (Luo Zhongming)
肖岩 (Xiao Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiao Yan
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010800385.5A
Publication of CN111930844A
Application granted
Publication of CN111930844B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/06 Asset management; Financial planning or analysis

Abstract

A financial prediction system based on blockchain and artificial intelligence comprises a data acquisition module, a data preprocessing module, a blockchain storage module and a financial prediction module. The data acquisition module acquires a financial time series and inputs it into the data preprocessing module for processing; the data preprocessing module removes noise data from the financial time series and transmits the preprocessed series to the blockchain storage module for storage; the financial prediction module retrieves the financial time series from the blockchain storage module and predicts the trend of the financial data according to that series, using a BP (back-propagation) neural network. The beneficial effect of the invention is that it realizes effective prediction of the trend of a financial time series, which is of great significance for governments, investment institutions and investors.

Description

Financial prediction system based on block chain and artificial intelligence
Technical Field
The invention relates to the field of finance, in particular to a financial prediction system based on a block chain and artificial intelligence.
Background
Time series are generated continuously and sequentially, at various time intervals, by the data of many industries, and they usually contain rich and complex information. Time series analysis techniques arose because people need to obtain valuable information from such series. A key link in the field of time series analysis is prediction: time series prediction makes a reasoned conjecture about the future development of the data from its historical regularities and trends. Financial time series are the most important data in the financial field, and their analysis and prediction are of great significance for financial investment decisions and risk management.
Disclosure of Invention
In view of the above problems, the present invention is directed to a financial prediction system based on blockchains and artificial intelligence.
The purpose of the invention is realized by the following technical scheme:
A financial forecasting system based on blockchain and artificial intelligence comprises a data acquisition module, a data preprocessing module, a blockchain storage module and a financial forecasting module. The data acquisition module acquires a financial time series and inputs it into the data preprocessing module for processing; the data preprocessing module removes noise data from the financial time series and transmits the preprocessed series to the blockchain storage module for storage; the financial prediction module retrieves the financial time series from the blockchain storage module and predicts the trend of the financial data according to that series. The financial prediction module predicts the trend of the financial time series with a BP (back-propagation) neural network, and a particle swarm algorithm is adopted to optimize the initial weights and thresholds of the BP neural network used by the financial prediction module. The particle swarm algorithm is defined to update in the following way:
v_i(t+1) = ω_i(t)·v_i(t) + c_1·r_1·(p'_i(t) − x_i(t)) + c_2·r_2·(g(t) − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
In the formulas, ω_i(t) denotes the inertia weight factor of particle i at the t-th iteration; v_i(t+1) and x_i(t+1) denote the step size and position of particle i at the (t+1)-th iteration; v_i(t) and x_i(t) denote the step size and position of particle i at the t-th iteration; c_1 and c_2 are learning factors; r_1 and r_2 are random numbers between 0 and 1; g(t) denotes the global optimal solution; and p'_i(t) denotes the local learning solution of particle i at the t-th iteration. The value of p'_i(t) is determined in the following manner:
Let P(t) denote the set of individual optimal solutions of the particles in the swarm at the t-th iteration, P(t) = {p_i(t), i = 1, 2, …, N}, where p_i(t) denotes the individual optimal solution of particle i at the t-th iteration and N denotes the number of particles in the swarm. Let M(t) denote the local classification number of the particle swarm algorithm at the t-th iteration; its expression is given as an image in the original publication (formula image not reproduced).
Define the local detection coefficient of the individual optimal solution p_i(t) as ε_i(t); the expression of ε_i(t) is given as an image in the original publication (formula image not reproduced).
In the formula, x_l(t) denotes the position of particle l at the t-th iteration; ρ_1(p_i(t), x_l(t)) denotes a first value function, and d(p_i(t), x_l(t)) denotes the Euclidean distance between the individual optimal solution p_i(t) and the position x_l(t): when d(p_i(t), x_l(t)) < D, ρ_1(p_i(t), x_l(t)) = 1, and when d(p_i(t), x_l(t)) > D, ρ_1(p_i(t), x_l(t)) = 0, where D is a given distance threshold whose expression is given as an image in the original (formula image not reproduced). Further, p_j(t) denotes the individual optimal solution of particle j at the t-th iteration; d(p_i(t), p_j(t)) denotes the Euclidean distance between the individual optimal solutions p_i(t) and p_j(t); p_i(e) denotes the individual optimal solution of particle i at the e-th iteration; ρ_2(p_i(t), p_i(e)) denotes a second value function, with ρ_2(p_i(t), p_i(e)) = 1 when p_i(t) = p_i(e) and ρ_2(p_i(t), p_i(e)) = 0 when p_i(t) ≠ p_i(e); h_i(t) denotes the fitness function value of particle i at the t-th iteration; and h_max(t) and h_min(t) denote the maximum and minimum fitness function values of the particles in the swarm at the t-th iteration.
The individual optimal solutions in the set P(t) are sorted from small to large by the value of their local detection coefficients, the first M(t) individual optimal solutions are selected as candidate local learning solutions, and the selected M(t) candidate local learning solutions form the set P'(t). For each particle in the swarm, the candidate local learning solution in P'(t) closest to that particle is selected as its local learning solution (the selection formulas are given as images in the original and are not reproduced), where p_r(t) denotes the individual optimal solution of particle r at the t-th iteration and d(x_i(t), p_r(t)) denotes the Euclidean distance between the position x_i(t) and the individual optimal solution p_r(t).
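For concreteness, the selection of local learning solutions and the modified velocity update can be sketched as follows in Python. The expressions for M(t), ε_i(t) and the threshold D are given only as images in the source, so the linear growth of M(t), the additive combination inside the detection coefficient, and the learning factors c_1 = c_2 = 1.5 are illustrative assumptions; only the overall flow (sort the individual bests by ε, keep the first M(t), let each particle learn from the nearest candidate) follows the text.

```python
import numpy as np

def local_detection_coefficients(positions, pbest, fitness, stagnation, D):
    """Stand-in for epsilon_i(t): combines (a) the density of particles within
    distance D of each individual best, (b) how often that best has stagnated,
    and (c) its normalized fitness, so that sparse neighbourhoods, few
    stagnations and low fitness all give small coefficients."""
    N = len(positions)
    h_min, h_max = fitness.min(), fitness.max()
    h_norm = (fitness - h_min) / (h_max - h_min + 1e-12)
    stag_norm = stagnation / (stagnation.max() + 1.0)
    density = np.array([
        np.mean(np.linalg.norm(positions - pbest[i], axis=1) <= D)
        for i in range(N)
    ])
    return density + stag_norm + h_norm

def select_local_learning_solutions(positions, pbest, fitness, stagnation, t, t_max, D):
    """Keep the M(t) individual bests with the smallest detection coefficients
    (set P'(t)), then give each particle the candidate closest to its position."""
    N = len(positions)
    M_t = max(1, int(np.ceil(N * (t + 1) / t_max)))   # assumed: M(t) grows with the iteration count
    eps = local_detection_coefficients(positions, pbest, fitness, stagnation, D)
    candidates = pbest[np.argsort(eps)[:M_t]]
    idx = np.argmin(np.linalg.norm(candidates[None, :, :] - positions[:, None, :], axis=2), axis=1)
    return candidates[idx]                             # p'_i(t) for every particle

def pso_update(positions, velocities, local, gbest, omega, c1=1.5, c2=1.5):
    """Velocity and position update of the text, with p'_i(t) as the local term."""
    N, dim = positions.shape
    r1, r2 = np.random.rand(N, dim), np.random.rand(N, dim)
    velocities = (omega[:, None] * velocities
                  + c1 * r1 * (local - positions)
                  + c2 * r2 * (gbest - positions))
    return positions + velocities, velocities
```

Here `omega` is a per-particle array of inertia weight factors, matching the per-particle adjustment described further below.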
Preferably, the fitness function h of the particle swarm algorithm is defined as:
(formula image not reproduced)
In the formula, y_u denotes the output value of the u-th training sample, o_u denotes the target value of the u-th training sample, and M denotes the number of training samples.
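A minimal sketch of such a fitness evaluation follows, assuming the image formula is the usual mean squared error between network outputs and targets (the source only names y_u, o_u and M, and states elsewhere that smaller fitness is better); the `decode` helper that turns a particle into a runnable BP network is hypothetical.

```python
import numpy as np

def fitness(particle, decode, X, targets):
    """Assumed MSE fitness: h = (1/M) * sum_u (y_u - o_u)^2, where y_u is the
    BP-network output for sample u after decoding the particle into weights
    and thresholds. The exact expression is an image in the source."""
    predict = decode(particle)          # hypothetical: returns a callable forward pass
    y = predict(X)                      # network outputs y_u for all M samples
    return np.mean((y - targets) ** 2)  # smaller is better, matching the text
```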
Preferably, the expression of the inertia weight factor ω_i(t) of particle i at the t-th iteration is given as an image in the original publication (formula image not reproduced). In it, ω_max and ω_min denote the maximum and minimum inertia weight factors respectively, t denotes the current iteration number, T_max denotes the maximum number of iterations, and δ_i(t) denotes the inertia weight adjustment factor of particle i at the t-th iteration. Let x_i(t−2) denote the position of particle i at the (t−2)-th iteration and x_i(t−1) the position of particle i at the (t−1)-th iteration. Define α_i(t) as the advance attribute value of particle i at the t-th iteration and β_i(t) as the optimizing attribute value of particle i at the t-th iteration, where the expression of α_i(t) is given as an image in the original and β_i(t) = ρ_5(p_i(t), p_i(t−1)). Here d(x_i(t−1), g(t)) denotes the Euclidean distance between the position x_i(t−1) and the global optimal solution g(t), d(x_i(t), g(t)) denotes the Euclidean distance between the position x_i(t) and g(t), d(x_i(t−1), x_i(t−2)) denotes the Euclidean distance between the positions x_i(t−1) and x_i(t−2), and d(x_i(t), x_i(t−2)) denotes the Euclidean distance between the positions x_i(t) and x_i(t−2); a third value function and a fourth value function built from these distances enter the expression of α_i(t), their conditions and values being given as images in the original. p_i(t−1) denotes the individual optimal solution of particle i at the (t−1)-th iteration, and ρ_5(p_i(t), p_i(t−1)) denotes a fifth value function: ρ_5(p_i(t), p_i(t−1)) = 1 when p_i(t) = p_i(t−1), and ρ_5(p_i(t), p_i(t−1)) = 0 when p_i(t) ≠ p_i(t−1).
Let τ_i(t) denote the most recent iteration number, at or before the t-th iteration, at which the inertia weight adjustment factor of particle i was equal to −1. The expression of δ_i(t) is then given as an image in the original (formula image not reproduced); it involves a sixth value function and a given threshold M(δ): when the sum, accumulated since τ_i(t), of the particle's advance attribute values and optimizing attribute values is smaller than the threshold M(δ), δ_i(t) = 0, i.e. the particle is updated with the conventional inertia weight value, and otherwise δ_i(t) takes the value −1.
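To make the counting rules concrete, the per-particle bookkeeping can be sketched as follows in Python. Only the increments of the advance and optimizing attribute values follow the text; the linear decrease of ω, the size of the perturbation applied when δ_i(t) = −1, and the values ω_max = 0.9, ω_min = 0.4 are assumptions, since the closed-form expressions are given only as images.

```python
import numpy as np

def advance_increment(x_t, x_prev, x_prev2, gbest):
    """alpha increment: +1 when the particle doubles back, i.e. it is now farther
    from the global best than at the previous iteration yet closer to its
    position of two iterations ago."""
    farther_from_gbest = np.linalg.norm(x_t - gbest) > np.linalg.norm(x_prev - gbest)
    closer_to_older = np.linalg.norm(x_t - x_prev2) < np.linalg.norm(x_prev - x_prev2)
    return int(farther_from_gbest and closer_to_older)

def optimizing_increment(pbest_t, pbest_prev):
    """beta increment: +1 when the individual best did not change, i.e. the
    particle may be trapped in a local optimum."""
    return int(np.allclose(pbest_t, pbest_prev))

def inertia_weight(t, t_max, counter, m_delta, w_max=0.9, w_min=0.4, bump=0.2):
    """Stand-in for omega_i(t): a conventional linear decrease, perturbed upward
    when the accumulated returns plus stagnations since the last perturbation
    reach the threshold M(delta). The bump size is an assumption."""
    w = w_max - (w_max - w_min) * t / t_max
    if counter >= m_delta:                 # delta_i(t) = -1: adjust and reset the counter
        return min(w_max, w + bump), 0
    return w, counter                      # delta_i(t) = 0: conventional value
```

In such a sketch, `counter` would be increased each iteration by `advance_increment(...) + optimizing_increment(...)` and reset whenever the perturbation fires.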
The beneficial effects created by the invention are as follows:
The trend of a financial time series is predicted with a BP neural network, and a particle swarm algorithm is adopted to optimize the initial weights and thresholds of the BP neural network used in the financial prediction module, with the update rule of the conventional particle swarm improved. Whereas the conventional update lets each particle learn locally from its own individual optimal solution, this preferred embodiment selects only part of the better individual optimal solutions as local learning solutions, defining the local classification number M(t) and a local detection coefficient for each individual optimal solution. Through the local detection coefficient of an individual optimal solution, the probability that it becomes a local learning solution is determined by jointly considering the density of particles in its local neighborhood, its number of stagnations, and its fitness function value: when the particle density in the local neighborhood of an individual optimal solution is small, the search of that neighborhood is strengthened, i.e. the probability that the solution becomes a local learning solution is increased; when an individual optimal solution has stagnated many times, its position is more likely to be a local optimum, so the probability that it becomes a local learning solution is reduced; and when the fitness function value of an individual optimal solution is smaller, the solution is better, so the probability that it becomes a local learning solution is increased. The value of the local classification number increases with the iteration count: in the early iterations, fewer representative individual optimal solutions are selected from the swarm as objects of local learning, which accelerates the convergence of the particle swarm algorithm, while in the later iterations more representative individual optimal solutions are selected, which strengthens the local search capability of the algorithm. In summary, during the update of the particle swarm algorithm this preferred embodiment introduces the local learning solution in place of the individual optimal solution of the conventional update formula as the object of local learning and, through the defined local classification number and the local detection coefficients of the individual optimal solutions, selects a part of the better individual optimal solutions as objects of local learning, thereby accelerating the convergence of the algorithm while preserving its optimization capability.
An inertia weight adjustment factor is also introduced into the inertia weight factor of the particle swarm algorithm. When a particle doubles back during the update, the convergence speed of the algorithm is affected; when the individual optimal solution of a particle does not change during the update, the particle may be trapped in a local optimum, which affects the optimization performance. To address these two situations, the advance attribute value and the optimizing attribute value of the particle in the current iteration are introduced into the defined inertia weight adjustment factor. The advance attribute value measures the update route of the particle during the iterations: compared with the fitness function value, the Euclidean distance reflects the distance relationship between particle positions more directly, and by comparing the positions of the particle over three consecutive iterations with the current global optimal solution it is judged whether the particle has advanced toward the current global optimal solution after the iterative update. When the position of the particle in the current iteration is farther from the current global optimal solution than its position in the previous iteration, yet closer to its position of two iterations earlier, the particle has doubled back during the update and its advance attribute value is increased by 1; the advance attribute value thus records the returns of the particle during the update. The optimizing attribute value measures the optimization performance of the particle during the update: when the individual optimal solution of the particle does not change after an iterative update, the particle may be trapped in a local optimum and its optimizing attribute value is increased by 1; the optimizing attribute value thus records the number of times the particle may have been trapped in a local optimum. In summary, through the advance attribute value and the optimizing attribute value of the particle, the inertia weight adjustment factor records the number of times the particle has doubled back or been trapped in a local optimum during the iterative update; when the sum of these counts is smaller than the given threshold, the inertia weight adjustment factor of the particle equals 0, i.e. the particle is updated with the conventional inertia weight factor value.
Drawings
The invention is further described with the aid of the accompanying drawings, in which, however, the embodiments do not constitute any limitation to the invention, and for a person skilled in the art, without inventive effort, further drawings may be derived from the following figures.
FIG. 1 is a schematic diagram of the present invention.
Detailed Description
The invention is further described with reference to the following examples.
Referring to fig. 1, the financial prediction system based on blockchain and artificial intelligence of this embodiment includes a data acquisition module, a data preprocessing module, a blockchain storage module, and a financial prediction module. The data acquisition module acquires a financial time series and inputs it into the data preprocessing module for processing; the data preprocessing module removes noise data from the financial time series and transmits the preprocessed series to the blockchain storage module for storage; the financial prediction module retrieves the financial time series from the blockchain storage module and predicts the trend of the financial data according to that series. The financial prediction module predicts the trend of the financial time series with a BP (back-propagation) neural network, and a particle swarm algorithm is adopted to optimize the initial weights and thresholds of the BP neural network used by the financial prediction module. The particle swarm algorithm is defined to update in the following way:
v_i(t+1) = ω_i(t)·v_i(t) + c_1·r_1·(p'_i(t) − x_i(t)) + c_2·r_2·(g(t) − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
In the formulas, ω_i(t) denotes the inertia weight factor of particle i at the t-th iteration; v_i(t+1) and x_i(t+1) denote the step size and position of particle i at the (t+1)-th iteration; v_i(t) and x_i(t) denote the step size and position of particle i at the t-th iteration; c_1 and c_2 are learning factors; r_1 and r_2 are random numbers between 0 and 1; g(t) denotes the global optimal solution; and p'_i(t) denotes the local learning solution of particle i at the t-th iteration. The value of p'_i(t) is determined in the following manner:
Let P(t) denote the set of individual optimal solutions of the particles in the swarm at the t-th iteration, P(t) = {p_i(t), i = 1, 2, …, N}, where p_i(t) denotes the individual optimal solution of particle i at the t-th iteration and N denotes the number of particles in the swarm. Let M(t) denote the local classification number of the particle swarm algorithm at the t-th iteration; its expression is given as an image in the original publication (formula image not reproduced).
Define the local detection coefficient of the individual optimal solution p_i(t) as ε_i(t); the expression of ε_i(t) is given as an image in the original publication (formula image not reproduced).
In the formula, x_l(t) denotes the position of particle l at the t-th iteration; ρ_1(p_i(t), x_l(t)) denotes a first value function, and d(p_i(t), x_l(t)) denotes the Euclidean distance between the individual optimal solution p_i(t) and the position x_l(t): when d(p_i(t), x_l(t)) < D, ρ_1(p_i(t), x_l(t)) = 1, and when d(p_i(t), x_l(t)) > D, ρ_1(p_i(t), x_l(t)) = 0, where D is a given distance threshold whose expression is given as an image in the original (formula image not reproduced). Further, p_j(t) denotes the individual optimal solution of particle j at the t-th iteration; d(p_i(t), p_j(t)) denotes the Euclidean distance between the individual optimal solutions p_i(t) and p_j(t); p_i(e) denotes the individual optimal solution of particle i at the e-th iteration; ρ_2(p_i(t), p_i(e)) denotes a second value function, with ρ_2(p_i(t), p_i(e)) = 1 when p_i(t) = p_i(e) and ρ_2(p_i(t), p_i(e)) = 0 when p_i(t) ≠ p_i(e); h_i(t) denotes the fitness function value of particle i at the t-th iteration; and h_max(t) and h_min(t) denote the maximum and minimum fitness function values of the particles in the swarm at the t-th iteration.
The individual optimal solutions in the set P(t) are sorted from small to large by the value of their local detection coefficients, the first M(t) individual optimal solutions are selected as candidate local learning solutions, and the selected M(t) candidate local learning solutions form the set P'(t). For each particle in the swarm, the candidate local learning solution in P'(t) closest to that particle is selected as its local learning solution (the selection formulas are given as images in the original and are not reproduced), where p_r(t) denotes the individual optimal solution of particle r at the t-th iteration and d(x_i(t), p_r(t)) denotes the Euclidean distance between the position x_i(t) and the individual optimal solution p_r(t).
This preferred embodiment adopts a particle swarm algorithm to optimize the initial weights and thresholds of the BP neural network used in the financial prediction module, and improves the update rule of the conventional particle swarm. Whereas the conventional update lets each particle learn locally from its own individual optimal solution, this embodiment selects only part of the better individual optimal solutions as local learning solutions, defining the local classification number M(t) and a local detection coefficient for each individual optimal solution. Through the local detection coefficient of an individual optimal solution, the probability that it becomes a local learning solution is determined by jointly considering the density of particles in its local neighborhood, its number of stagnations, and its fitness function value: when the particle density in the local neighborhood of an individual optimal solution is small, the search of that neighborhood is strengthened, i.e. the probability that the solution becomes a local learning solution is increased; when an individual optimal solution has stagnated many times, its position is more likely to be a local optimum, so the probability that it becomes a local learning solution is reduced; and when the fitness function value of an individual optimal solution is smaller, the solution is better, so the probability that it becomes a local learning solution is increased. The value of the local classification number increases with the iteration count: in the early iterations, fewer representative individual optimal solutions are selected from the swarm as objects of local learning, which accelerates the convergence of the particle swarm algorithm, while in the later iterations more representative individual optimal solutions are selected, which strengthens the local search capability of the algorithm. In summary, during the update of the particle swarm algorithm this preferred embodiment introduces the local learning solution in place of the individual optimal solution of the conventional update formula as the object of local learning and, through the defined local classification number and the local detection coefficients of the individual optimal solutions, selects a part of the better individual optimal solutions as objects of local learning, thereby accelerating the convergence of the algorithm while preserving its optimization capability.
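As an illustration of how the swarm connects to the network, one way to map a particle onto the BP network's initial parameters is sketched below in Python. The flat "weights then biases" encoding, the single hidden layer and the tanh activation are assumptions, since the patent only states that the particle swarm optimizes the initial weights and thresholds of the BP network; after the swarm terminates, the global best particle would be decoded this way and then refined by ordinary back-propagation training.

```python
import numpy as np

def decode_particle(position, n_in, n_hidden, n_out):
    """Split one flat particle vector into the weights and thresholds (biases)
    of a single-hidden-layer BP network (assumed layout)."""
    i = 0
    W1 = position[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = position[i:i + n_hidden];                                  i += n_hidden
    W2 = position[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = position[i:i + n_out]
    return W1, b1, W2, b2

def forward(X, W1, b1, W2, b2):
    """One forward pass: tanh hidden layer (assumed activation) and linear
    output, e.g. a one-step-ahead prediction of the financial time series."""
    H = np.tanh(X @ W1 + b1)
    return H @ W2 + b2
```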
Preferably, the fitness function h of the particle swarm algorithm is defined as:
(formula image not reproduced)
In the formula, y_u denotes the output value of the u-th training sample, o_u denotes the target value of the u-th training sample, and M denotes the number of training samples.
The smaller the value of the fitness function of the particle population defined in the preferred embodiment is, the better the optimization result of the particle is.
Preferably, the expression of the inertia weight factor ω_i(t) of particle i at the t-th iteration is given as an image in the original publication (formula image not reproduced). In it, ω_max and ω_min denote the maximum and minimum inertia weight factors respectively, t denotes the current iteration number, T_max denotes the maximum number of iterations, and δ_i(t) denotes the inertia weight adjustment factor of particle i at the t-th iteration. Let x_i(t−2) denote the position of particle i at the (t−2)-th iteration and x_i(t−1) the position of particle i at the (t−1)-th iteration. Define α_i(t) as the advance attribute value of particle i at the t-th iteration and β_i(t) as the optimizing attribute value of particle i at the t-th iteration, where the expression of α_i(t) is given as an image in the original and β_i(t) = ρ_5(p_i(t), p_i(t−1)). Here d(x_i(t−1), g(t)) denotes the Euclidean distance between the position x_i(t−1) and the global optimal solution g(t), d(x_i(t), g(t)) denotes the Euclidean distance between the position x_i(t) and g(t), d(x_i(t−1), x_i(t−2)) denotes the Euclidean distance between the positions x_i(t−1) and x_i(t−2), and d(x_i(t), x_i(t−2)) denotes the Euclidean distance between the positions x_i(t) and x_i(t−2); a third value function and a fourth value function built from these distances enter the expression of α_i(t), their conditions and values being given as images in the original. p_i(t−1) denotes the individual optimal solution of particle i at the (t−1)-th iteration, and ρ_5(p_i(t), p_i(t−1)) denotes a fifth value function: ρ_5(p_i(t), p_i(t−1)) = 1 when p_i(t) = p_i(t−1), and ρ_5(p_i(t), p_i(t−1)) = 0 when p_i(t) ≠ p_i(t−1).
Let τ_i(t) denote the most recent iteration number, at or before the t-th iteration, at which the inertia weight adjustment factor of particle i was equal to −1. The expression of δ_i(t) is then given as an image in the original (formula image not reproduced); it involves a sixth value function and a given threshold M(δ): when the sum, accumulated since τ_i(t), of the particle's advance attribute values and optimizing attribute values is smaller than the threshold M(δ), δ_i(t) = 0, i.e. the particle is updated with the conventional inertia weight value, and otherwise δ_i(t) takes the value −1.
In this preferred embodiment, an inertia weight adjustment factor is introduced into the inertia weight factor of the particle swarm algorithm. When a particle doubles back during the update, the convergence speed of the algorithm is affected; when the individual optimal solution of a particle does not change during the update, the particle may be trapped in a local optimum, which affects the optimization performance of the algorithm. To address these two situations, the advance attribute value and the optimizing attribute value of the particle in the current iteration are introduced into the defined inertia weight adjustment factor. The advance attribute value measures the update route of the particle during the iterations: compared with the fitness function value, the Euclidean distance reflects the distance relationship between particle positions more directly, and by comparing the positions of the particle over three consecutive iterations with the current global optimal solution it is judged whether the particle has advanced toward the current global optimal solution after the iterative update. When the position of the particle in the current iteration is farther from the current global optimal solution than its position in the previous iteration, yet closer to its position of two iterations earlier, the particle has doubled back during the update and its advance attribute value is increased by 1; the advance attribute value thus records the returns of the particle during the update. The optimizing attribute value measures the optimization performance of the particle during the update: when the individual optimal solution of the particle does not change after an iterative update, the particle may be trapped in a local optimum and its optimizing attribute value is increased by 1; the optimizing attribute value thus records the number of times the particle may have been trapped in a local optimum. In summary, through the advance attribute value and the optimizing attribute value of the particle, the inertia weight adjustment factor records the number of times the particle has doubled back or been trapped in a local optimum during the iterative update; when the sum of these counts is smaller than the given threshold, the inertia weight adjustment factor of the particle equals 0, i.e. the particle is updated with the conventional inertia weight factor value.
Preferably, the data preprocessing module is configured to remove noise data from the financial time series. Let F denote the financial time series to be processed, whose financial data are processed in sequence; let f(k) denote the financial data currently to be processed, i.e. the k-th financial data in the financial time series F, and let Δf(k) be a data threshold whose expression is given as an image in the original (formula image not reproduced). A reference data sequence F(k) corresponding to the financial data f(k) is determined from the given data threshold Δf(k); let the reference data sequence determined in this way be F(k) = {f(k−l+1), f(k−l+2), …, f(k−1)}, where f(k−l+1), f(k−l+2) and f(k−1) denote the (k−l+1)-th, (k−l+2)-th and (k−1)-th financial data in the financial time series F respectively, and (l−1) denotes the amount of financial data in the reference data sequence F(k).
Let f(a) and f(b) denote financial data in the reference data sequence F(k), where f(a) is the a-th and f(b) the b-th financial data in the financial time series F and a ≠ b; then any two financial data f(a) and f(b) in the reference data sequence F(k) satisfy d(f(a), f(b)) ≤ Δf(k), i.e. the distance between them does not exceed the data threshold.
Let the mean of the financial data in the reference data sequence F(k) be denoted by the symbol given as an image in the original (referred to below as the reference mean). Let F'(k) denote the first reference data subsequence of the financial data f(k), F'(k) = {f(k−m'), f(k−m'+1), …, f(k)}, where f(k−m') denotes the (k−m')-th and f(k−m'+1) the (k−m'+1)-th financial data in the financial time series F; the value of m' is determined in the following manner:
(1) When the financial data f(k) satisfies f(k) ≥ the reference mean, the value of m' is determined by the selection formula given as an image in the original, where θ(k) denotes the sequence detection function corresponding to the case f(k) ≥ the reference mean, f(k−s) denotes the (k−s)-th financial data in the financial time series F, and the first comparison function corresponding to f(k−s) is defined by the conditions given as images in the original (not reproduced). The largest value of m for which the sequence detection function θ(k) equals 1 is selected as m'.
(2) When the financial data f(k) satisfies f(k) < the reference mean, the value of m' is determined by the selection formula given as an image in the original, where a second sequence detection function corresponds to the case f(k) < the reference mean, and the second comparison function corresponding to f(k−s) is defined by the conditions given as images in the original (not reproduced). The largest value of m for which this sequence detection function equals 1 is selected as m'.
Let F''(k) denote the second reference data subsequence of the financial data f(k); its elements are drawn from the middle portion of the first reference data subsequence F'(k) (the defining expressions and indices are given as images in the original and are not reproduced). The first detection coefficient of the financial data f(k) in the first reference data subsequence F'(k) and the second reference data subsequence F''(k) is defined as Y_1(k); its expression is given as an image in the original (formula images not reproduced). In it, Δf(k−m') denotes the standard deviation of the financial data f(k−m') in the first reference data subsequence F'(k), Δf(k) denotes the standard deviation of the financial data f(k) in the first reference data subsequence F'(k), the corresponding image symbols denote the standard deviations of the initial and ending financial data of the second reference data subsequence F''(k), and ⌈·⌉ denotes rounding up.
The second detection coefficient of the financial data f(k) in the first reference data subsequence F'(k) and the second reference data subsequence F''(k) is defined as Y_2(k); its expression is given as an image in the original (formula image not reproduced). In it, the corresponding image symbols denote the mean of the financial data in the first reference data subsequence F'(k) and the mean of the financial data in the second reference data subsequence F''(k).
The anomaly detection function of the financial data f(k) in the first reference data subsequence F'(k) and the second reference data subsequence F''(k) is defined as Y(k); its expression is given as an image in the original (formula image not reproduced). When the value of the anomaly detection function satisfies Y(k) ≤ 0, the financial data f(k) is judged to be normal financial data and its value is kept unchanged; when Y(k) > 0, the financial data f(k) is judged to be abnormal data and its value is replaced according to the expression given as an image in the original, where f(k−c) denotes the (k−c)-th financial data in the financial time series F.
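Because the detection coefficients, thresholds and replacement rule above are given only as images, only the overall shape of the procedure can be sketched; the z-score test and the median replacement in the Python sketch below are stand-ins chosen for illustration, not the patent's formulas (sliding reference window, deviation check, replacement of points flagged as noise).

```python
import numpy as np

def denoise(series, window=12, z_thresh=3.0):
    """Stand-in for the preprocessing flow: build a reference window for each
    point, flag the point as noise when it deviates too strongly from the
    window (stand-in for Y(k) > 0), and replace flagged points from recent
    normal data (stand-in for the image replacement rule)."""
    cleaned = np.asarray(series, dtype=float).copy()
    for k in range(window, len(cleaned)):
        ref = cleaned[k - window:k]                  # reference data for f(k)
        mu, sigma = ref.mean(), ref.std() + 1e-12
        if abs(cleaned[k] - mu) > z_thresh * sigma:  # stand-in anomaly test
            cleaned[k] = np.median(ref)              # stand-in replacement
    return cleaned
```

In the system's data flow, a cleaned series produced this way would then be handed to the blockchain storage module and later retrieved by the financial prediction module.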
This preferred embodiment removes noise data from the financial time series by detecting the financial data in sequence and judging whether each datum is noise. When a financial datum is detected, a given data threshold is used to determine its reference data sequence, and the distance between any two financial data in the reference data sequence is smaller than or equal to the data threshold, which guarantees the similarity of the financial data in the reference data sequence. According to the relation between the financial datum to be detected and the mean of the financial data in the reference data sequence, part of the financial data in the reference data sequence, together with the financial datum to be detected, forms the first reference data subsequence, which guarantees the uniformity of the trend of that subsequence; part of the financial data in the middle of the first reference data subsequence is then selected to form the second reference data subsequence of the financial datum to be detected. When the financial datum to be detected is normal, the first and second reference data subsequences so determined have similar trends. Based on this property, a first and a second detection coefficient of the financial datum in the two subsequences are defined. The first detection coefficient judges the trend similarity of the two subsequences by comparing the standard deviation of the initial financial data of the first subsequence with that of the second subsequence, and the standard deviation of the ending financial data of the first subsequence (i.e. of the financial datum to be detected) with that of the second subsequence. The second detection coefficient judges the trend similarity by comparing the mean of the financial data in the first subsequence with the mean of the financial data in the second subsequence. An anomaly detection function of the financial datum to be detected is then defined, which compares the trend similarity between the first and second reference data subsequences through the first and second detection coefficients and thereby judges whether the financial datum is noise data. Considering that the trend similarity between the two subsequences decreases when the head and tail data of the second subsequence lie farther from the head and tail financial data of the first subsequence, this preferred embodiment introduces a sine-form correction coefficient into the anomaly detection function to correct the first detection coefficient, making the anomaly detection function more adaptive and thereby effectively improving the detection precision of noise data.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit its scope of protection. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of those technical solutions.

Claims (5)

1. A financial prediction system based on blockchain and artificial intelligence, characterized by comprising a data acquisition module, a data preprocessing module, a blockchain storage module and a financial prediction module, wherein the data acquisition module is used for acquiring a financial time series and inputting the acquired financial time series into the data preprocessing module for processing; the data preprocessing module is used for removing noise data from the financial time series and transmitting the preprocessed financial time series to the blockchain storage module for storage; the financial prediction module is used for retrieving the financial time series from the blockchain storage module and predicting the trend of financial data according to the financial time series; the financial prediction module predicts the trend of the financial time series by adopting a BP (back-propagation) neural network; a particle swarm algorithm is adopted to optimize the initial weights and thresholds of the BP neural network adopted by the financial prediction module; and the particle swarm algorithm is defined to update in the following way:
v_i(t+1) = ω_i(t)·v_i(t) + c_1·r_1·(p'_i(t) − x_i(t)) + c_2·r_2·(g(t) − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)
in the formulas, ω_i(t) denotes the inertia weight factor of particle i at the t-th iteration; v_i(t+1) and x_i(t+1) denote the step size and position of particle i at the (t+1)-th iteration; v_i(t) and x_i(t) denote the step size and position of particle i at the t-th iteration; c_1 and c_2 are learning factors; r_1 and r_2 are random numbers between 0 and 1; g(t) denotes the global optimal solution; and p'_i(t) denotes the local learning solution of particle i at the t-th iteration, the value of p'_i(t) being determined in the following manner:
let P(t) denote the set of individual optimal solutions of the particles in the swarm at the t-th iteration, P(t) = {p_i(t), i = 1, 2, …, N}, where p_i(t) denotes the individual optimal solution of particle i at the t-th iteration and N denotes the number of particles in the swarm; let M(t) denote the local classification number of the particle swarm algorithm at the t-th iteration, whose expression is given as an image in the original publication (formula image not reproduced);
define the local detection coefficient of the individual optimal solution p_i(t) as ε_i(t), whose expression is given as an image in the original publication (formula image not reproduced);
in the formula, x_l(t) denotes the position of particle l at the t-th iteration; ρ_1(p_i(t), x_l(t)) denotes a first value function, and d(p_i(t), x_l(t)) denotes the Euclidean distance between the individual optimal solution p_i(t) and the position x_l(t): when d(p_i(t), x_l(t)) < D, ρ_1(p_i(t), x_l(t)) = 1, and when d(p_i(t), x_l(t)) > D, ρ_1(p_i(t), x_l(t)) = 0, where D is a given distance threshold whose expression is given as an image in the original (formula image not reproduced); p_j(t) denotes the individual optimal solution of particle j at the t-th iteration; d(p_i(t), p_j(t)) denotes the Euclidean distance between the individual optimal solutions p_i(t) and p_j(t); p_i(e) denotes the individual optimal solution of particle i at the e-th iteration; ρ_2(p_i(t), p_i(e)) denotes a second value function, with ρ_2(p_i(t), p_i(e)) = 1 when p_i(t) = p_i(e) and ρ_2(p_i(t), p_i(e)) = 0 when p_i(t) ≠ p_i(e); h_i(t) denotes the fitness function value of particle i at the t-th iteration; and h_max(t) and h_min(t) denote the maximum and minimum fitness function values of the particles in the swarm at the t-th iteration;
the individual optimal solutions in the set P(t) are sorted from small to large by the value of their local detection coefficients, the first M(t) individual optimal solutions are selected as candidate local learning solutions, and the selected M(t) candidate local learning solutions form the set P'(t); the candidate local learning solution in P'(t) closest to each particle in the swarm is selected as that particle's local learning solution (the selection formulas are given as images in the original and are not reproduced), where p_r(t) denotes the individual optimal solution of particle r at the t-th iteration and d(x_i(t), p_r(t)) denotes the Euclidean distance between the position x_i(t) and the individual optimal solution p_r(t).
2. The system according to claim 1, wherein the fitness function h of the particle swarm algorithm is defined as:
(formula image not reproduced)
in the formula, y_u denotes the output value of the u-th training sample, o_u denotes the target value of the u-th training sample, and M denotes the number of training samples.
3. The system according to claim 2, wherein the expression of the inertia weight factor ω_i(t) of particle i at the t-th iteration is given as an image in the original publication (formula image not reproduced); in it, ω_max and ω_min denote the maximum and minimum inertia weight factors respectively, t denotes the current iteration number, T_max denotes the maximum number of iterations, and δ_i(t) denotes the inertia weight adjustment factor of particle i at the t-th iteration; let x_i(t−2) denote the position of particle i at the (t−2)-th iteration and x_i(t−1) the position of particle i at the (t−1)-th iteration; α_i(t) denotes the advance attribute value of particle i at the t-th iteration and β_i(t) the optimizing attribute value of particle i at the t-th iteration, where the expression of α_i(t) is given as an image in the original and β_i(t) = ρ_5(p_i(t), p_i(t−1)); d(x_i(t−1), g(t)) denotes the Euclidean distance between the position x_i(t−1) and the global optimal solution g(t), d(x_i(t), g(t)) the Euclidean distance between the position x_i(t) and g(t), d(x_i(t−1), x_i(t−2)) the Euclidean distance between the positions x_i(t−1) and x_i(t−2), and d(x_i(t), x_i(t−2)) the Euclidean distance between the positions x_i(t) and x_i(t−2); a third value function and a fourth value function built from these distances enter the expression of α_i(t), their conditions and values being given as images in the original; p_i(t−1) denotes the individual optimal solution of particle i at the (t−1)-th iteration, and ρ_5(p_i(t), p_i(t−1)) denotes a fifth value function, with ρ_5(p_i(t), p_i(t−1)) = 1 when p_i(t) = p_i(t−1) and ρ_5(p_i(t), p_i(t−1)) = 0 when p_i(t) ≠ p_i(t−1);
let τ_i(t) denote the most recent iteration number, at or before the t-th iteration, at which the inertia weight adjustment factor of particle i was equal to −1; the expression of δ_i(t) is then given as an image in the original (formula image not reproduced), involving a sixth value function and a given threshold M(δ): when the sum, accumulated since τ_i(t), of the particle's advance attribute values and optimizing attribute values is smaller than M(δ), δ_i(t) = 0, and otherwise δ_i(t) takes the value −1.
4. The system according to claim 3, wherein the data preprocessing module is configured to remove noise data from the financial time series; let F denote the financial time series to be processed, whose financial data are processed in sequence; let f(k) denote the financial data currently to be processed, i.e. the k-th financial data in the financial time series F, and let Δf(k) be a data threshold whose expression is given as an image in the original (formula image not reproduced); a reference data sequence F(k) corresponding to the financial data f(k) is determined from the given data threshold Δf(k), F(k) = {f(k−l+1), f(k−l+2), …, f(k−1)}, where f(k−l+1), f(k−l+2) and f(k−1) denote the (k−l+1)-th, (k−l+2)-th and (k−1)-th financial data in the financial time series F respectively, and (l−1) denotes the amount of financial data in the reference data sequence F(k);
let f(a) and f(b) denote financial data in the reference data sequence F(k), where f(a) is the a-th and f(b) the b-th financial data in the financial time series F and a ≠ b; then any two financial data f(a) and f(b) in the reference data sequence F(k) satisfy d(f(a), f(b)) ≤ Δf(k);
let the mean of the financial data in the reference data sequence F(k) be denoted by the symbol given as an image in the original (the reference mean below); let F'(k) denote the first reference data subsequence of the financial data f(k), F'(k) = {f(k−m'), f(k−m'+1), …, f(k)}, where f(k−m') denotes the (k−m')-th and f(k−m'+1) the (k−m'+1)-th financial data in the financial time series F, and the value of m' is determined in the following manner:
(1) when the financial data f(k) satisfies f(k) ≥ the reference mean, the value of m' is determined by the selection formula given as an image in the original, where θ(k) denotes the sequence detection function corresponding to the case f(k) ≥ the reference mean, f(k−s) denotes the (k−s)-th financial data in the financial time series F, and the first comparison function corresponding to f(k−s) is defined by the conditions given as images in the original; the largest value of m for which the sequence detection function θ(k) equals 1 is selected as m';
(2) when the financial data f(k) satisfies f(k) < the reference mean, the value of m' is determined by the selection formula given as an image in the original, where a second sequence detection function corresponds to the case f(k) < the reference mean and the second comparison function corresponding to f(k−s) is defined by the conditions given as images in the original; the largest value of m for which this sequence detection function equals 1 is selected as m'.
5. The system of claim 4, wherein:
let F″(k) denote a second reference data subsequence of the financial data f(k), formed from financial data of the financial time series F whose indices are written with the round-up operator ⌈·⌉, where ⌈·⌉ denotes rounding up (the elements of F″(k) and their index expressions are given only as formula images in the source);
the first detection coefficient of the financial data f(k) in the first reference data subsequence F′(k) and the second reference data subsequence F″(k) is defined as Y1(k); the expression of Y1(k) is given only as formula images in the source, and is written in terms of Δf(k−m′), the standard deviation of the financial data f(k−m′) in the first reference data subsequence F′(k), Δf(k), the standard deviation of the financial data f(k) in the first reference data subsequence F′(k), and the standard deviations of the corresponding financial data of F″(k) in the second reference data subsequence F″(k);
the second detection coefficient of the financial data f(k) in the first reference data subsequence F′(k) and the second reference data subsequence F″(k) is defined as Y2(k); the expression of Y2(k) is given only as a formula image in the source, and is written in terms of the mean of the fusion data in the first reference data subsequence F′(k) and the mean of the fusion data in the second reference data subsequence F″(k);
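Since the claimed expression of Y2(k) is given only as a formula image, the sketch below shows one assumed form of a detection coefficient built from the two subsequence means; it is illustrative only and not the patented formula.

```python
# Illustrative only: the claimed expression for the second detection coefficient
# Y2(k) survives only as a formula image. The form below -- how much farther f(k)
# sits from the mean of the first subsequence F'(k) than from the mean of the
# second subsequence F''(k) -- is an assumed stand-in, not the patented formula.
from statistics import mean
from typing import List


def second_detection_coefficient(f_k: float,
                                 first_sub: List[float],
                                 second_sub: List[float]) -> float:
    """Assumed Y2(k): difference of f(k)'s distances to the two subsequence means."""
    return abs(f_k - mean(first_sub)) - abs(f_k - mean(second_sub))


if __name__ == "__main__":
    # f(k) lies farther from the mean of the longer subsequence than from the
    # mean of the shorter one, so this assumed coefficient comes out positive.
    print(second_detection_coefficient(10.3, [10.0, 10.1, 10.2, 10.3],
                                       [10.2, 10.3]))  # -> approximately 0.1
```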
the anomaly detection function of the financial data f(k) in the first reference data subsequence F′(k) and the second reference data subsequence F″(k) is defined as Y(k); the expression of Y(k) is given only as a formula image in the source;
when the value of the anomaly detection function satisfies Y(k) ≤ 0, the financial data f(k) is judged to be normal financial data and its value is kept unchanged; when the value of the anomaly detection function satisfies Y(k) > 0, the financial data f(k) is judged to be abnormal data, and its value is replaced according to a formula (given only as a formula image in the source) expressed in terms of f(k−c), where f(k−c) denotes the (k−c)-th financial data in the financial time series F.
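The decision rule stated in the claim (keep f(k) when Y(k) ≤ 0, replace it when Y(k) > 0) can be sketched as follows. The anomaly score passed in and the mean-of-recent-data replacement are illustrative assumptions, since the claimed expressions for Y(k) and for the replacement value are given only as formula images.

```python
# Sketch of the normal/abnormal decision actually stated in the claim:
# Y(k) <= 0 keeps f(k); Y(k) > 0 flags f(k) as abnormal and overwrites it.
# The claimed expressions for Y(k) and for the replacement value survive only
# as formula images, so anomaly_score and the mean-of-recent-data replacement
# used here are illustrative assumptions.
from typing import Callable, List


def correct_series(f: List[float],
                   anomaly_score: Callable[[List[float], int], float],
                   c: int = 3) -> List[float]:
    """Return a copy of the series with every datum whose anomaly detection
    function is positive replaced by the mean of its c predecessors."""
    corrected = list(f)
    for k in range(c, len(corrected)):
        if anomaly_score(corrected, k) > 0:        # Y(k) > 0: abnormal data
            window = corrected[k - c:k]            # f(k-c), ..., f(k-1)
            corrected[k] = sum(window) / len(window)
        # Y(k) <= 0: normal data, value kept unchanged
    return corrected


if __name__ == "__main__":
    # Toy score: positive when f(k) deviates from the previous datum by > 2.0.
    score = lambda s, k: abs(s[k] - s[k - 1]) - 2.0
    print(correct_series([10.0, 10.1, 10.2, 50.0, 10.3, 10.4], score))
    # -> [10.0, 10.1, 10.2, 10.1, 10.3, 10.4]
```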
CN202010800385.5A 2020-08-11 2020-08-11 Financial prediction system based on block chain and artificial intelligence Active CN111930844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010800385.5A CN111930844B (en) 2020-08-11 2020-08-11 Financial prediction system based on block chain and artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010800385.5A CN111930844B (en) 2020-08-11 2020-08-11 Financial prediction system based on block chain and artificial intelligence

Publications (2)

Publication Number Publication Date
CN111930844A CN111930844A (en) 2020-11-13
CN111930844B true CN111930844B (en) 2021-09-24

Family

ID=73306597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010800385.5A Active CN111930844B (en) 2020-08-11 2020-08-11 Financial prediction system based on block chain and artificial intelligence

Country Status (1)

Country Link
CN (1) CN111930844B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532630B (en) * 2020-11-30 2021-09-24 广州瘦吧网络科技有限公司 Gene big data disease prediction system based on algorithm, 5G and block chain
CN112530537B (en) * 2020-12-15 2021-06-25 重庆中联信息产业有限责任公司 Big health management platform based on algorithm, medical image and block chain
CN113065710A (en) * 2021-04-09 2021-07-02 深圳市小金象科技有限公司 Financial prediction system based on artificial intelligence and block chain
CN112927085B (en) * 2021-04-14 2021-10-26 广州经传多赢投资咨询有限公司 Stock risk early warning system based on block chain, big data and algorithm

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107818224A (en) * 2017-11-13 2018-03-20 浙江大学 Sea clutter optimal soft survey instrument and method based on drosophila optimized algorithm Optimization of Wavelet neutral net
CN107958073B (en) * 2017-12-07 2020-07-17 电子科技大学 Particle cluster algorithm optimization-based color image retrieval method
CN108388702A (en) * 2018-01-30 2018-08-10 河南工程学院 Engineering ceramics electrical discharge machining effect prediction method based on PSO neural networks
CN109063938B (en) * 2018-10-30 2021-11-26 浙江工商大学 Air quality prediction method based on PSODE-BP neural network
CN110668276B (en) * 2019-08-29 2021-02-09 浙江理工大学 Method for predicting elevator fault based on BP neural network optimized by PSO
CN111081022A (en) * 2019-12-30 2020-04-28 宁波财经学院 Traffic flow prediction method based on particle swarm optimization neural network
CN111798061B (en) * 2020-07-08 2020-12-29 深圳市荣亿达科技发展有限公司 Financial data prediction system based on block chain and cloud computing
CN111680107B (en) * 2020-08-11 2020-12-08 上海竞动科技有限公司 Financial prediction system based on artificial intelligence and block chain

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102625324A (en) * 2012-03-08 2012-08-01 上海大学 Wireless optical fiber sensor network deployment method based on particle swarm optimization
CN103462605A (en) * 2013-09-06 2013-12-25 南京邮电大学 Biological electrical impedance tomography method
CN105469144A (en) * 2015-11-19 2016-04-06 东北大学 Mobile communication user loss prediction method based on particle classification and BP neural network
CN106920008A (en) * 2017-02-28 2017-07-04 山东大学 A kind of wind power forecasting method based on Modified particle swarm optimization BP neural network
CN110488842A (en) * 2019-09-04 2019-11-22 湖南大学 A kind of track of vehicle prediction technique based on two-way kernel ridge regression
CN111209591A (en) * 2019-12-31 2020-05-29 浙江工业大学 Storage structure sorted according to time and quick query method
CN111210082A (en) * 2020-01-13 2020-05-29 东南大学 Optimized BP neural network algorithm-based precipitation prediction method
CN111351801A (en) * 2020-03-11 2020-06-30 春光线缆有限公司 Wire and cable defect detection system
CN111586151A (en) * 2020-04-30 2020-08-25 武汉钟码科技有限公司 Intelligent city data sharing system and method based on block chain
CN111754267A (en) * 2020-06-29 2020-10-09 蚌埠科睿达机械设计有限公司 Data processing method and system based on block chain

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Financial time series prediction based on wavelet and PSO-BP neural network; Miao Xudong et al.; Information Technology; 2018-05-25; pp. 26-29 *

Also Published As

Publication number Publication date
CN111930844A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN111930844B (en) Financial prediction system based on block chain and artificial intelligence
WO2022121289A1 (en) Methods and systems for mining minority-class data samples for training neural network
CN110070141B (en) Network intrusion detection method
Dmochowski et al. Maximum Likelihood in Cost-Sensitive Learning: Model Specification, Approximations, and Upper Bounds.
CN113326731A (en) Cross-domain pedestrian re-identification algorithm based on momentum network guidance
CN111680107B (en) Financial prediction system based on artificial intelligence and block chain
CN110991321B (en) Video pedestrian re-identification method based on tag correction and weighting feature fusion
CN109886343B (en) Image classification method and device, equipment and storage medium
WO2022252455A1 (en) Methods and systems for training graph neural network using supervised contrastive learning
US20220027786A1 (en) Multimodal Self-Paced Learning with a Soft Weighting Scheme for Robust Classification of Multiomics Data
Li et al. Source-free active domain adaptation via energy-based locality preserving transfer
Chan et al. Controlled false negative reduction of minority classes in semantic segmentation
CN115905855A (en) Improved meta-learning algorithm MG-copy
CN116258978A (en) Target detection method for weak annotation of remote sensing image in natural protection area
CN112800232B (en) Case automatic classification method based on big data
CN109492816B (en) Coal and gas outburst dynamic prediction method based on hybrid intelligence
Aarthi et al. Detection and classification of MRI brain tumors using S3-DRLSTM based deep learning model
EP3975062A1 (en) Method and system for selecting data to train a model
CN115174263B (en) Attack path dynamic decision method and device
Narasimhan et al. Altered particle swarm optimization based attribute selection strategy with improved fuzzy Artificial Neural Network classifier for coronary artery heart disease risk prediction
CN115775231A (en) Cascade R-CNN-based hardware defect detection method and system
CN115174170A (en) VPN encrypted flow identification method based on ensemble learning
CN115019083A (en) Word embedding graph neural network fine-grained graph classification method based on few-sample learning
CN114463602B (en) Target identification data processing method based on big data
CN116894169B (en) Online flow characteristic selection method based on dynamic characteristic clustering and particle swarm optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Liu Xing
Inventor after: Luo Zhongming
Inventor after: Xiao Yan
Inventor before: Liu Xing
Inventor before: Luo Zhongming
TA01 Transfer of patent application right
Effective date of registration: 20210720
Address after: 161000 group 17, Huamin village, Huamin Township, Qiqihar City, Heilongjiang Province
Applicant after: Xiao Yan
Address before: 343400 No. 106, Beimen Road, prosperity Street, Hechuan Town, Yongxin County, Ji'an City, Jiangxi Province
Applicant before: Luo Zhongming
GR01 Patent grant