CN113283461A - Financial big data processing system and method based on block chain
- Publication number
- CN113283461A CN113283461A CN202110260389.3A CN202110260389A CN113283461A CN 113283461 A CN113283461 A CN 113283461A CN 202110260389 A CN202110260389 A CN 202110260389A CN 113283461 A CN113283461 A CN 113283461A
- Authority
- CN
- China
- Prior art keywords
- network model
- sentence
- text
- expression
- relation
- Legal status: Withdrawn
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/24—Classification techniques
      - G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
      - G06F18/2415—Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology
      - G06N3/045—Combinations of networks
      - G06N3/047—Probabilistic or stochastic networks
      - G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
      - G06N3/08—Learning methods
    - G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
      - G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Business, Economics & Management (AREA)
- Probability & Statistics with Applications (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Economics (AREA)
- Finance (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- Technology Law (AREA)
- General Business, Economics & Management (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The blockchain-based financial big data processing method comprises: collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts; performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified; and hash-chaining and storing the generated relation classification results based on blockchain technology. The invention improves both the accuracy of data relation classification and the accuracy of the classification task: the cross-entropy costs are computed separately for a first probability distribution and a second probability distribution, the two costs are added, and the cost sum is minimized, which yields higher accuracy on the classification task.
Description
Technical Field
The application relates to the technical field of computers, in particular to a financial big data processing system and method based on a block chain.
Background
At the technical level, the blockchain involves scientific and technical problems spanning mathematics, cryptography, the internet and computer programming. From the application perspective, a blockchain is simply a distributed shared ledger and database, characterized by decentralization, tamper resistance, full-process trace keeping, traceability, collective maintenance, and openness and transparency. These characteristics guarantee the honesty and transparency of the blockchain and lay the foundation for the trust it creates. The rich application scenarios of blockchains largely stem from their ability to resolve information asymmetry and to achieve cooperative trust and consistent action among multiple parties.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. It is an important concept of Bitcoin: essentially a decentralized database and, as the underlying technology of Bitcoin, a chain of data blocks linked by cryptographic methods.
At present, blockchains are gradually being applied in the field of financial big data. For example, the financial data traceability method based on blockchain and identification technology disclosed in the invention patent with application number CN202010370723.6 can store financial data in a blockchain for data tracing; because data in a blockchain cannot be tampered with, the security and reliability of the traced data and of the tracing process are effectively improved. Specifically, a second device may send first encrypted data to a first device, and the first device stores the first encrypted data in the blockchain. When doing so, the first device may obtain a first identifier and store the first identifier and the first encrypted data correspondingly in the blockchain. After storing them, the first device may send the second device a first message carrying the first identifier, indicating that the first encrypted data has been successfully stored in the blockchain. The first identifier can then be used as an index to the first encrypted data to facilitate tracing it.
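The cited patent publishes no code, but the exchange it describes is straightforward to picture. The sketch below is purely illustrative: the in-memory ledger, the uuid-based identifier scheme and all method names are assumptions, not the cited patent's API.

```python
import uuid

class LedgerStub:
    """In-memory stand-in for the blockchain ledger (an assumption;
    the cited patent does not specify a ledger API)."""
    def __init__(self):
        self._blocks = {}

    def store(self, identifier: str, encrypted_data: bytes) -> None:
        self._blocks[identifier] = encrypted_data

    def trace(self, identifier: str) -> bytes:
        return self._blocks[identifier]

def first_device_store(ledger: LedgerStub, first_encrypted_data: bytes) -> str:
    """First device: obtain a first identifier, store it together with
    the first encrypted data, and return it as the first message."""
    first_identifier = uuid.uuid4().hex  # assumed identifier scheme
    ledger.store(first_identifier, first_encrypted_data)
    return first_identifier

# The second device sends encrypted data; the first device stores it and
# confirms with a message carrying the identifier, which later serves as
# an index for tracing.
ledger = LedgerStub()
first_message = first_device_store(ledger, b"first encrypted data")
assert ledger.trace(first_message) == b"first encrypted data"
```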
It can be seen that applying the blockchain to data tracing for financial big data is already mature, but applying it to the processing of financial text data still faces substantial technical barriers and problems: when text processing and classification are performed on financial text big data on the basis of blockchain technology, data relation classification is inaccurate and the accuracy of the classification task is low.
Disclosure of Invention
Therefore, it is necessary to provide a blockchain-based financial big data processing system and method that can improve the accuracy of data relation classification and the accuracy of the classification task.
The technical scheme of the invention is as follows:
a financial big data processing method based on a blockchain, the method comprising:
step S100: collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts;
step S200: performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified;
step S300: hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology.
Specifically, step S200 (performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified) comprises the following steps:
step S210: collecting an original text set, and performing preliminary processing on the texts and relation marks in the original text set to obtain marked expressions, wherein the original text set comprises discourse texts and the relation marks corresponding to the discourse texts; the preliminary processing of the discourse texts containing implicit relations and their relation marks comprises extracting each pair of texts having an implicit relation in the texts, pairing each pair with its relation, and processing the pairs into a series of ordered, fixed-format, discourse-level inputs required by the network model; finally, the processed network model inputs are divided into a network model learning set and a network model checking set;
step S220: randomly selecting a pair of sample sentences from the network model learning set and the network model checking set as network model input according to a preset batch size; before the selected pair of sample sentences is fed into the pre-learned pre-training language model, segmenting the first sentence and the second sentence of the selected sample pair, adding a first dynamic vector bit before the first sentence, and adding second dynamic vector bits between the first sentence and the second sentence and after the second sentence respectively; the segmented sentences are then fed together into the pre-learned pre-training language model, which generates hidden-layer vectors for the whole sentence pair using its model parameters, wherein each token fed into the pre-training language model corresponds to a dynamic word vector expression and the output vector corresponding to the first dynamic vector bit contains the relation between the upper sentence and the lower sentence; finally, combining the vectors obtained for the upper and lower sentences into a sequence to obtain the initial matrix vector expression of the whole sentence-pair sequence, wherein the word vector corresponding to the first dynamic vector bit output by the pre-training language model contains both the relation between the two sentences and global information;
step S230: acquiring the specific sequence information of each sentence with a temporal recurrent neural network and establishing the context relation; specifically, the word vector matrices of the upper and lower sentences are fed into the temporal recurrent neural network separately, yielding a forward expression and a backward expression respectively, and the forward and backward expressions are then combined to obtain the expression of the sequence information features;
step S240: feeding the combined expression of the sequence information features into a graph convolutional neural model, and modeling the relations between the words in the sentence pair with the graph convolution method; the expression of each word output by the graph convolutional neural model fuses the word-pair information between the two sentences, and the expressions of all words of the two sentences are then fed into a pooling layer to obtain the feature expression of the inter-sentence relation modeled by the graph convolutional neural model;
step S250: classifying the output features of the first dynamic vector bit of the pre-training language model with a first classifier, converting the expression of the vector corresponding to the first dynamic vector bit into a first probability distribution over the relations through a feed-forward network and a classification layer; feeding the feature expression of the inter-sentence relation modeled by the graph convolutional neural model into a feed-forward network and a classification layer with a second classifier, converting it into a second probability distribution over the inter-sentence relations; computing the cross-entropy cost separately for the first probability distribution and the second probability distribution, adding the two costs to obtain a cost sum, and then minimizing the cost sum; finally, feeding the network model learning samples and the network model checking samples into the network model in random batches, continuously updating the optimal parameter values of the network model by incremental gradient descent while computing the evaluation indexes, namely accuracy, recall rate and macro average, on the network model checking set; learning stops when the indexes on the network model checking set no longer improve or the network model has iterated a certain number of times, and the network model that performs best on the checking set is archived;
step S260: loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, feeding the texts to be classified into the network model in batches, and outputting the relation classification results of the texts to be classified through the operation of the network model.
Specifically, step S260 (loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, feeding the texts to be classified into the network model in batches, and outputting the relation classification results through the operation of the network model) comprises the following steps:
step S261: loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, and acquiring the actual acquisition time node of each text to be classified, wherein one text to be classified corresponds to one actual acquisition time node;
step S262: sorting the actual acquisition time nodes chronologically, feeding the corresponding texts to be classified into the network model in batches according to the sorted actual acquisition time nodes, and outputting the relation classification results of the texts to be classified through the operation of the network model.
Specifically, step S300 (hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology) comprises the following steps:
step S310: based on blockchain technology, performing hash-value extraction on the generated relation classification result of the text to be classified, and acquiring the current actual hash value of the relation classification result;
step S320: obtaining the current actual hash value of the relation classification result of the text to be classified;
step S330: according to the obtained current actual hash value of the relation classification result of the text to be classified, hash-chaining the relation classification result of the text to be classified and storing it.
Specifically, step S100 (collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts) comprises the following steps:
step S110: acquiring the loaded financial big data initial texts in real time;
step S120: collecting the financial big data initial texts acquired in real time and generating the financial big data subsets respectively;
step S130: generating the original text set from the generated financial big data subsets.
A blockchain-based financial big data processing system, the system comprising:
the original text set acquisition module is used for collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts;
the relation classification result generation module is used for performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified;
and the hash chaining module is used for hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology.
Specifically, the relationship classification result generation module includes:
the original text collection module is used for collecting an original text set and performing preliminary processing on the texts and relation marks in the original text set to obtain marked expressions, wherein the original text set comprises discourse texts and the relation marks corresponding to the discourse texts; the preliminary processing of the discourse texts containing implicit relations and their relation marks comprises extracting each pair of texts having an implicit relation in the texts, pairing each pair with its relation, and processing the pairs into a series of ordered, fixed-format, discourse-level inputs required by the network model; finally, the processed network model inputs are divided into a network model learning set and a network model checking set;
the global information obtaining module is used for randomly selecting a pair of sample sentences from the network model learning set and the network model checking set as network model input according to a preset batch size; before the selected pair of sample sentences is fed into the pre-learned pre-training language model, the first sentence and the second sentence of the selected sample pair are segmented, a first dynamic vector bit is added before the first sentence, and second dynamic vector bits are added between the first sentence and the second sentence and after the second sentence respectively; the segmented sentences are then fed together into the pre-learned pre-training language model, which generates hidden-layer vectors for the whole sentence pair using its model parameters, wherein each token fed into the pre-training language model corresponds to a dynamic word vector expression and the output vector corresponding to the first dynamic vector bit contains the relation between the upper sentence and the lower sentence; finally, the vectors obtained for the upper and lower sentences are combined into a sequence to obtain the initial matrix vector expression of the whole sentence-pair sequence, wherein the word vector corresponding to the first dynamic vector bit output by the pre-training language model contains both the relation between the two sentences and global information;
the sequence information feature expression module is used for acquiring the specific sequence information of each sentence with a temporal recurrent neural network and establishing the context relation; the word vector matrices of the upper and lower sentences are fed into the temporal recurrent neural network separately, yielding a forward expression and a backward expression respectively, and the forward and backward expressions are then combined to obtain the expression of the sequence information features;
the sentence relation feature expression module is used for feeding the combined expression of the sequence information features into a graph convolutional neural model and modeling the relations between the words in the sentence pair with the graph convolution method; the expression of each word output by the graph convolutional neural model fuses the word-pair information between the two sentences, and the expressions of all words of the two sentences are then fed into a pooling layer to obtain the feature expression of the inter-sentence relation modeled by the graph convolutional neural model;
the network model learning sample module is used for classifying the output features of the first dynamic vector bit of the pre-training language model with a first classifier, converting the expression of the vector corresponding to the first dynamic vector bit into a first probability distribution over the relations through a feed-forward network and a classification layer; feeding the feature expression of the inter-sentence relation modeled by the graph convolutional neural model into a feed-forward network and a classification layer with a second classifier, converting it into a second probability distribution over the inter-sentence relations; computing the cross-entropy cost separately for the first probability distribution and the second probability distribution, adding the two costs to obtain a cost sum, and then minimizing the cost sum; and finally feeding the network model learning samples and the network model checking samples into the network model in random batches, continuously updating the optimal parameter values of the network model by incremental gradient descent while computing the evaluation indexes, namely accuracy, recall rate and macro average, on the network model checking set, stopping learning when the indexes on the network model checking set no longer improve or the network model has iterated a certain number of times, and archiving the network model that performs best on the checking set;
and the operation output module is used for loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, feeding the texts to be classified into the network model in batches, and outputting the relation classification results of the texts to be classified through the operation of the network model.
Specifically, the hash chaining module further includes:
the current actual hash value acquisition module is used for performing hash-value extraction on the generated relation classification result of the text to be classified based on blockchain technology and acquiring the current actual hash value of the relation classification result;
the relation classification result acquisition module is used for acquiring the current actual hash value of the relation classification result of the text to be classified;
and the hash chaining storage module is used for hash-chaining and storing the relation classification result of the text to be classified according to its obtained current actual hash value.
A computer device, comprising a memory and a processor, wherein the memory stores a computer program and the processor, when executing the computer program, implements the steps of the above blockchain-based financial big data processing method.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the above-mentioned blockchain-based financial big data processing method.
The invention has the following technical effects:
1. The method first collects an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts; it then performs text batch model classification processing on the original text set and generates relation classification results for the texts to be classified; finally, based on blockchain technology, it hash-chains and stores the generated relation classification results. This solves the problems of inaccurate data relation classification and low classification-task accuracy that arise when financial text big data are processed and classified on the basis of blockchain technology, improving both the data relation classification accuracy and the classification-task accuracy;
2. The pre-learned model is used to obtain better dynamic word vector expressions, improving the overall expression of sentences in the big-data text; the graph convolutional neural model strengthens the interaction between the upper and lower sentences, improving the accuracy of relation classification; and the cross-entropy costs are computed separately for the first and second probability distributions and then added, and the cost sum is minimized, which yields higher accuracy on the classification task.
Drawings
FIG. 1 is a flow diagram illustrating a method for processing financial big data based on a blockchain in one embodiment;
FIG. 2 is a block diagram of a blockchain-based financial big data processing system in one embodiment;
FIG. 3 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a method for processing financial big data based on a blockchain, the method including:
step S100: collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts;
step S200: performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified;
step S300: hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology.
The method first collects an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts; it then performs text batch model classification processing on the original text set and generates relation classification results for the texts to be classified; finally, based on blockchain technology, it hash-chains and stores the generated relation classification results. This solves the problems of inaccurate data relation classification and low classification-task accuracy that arise when financial text big data are processed and classified on the basis of blockchain technology, and improves both the data relation classification accuracy and the classification-task accuracy.
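As a concrete reading of steps S100 to S300, the sketch below wires the three stages together. The model.predict_batch call and the chain.store ledger call are assumed interfaces (the patent names no concrete classifier or blockchain API), and SHA-256 is likewise an assumption; see the hashing sketch at steps S310 to S330 below.

```python
import hashlib
from typing import Iterable, List

def process_financial_big_data(sources: Iterable[Iterable[str]], model, chain) -> List[str]:
    # Step S100: collect the original text set, a list of financial big
    # data subsets, each holding several financial data initial texts.
    original_text_set = [list(subset) for subset in sources]

    # Step S200: flatten the subsets and run batch model classification,
    # producing one relation classification result per text.
    texts = [text for subset in original_text_set for text in subset]
    relation_results = model.predict_batch(texts)  # assumed model API

    # Step S300: hash every result and chain it up for tamper-proof storage.
    for result in relation_results:
        digest = hashlib.sha256(result.encode("utf-8")).hexdigest()
        chain.store(digest, result)  # assumed ledger API
    return relation_results
```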
In one embodiment, step S200 (performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified) specifically comprises:
step S210: collecting an original text set, and performing preliminary processing on the texts and relation marks in the original text set to obtain marked expressions, wherein the original text set comprises discourse texts and the relation marks corresponding to the discourse texts; the preliminary processing of the discourse texts containing implicit relations and their relation marks comprises extracting each pair of texts having an implicit relation in the texts, pairing each pair with its relation, and processing the pairs into a series of ordered, fixed-format, discourse-level inputs required by the network model; finally, the processed network model inputs are divided into a network model learning set and a network model checking set;
Specifically, in this step the preliminary processing converts each implicitly related text pair and its relation into a series of ordered, fixed-format, discourse-level network model inputs, which realizes the ordered entry of financial big data and improves data processing efficiency.
step S220: randomly selecting a pair of sample sentences from the network model learning set and the network model checking set as network model input according to a preset batch size; before the selected pair of sample sentences is fed into the pre-learned pre-training language model, segmenting the first sentence and the second sentence of the selected sample pair, adding a first dynamic vector bit before the first sentence, and adding second dynamic vector bits between the first sentence and the second sentence and after the second sentence respectively; the segmented sentences are then fed together into the pre-learned pre-training language model, which generates hidden-layer vectors for the whole sentence pair using its model parameters, wherein each token fed into the pre-training language model corresponds to a dynamic word vector expression and the output vector corresponding to the first dynamic vector bit contains the relation between the upper sentence and the lower sentence; finally, combining the vectors obtained for the upper and lower sentences into a sequence to obtain the initial matrix vector expression of the whole sentence-pair sequence, wherein the word vector corresponding to the first dynamic vector bit output by the pre-training language model contains both the relation between the two sentences and global information;
Specifically, in this step the vectors obtained for the upper and lower sentences are combined into a sequence to obtain the initial matrix vector expression of the whole sentence pair, and the word vector corresponding to the first dynamic vector bit contains both the relation between the two sentences and global information; the pre-learned model thus yields better dynamic word vector expressions and improves the overall expression of sentences in the big-data text.
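The patent does not name its pre-trained language model, but the layout it describes (a first dynamic vector bit before the first sentence, second dynamic vector bits between and after the two sentences) matches the [CLS]/[SEP] convention of BERT-style encoders. A minimal sketch under that assumption, using the Hugging Face transformers library with an assumed checkpoint name:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
encoder = BertModel.from_pretrained("bert-base-chinese")

def encode_sentence_pair(first_sentence: str, second_sentence: str):
    # The tokenizer segments both sentences, prepends [CLS] (the "first
    # dynamic vector bit") and inserts [SEP] between and after the
    # sentences (the "second dynamic vector bits").
    inputs = tokenizer(first_sentence, second_sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, dim)
    cls_vector = hidden[:, 0, :]     # relation between the sentences + global information
    token_matrix = hidden[:, 1:, :]  # one dynamic word vector per token
    return cls_vector, token_matrix
```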
step S230: acquiring the specific sequence information of each sentence with a temporal recurrent neural network and establishing the context relation; specifically, the word vector matrices of the upper and lower sentences are fed into the temporal recurrent neural network separately, yielding a forward expression and a backward expression respectively, and the forward and backward expressions are then combined to obtain the expression of the sequence information features;
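Step S230 does not fix the cell type of the temporal recurrent neural network; the sketch below assumes a bidirectional LSTM, whose forward and backward outputs are concatenated on the feature axis exactly as the step describes. The dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SequenceInfoEncoder(nn.Module):
    """Step S230 sketch: a BiLSTM yields forward and backward
    expressions and concatenates them on the feature axis."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.rnn = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, word_vectors: torch.Tensor) -> torch.Tensor:
        output, _ = self.rnn(word_vectors)  # (batch, seq_len, 2 * hidden)
        return output

# The upper and lower sentences are encoded separately, as in the step.
encoder = SequenceInfoEncoder(dim=768, hidden=256)  # assumed dimensions
upper_expr = encoder(torch.randn(1, 20, 768))
lower_expr = encoder(torch.randn(1, 18, 768))
```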
step S240: inputting the expression of the sequence information features obtained by combination into a graph convolution neural model, and modeling the relation between words in sentence pairs by using a graph convolution method; the expression of each word output by the graph convolution neural model is fused with word pair information between sentence pairs, and then the expressions of all words of the two sentences are input into the pooling layer to obtain the characteristic expression of the inter-sentence relation modeled by the graph convolution neural model;
specifically, in this step, the interaction between the upper sentence and the lower sentence is enhanced by using the graph convolution neural model, so that the accuracy of the relationship classification is improved.
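The patent leaves the word graph itself unspecified. Assuming an adjacency matrix over the concatenated sentence pair is given, one graph-convolution layer followed by mean pooling can be sketched as:

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One graph-convolution layer: aggregate each word's neighbours
    (word-pair information across the sentence pair), then transform."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, words: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        degree = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear(adj @ words / degree))

def inter_sentence_feature(words: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    gcn = GraphConvLayer(words.size(-1))
    fused = gcn(words, adj)   # each word now fuses word-pair information
    return fused.mean(dim=1)  # pooling layer -> inter-sentence relation feature

# words: (batch, n_words, dim); adj: (batch, n_words, n_words) is an
# assumed adjacency over the concatenated sentence pair.
feature = inter_sentence_feature(torch.randn(1, 38, 512), torch.ones(1, 38, 38))
```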
step S250: classifying the output features of the first dynamic vector bit of the pre-training language model with a first classifier, converting the expression of the vector corresponding to the first dynamic vector bit into a first probability distribution over the relations through a feed-forward network and a classification layer; feeding the feature expression of the inter-sentence relation modeled by the graph convolutional neural model into a feed-forward network and a classification layer with a second classifier, converting it into a second probability distribution over the inter-sentence relations; computing the cross-entropy cost separately for the first probability distribution and the second probability distribution, adding the two costs to obtain a cost sum, and then minimizing the cost sum; finally, feeding the network model learning samples and the network model checking samples into the network model in random batches, continuously updating the optimal parameter values of the network model by incremental gradient descent while computing the evaluation indexes, namely accuracy, recall rate and macro average, on the network model checking set; learning stops when the indexes on the network model checking set no longer improve or the network model has iterated a certain number of times, and the network model that performs best on the checking set is archived;
In this step, the cross-entropy costs are computed separately for the first and second probability distributions and then added, and the cost sum is minimized, which yields higher accuracy on the classification task.
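A minimal sketch of the summed cross-entropy objective of step S250. The hidden sizes and the number of relation classes are assumptions; softmax is applied inside nn.CrossEntropyLoss, so the two classifiers emit the logits of the first and second probability distributions, and plain SGD stands in for the patent's incremental gradient descent.

```python
import torch
import torch.nn as nn

num_relations = 4  # assumption: number of inter-sentence relation classes
first_classifier = nn.Sequential(   # feed-forward network + classification layer
    nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, num_relations))
second_classifier = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, num_relations))
cross_entropy = nn.CrossEntropyLoss()

def cost_sum(cls_vector, gcn_feature, labels):
    first = first_classifier(cls_vector)     # -> first probability distribution
    second = second_classifier(gcn_feature)  # -> second probability distribution
    return cross_entropy(first, labels) + cross_entropy(second, labels)

# One update step on a random batch of 8 sentence pairs.
params = list(first_classifier.parameters()) + list(second_classifier.parameters())
optimizer = torch.optim.SGD(params, lr=1e-3)
loss = cost_sum(torch.randn(8, 768), torch.randn(8, 512),
                torch.randint(0, num_relations, (8,)))
loss.backward()
optimizer.step()
```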
step S260: loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, feeding the texts to be classified into the network model in batches, and outputting the relation classification results of the texts to be classified through the operation of the network model.
In one embodiment, step S260 specifically comprises:
step S261: loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, and acquiring the actual acquisition time node of each text to be classified, wherein one text to be classified corresponds to one actual acquisition time node;
Specifically, in this step the actual acquisition time node of each text to be classified is obtained, which helps arrange the acquired texts to be classified in chronological order.
step S262: sorting the actual acquisition time nodes chronologically, feeding the corresponding texts to be classified into the network model in batches according to the sorted actual acquisition time nodes, and outputting the relation classification results of the texts to be classified through the operation of the network model.
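Steps S261 and S262 reduce to sorting by acquisition time and then batching; a sketch with an assumed predict_batch model interface and an assumed batch size:

```python
from typing import List, Tuple

def classify_in_time_order(model,
                           texts_with_times: List[Tuple[str, float]],
                           batch_size: int = 32) -> List[str]:
    # Each text to be classified carries one actual acquisition time
    # node; sort the pairs chronologically before batching.
    ordered = [text for text, _ in sorted(texts_with_times, key=lambda p: p[1])]
    results: List[str] = []
    for start in range(0, len(ordered), batch_size):
        results.extend(model.predict_batch(ordered[start:start + batch_size]))
    return results
```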
In one embodiment, step S300 (hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology) specifically comprises:
step S310: based on blockchain technology, performing hash-value extraction on the generated relation classification result of the text to be classified, and acquiring the current actual hash value of the relation classification result;
Specifically, in this step the current actual hash value of the relation classification result of the text to be classified is obtained and used for chaining, which guarantees that the data are stable and cannot be tampered with.
step S320: obtaining the current actual hash value of the relation classification result of the text to be classified;
step S330: according to the obtained current actual hash value of the relation classification result of the text to be classified, hash-chaining the relation classification result of the text to be classified and storing it.
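A sketch of steps S310 to S330, with SHA-256 as the assumed hash function and the same assumed ledger interface as in the earlier sketches; the tamper check at the end mirrors the stability property this step relies on but is not itself a recited step.

```python
import hashlib

def hash_chain_result(chain, relation_result: str) -> str:
    # S310: extract the current actual hash value of the result.
    current_hash = hashlib.sha256(relation_result.encode("utf-8")).hexdigest()
    # S330: hash-chain and store the result under that hash value.
    chain.store(current_hash, relation_result)
    return current_hash

def is_untampered(chain, current_hash: str) -> bool:
    # Recomputing the hash of the stored result detects any tampering.
    stored = chain.trace(current_hash)
    return hashlib.sha256(stored.encode("utf-8")).hexdigest() == current_hash
```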
In one embodiment, step S100 (collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts) specifically comprises:
step S110: acquiring the loaded financial big data initial texts in real time;
step S120: collecting the financial big data initial texts acquired in real time and generating the financial big data subsets respectively;
step S130: generating the original text set from the generated financial big data subsets.
In one embodiment, as shown in fig. 2, there is provided a blockchain-based financial big data processing system, the system comprising:
the original text set acquisition module is used for collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts;
the relation classification result generation module is used for performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified;
and the hash chaining module is used for hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology.
In one embodiment, the relational classification result generation module includes:
the original text collection module is used for collecting an original text set and performing preliminary processing on the texts and relation marks in the original text set to obtain marked expressions, wherein the original text set comprises discourse texts and the relation marks corresponding to the discourse texts; the preliminary processing of the discourse texts containing implicit relations and their relation marks comprises extracting each pair of texts having an implicit relation in the texts, pairing each pair with its relation, and processing the pairs into a series of ordered, fixed-format, discourse-level inputs required by the network model; finally, the processed network model inputs are divided into a network model learning set and a network model checking set;
the global information obtaining module is used for randomly selecting a pair of sample sentences from the network model learning set and the network model checking set as network model input according to a preset batch size; before the selected pair of sample sentences is fed into the pre-learned pre-training language model, the first sentence and the second sentence of the selected sample pair are segmented, a first dynamic vector bit is added before the first sentence, and second dynamic vector bits are added between the first sentence and the second sentence and after the second sentence respectively; the segmented sentences are then fed together into the pre-learned pre-training language model, which generates hidden-layer vectors for the whole sentence pair using its model parameters, wherein each token fed into the pre-training language model corresponds to a dynamic word vector expression and the output vector corresponding to the first dynamic vector bit contains the relation between the upper sentence and the lower sentence; finally, the vectors obtained for the upper and lower sentences are combined into a sequence to obtain the initial matrix vector expression of the whole sentence-pair sequence, wherein the word vector corresponding to the first dynamic vector bit output by the pre-training language model contains both the relation between the two sentences and global information;
the sequence information feature expression module is used for acquiring the specific sequence information of each sentence with a temporal recurrent neural network and establishing the context relation; the word vector matrices of the upper and lower sentences are fed into the temporal recurrent neural network separately, yielding a forward expression and a backward expression respectively, and the forward and backward expressions are then combined to obtain the expression of the sequence information features;
the sentence relation feature expression module is used for feeding the combined expression of the sequence information features into a graph convolutional neural model and modeling the relations between the words in the sentence pair with the graph convolution method; the expression of each word output by the graph convolutional neural model fuses the word-pair information between the two sentences, and the expressions of all words of the two sentences are then fed into a pooling layer to obtain the feature expression of the inter-sentence relation modeled by the graph convolutional neural model;
the network model learning sample module is used for classifying the output features of the first dynamic vector bit of the pre-training language model with a first classifier, converting the expression of the vector corresponding to the first dynamic vector bit into a first probability distribution over the relations through a feed-forward network and a classification layer; feeding the feature expression of the inter-sentence relation modeled by the graph convolutional neural model into a feed-forward network and a classification layer with a second classifier, converting it into a second probability distribution over the inter-sentence relations; computing the cross-entropy cost separately for the first probability distribution and the second probability distribution, adding the two costs to obtain a cost sum, and then minimizing the cost sum; and finally feeding the network model learning samples and the network model checking samples into the network model in random batches, continuously updating the optimal parameter values of the network model by incremental gradient descent while computing the evaluation indexes, namely accuracy, recall rate and macro average, on the network model checking set, stopping learning when the indexes on the network model checking set no longer improve or the network model has iterated a certain number of times, and archiving the network model that performs best on the checking set;
and the operation output module is used for loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, feeding the texts to be classified into the network model in batches, and outputting the relation classification results of the texts to be classified through the operation of the network model.
Specifically, the hash chaining module further includes:
the current actual hash value acquisition module is used for performing hash-value extraction on the generated relation classification result of the text to be classified based on blockchain technology and acquiring the current actual hash value of the relation classification result;
the relation classification result acquisition module is used for acquiring the current actual hash value of the relation classification result of the text to be classified;
and the hash chaining storage module is used for hash-chaining and storing the relation classification result of the text to be classified according to its obtained current actual hash value.
In one embodiment, the method further comprises: acquiring the loaded financial big data initial texts in real time; collecting the financial big data initial texts acquired in real time and generating the financial big data subsets respectively; and generating the original text set from the generated financial big data subsets.
In one embodiment, as shown in fig. 3, a computer device comprises a memory and a processor, the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the above blockchain-based financial big data processing method.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the above-mentioned blockchain-based financial big data processing method.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. Those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A financial big data processing method based on a block chain is characterized by comprising the following steps:
step S100: collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts;
step S200: performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified;
step S300: hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology.
2. The blockchain-based financial big data processing method according to claim 1, wherein step S200 (performing text batch model classification processing on the original text set and generating relation classification results for the texts to be classified) specifically comprises:
step S210: collecting an original text set, and performing preliminary processing on the texts and relation marks in the original text set to obtain marked expressions, wherein the original text set comprises discourse texts and the relation marks corresponding to the discourse texts; the preliminary processing of the discourse texts containing implicit relations and their relation marks comprises extracting each pair of texts having an implicit relation in the texts, pairing each pair with its relation, and processing the pairs into a series of ordered, fixed-format, discourse-level inputs required by the network model; finally, the processed network model inputs are divided into a network model learning set and a network model checking set;
step S220: randomly selecting a pair of sample sentences from the network model learning set and the network model checking set as network model input according to a preset batch size; before the selected pair of sample sentences is fed into the pre-learned pre-training language model, segmenting the first sentence and the second sentence of the selected sample pair, adding a first dynamic vector bit before the first sentence, and adding second dynamic vector bits between the first sentence and the second sentence and after the second sentence respectively; the segmented sentences are then fed together into the pre-learned pre-training language model, which generates hidden-layer vectors for the whole sentence pair using its model parameters, wherein each token fed into the pre-training language model corresponds to a dynamic word vector expression and the output vector corresponding to the first dynamic vector bit contains the relation between the upper sentence and the lower sentence; finally, combining the vectors obtained for the upper and lower sentences into a sequence to obtain the initial matrix vector expression of the whole sentence-pair sequence, wherein the word vector corresponding to the first dynamic vector bit output by the pre-training language model contains both the relation between the two sentences and global information;
step S230: acquiring the specific sequence information of each sentence with a temporal recurrent neural network and establishing the context relation; specifically, the word vector matrices of the upper and lower sentences are fed into the temporal recurrent neural network separately, yielding a forward expression and a backward expression respectively, and the forward and backward expressions are then combined to obtain the expression of the sequence information features;
step S240: feeding the combined expression of the sequence information features into a graph convolutional neural model, and modeling the relations between the words in the sentence pair with the graph convolution method; the expression of each word output by the graph convolutional neural model fuses the word-pair information between the two sentences, and the expressions of all words of the two sentences are then fed into a pooling layer to obtain the feature expression of the inter-sentence relation modeled by the graph convolutional neural model;
step S250: classifying the output features of the first dynamic vector bit of the pre-training language model with a first classifier, converting the expression of the vector corresponding to the first dynamic vector bit into a first probability distribution over the relations through a feed-forward network and a classification layer; feeding the feature expression of the inter-sentence relation modeled by the graph convolutional neural model into a feed-forward network and a classification layer with a second classifier, converting it into a second probability distribution over the inter-sentence relations; computing the cross-entropy cost separately for the first probability distribution and the second probability distribution, adding the two costs to obtain a cost sum, and then minimizing the cost sum; finally, feeding the network model learning samples and the network model checking samples into the network model in random batches, continuously updating the optimal parameter values of the network model by incremental gradient descent while computing the evaluation indexes, namely accuracy, recall rate and macro average, on the network model checking set; learning stops when the indexes on the network model checking set no longer improve or the network model has iterated a certain number of times, and the network model that performs best on the checking set is archived;
step S260: loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, feeding the texts to be classified into the network model in batches, and outputting the relation classification results of the texts to be classified through the operation of the network model.
3. The blockchain-based financial big data processing method according to claim 2, wherein step S260 (loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, feeding the texts to be classified into the network model in batches, and outputting the relation classification results of the texts to be classified through the operation of the network model) specifically comprises:
step S261: loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, and acquiring the actual acquisition time node of each text to be classified, wherein one text to be classified corresponds to one actual acquisition time node;
step S262: sorting the actual acquisition time nodes chronologically, feeding the corresponding texts to be classified into the network model in batches according to the sorted actual acquisition time nodes, and outputting the relation classification results of the texts to be classified through the operation of the network model.
4. The blockchain-based financial big data processing method according to any one of claims 1 to 3, wherein the step S300: based on a block chain technology, Hash chaining and storing the generated relation classification result of the text to be classified; the method specifically comprises the following steps:
step S310: based on blockchain technology, performing hash value extraction on the generated relation classification result of the text to be classified to obtain the current actual hash value of that relation classification result;
step S320: acquiring the relation classification result of the text to be classified together with its current actual hash value;
step S330: according to the obtained current actual hash value, hash-chaining the relation classification result of the text to be classified onto the chain and storing it (see the sketch following this claim).
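Steps S310 to S330 amount to hashing each relation classification result and appending a hash-linked record. A minimal sketch, assuming SHA-256 and an in-memory list standing in for the chain (a deployed system would submit the block to a blockchain network instead):

```python
import hashlib
import json
import time

def hash_chain_result(result: dict, chain: list) -> dict:
    """Extract the current actual hash value of a relation classification
    result (step S310) and chain-store it against the previous block (S330)."""
    payload = json.dumps(result, sort_keys=True, ensure_ascii=False).encode("utf-8")
    current_hash = hashlib.sha256(payload).hexdigest()          # current actual hash value
    prev_hash = chain[-1]["block_hash"] if chain else "0" * 64  # genesis placeholder
    block = {
        "timestamp": time.time(),
        "result": result,
        "result_hash": current_hash,
        "prev_hash": prev_hash,
        "block_hash": hashlib.sha256((prev_hash + current_hash).encode()).hexdigest(),
    }
    chain.append(block)
    return block
```

Because each block's hash covers the previous block's hash, tampering with any stored classification result breaks every later link, which is what makes the stored results auditable.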
5. The blockchain-based financial big data processing method according to any one of claims 1 to 3, wherein
the step S100 of collecting an original text set containing financial big data, the original text set comprising a plurality of financial big data subsets and each financial big data subset comprising a plurality of financial data initial texts, specifically comprises the following steps:
step S110: acquiring the uploaded financial big data initial texts in real time;
step S120: collecting the financial big data initial texts acquired in real time and generating the respective financial big data subsets;
step S130: generating the original text set from the generated financial big data subsets (a sketch follows this claim).
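Steps S110 to S130 can be outlined in a few lines. The grouping key (here, the data source) is an assumption, since the claim does not state how the financial big data subsets are partitioned:

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def build_original_text_set(stream: Iterable[Tuple[str, str]]) -> Dict[str, List[str]]:
    """Collect initial texts arriving in real time (S110), group them into
    financial big data subsets (S120), and return the original text set (S130)."""
    subsets: Dict[str, List[str]] = defaultdict(list)
    for source, initial_text in stream:   # stream yields (source, initial_text) pairs
        subsets[source].append(initial_text)
    return dict(subsets)                  # original text set: subset name -> texts
```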
6. A blockchain-based financial big data processing system, the system comprising:
the original text set acquisition module is used for collecting an original text set containing financial big data, wherein the original text set comprises a plurality of financial big data subsets and each financial big data subset comprises a plurality of financial data initial texts;
the relation classification result generation module is used for performing batched model classification processing on the texts of the original text set and generating the relation classification results of the texts to be classified;
and the hash chaining module is used for hash-chaining and storing the generated relation classification results of the texts to be classified based on blockchain technology.
7. The blockchain-based financial big data processing system according to claim 6, wherein the relational classification result generation module includes:
the original text collection module is used for collecting the original text set and preliminarily processing the texts and relation marks in it to obtain marked expressions, wherein the original text set comprises discourse texts and the relation marks corresponding to those discourse texts; the preliminary processing of the discourse texts containing implicit relations and of the relation marks comprises extracting each pair of texts with an implicit relation in the texts, associating the pair with its relation, and converting it into the ordered, fixed-format, discourse-level inputs required by the network model; finally, the processed network model inputs are divided into a network model learning set and a network model checking set;
the global information obtaining module is used for randomly selecting pairs of sample sentences from the network model learning set and the network model checking set as network model inputs according to a preset batch size; before a selected sample sentence pair is input into the pre-learned pre-training language model, the first sentence and the second sentence are tokenized, a first dynamic vector bit is added before the first sentence, and second dynamic vector bits are added between the two sentences and after the second sentence; the pair is then input into the pre-learned pre-training language model, which uses its model parameters to generate hidden-layer vectors for the whole sentence pair, each token of the input corresponding to one dynamic word vector expression; the output vector corresponding to the first dynamic vector bit captures the relationship between the upper and lower sentences; finally, the vectors obtained for the upper and lower sentences are combined into a sequence to obtain the initial matrix vector expression of the whole sentence-pair sequence, so that the word vector corresponding to the first dynamic vector bit carries the relationship between the two sentences and the global information (see the encoder sketch following this claim);
the sequence information feature expression module is used for acquiring the specific sequence information of each sentence with a time-recurrent neural network and establishing the context relationship: the word vector matrices of the upper and lower sentences are input into the time-recurrent neural network respectively, yielding a forward expression and a backward expression, which are then combined to obtain the expression of the sequence information features;
the inter-sentence relation feature expression module is used for inputting the combined expression of the sequence information features into the graph convolutional neural model and modeling the relations between the words in the sentence pair by graph convolution; the expression of each word output by the graph convolutional neural model is fused with word-pair information across the sentence pair, and the expressions of all words of the two sentences are then input into a pooling layer to obtain the feature expression of the inter-sentence relation modeled by the graph convolutional neural model;
the network model learning sample module is used for classifying the output features at the first dynamic vector bit of the pre-training language model with a first classifier, converting the expression of the vector corresponding to the first dynamic vector bit into a first probability distribution over the relations through a feed-forward network and a classification layer; inputting the feature expression of the inter-sentence relation modeled by the graph convolutional neural model into a feed-forward network and a classification layer with a second classifier, converting it into a second probability distribution over the inter-sentence relations; computing a cross-entropy cost for each of the two probability distributions, adding them to obtain a cost sum, and minimizing that sum; finally, inputting the network model learning samples and network model checking samples into the network model in random batches, continuously updating the network model parameters by an incremental gradient descent method, and computing the evaluation metrics, namely accuracy, recall, and macro average, on the network model checking set; learning stops when the metrics on the checking set no longer improve or the network model has iterated a fixed number of times, and the network model that performs best on the checking set is archived;
and the operation output module is used for loading the archived network model that performs best on the network model checking set, fixing the parameters of the network model, inputting the texts to be classified into the network model in batches, and outputting the relation classification results of the texts to be classified through the operation of the network model.
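The global information obtaining module and the sequence information feature expression module map naturally onto a BERT-style encoder followed by a bidirectional LSTM. The sketch below is an assumed reading: the dynamic vector bits are interpreted as [CLS]/[SEP] tokens, the time-recurrent network as a BiLSTM run (for brevity) over the whole pair rather than per sentence, and `bert-base-chinese` is only a placeholder checkpoint.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")   # placeholder checkpoint
encoder = AutoModel.from_pretrained("bert-base-chinese")
# BiLSTM reading of the claim's time-recurrent network: the forward and
# backward expressions come out concatenated on the feature axis (384 + 384).
bilstm = torch.nn.LSTM(input_size=768, hidden_size=384,
                       bidirectional=True, batch_first=True)

def encode_sentence_pair(upper: str, lower: str):
    # The tokenizer inserts [CLS] before the first sentence and [SEP] between
    # and after the sentences, playing the roles of the first and second
    # dynamic vector bits of the claim.
    inputs = tokenizer(upper, lower, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state    # (1, seq_len, 768)
    first_bit_vec = hidden[:, 0, :]   # global, relation-aware vector at [CLS]
    seq_feats, _ = bilstm(hidden)     # sequence information features of all words
    return first_bit_vec, seq_feats
```

The two returned tensors correspond to the `first_bit_vec` and `seq_feats` inputs of the dual-branch classifier sketched after claim 2.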
8. The blockchain-based financial big data processing system according to claim 6, wherein the hash chaining module further comprises:
the current actual hash value acquisition module is used for performing hash value extraction on the generated relation classification result of the text to be classified based on blockchain technology and obtaining the current actual hash value of that relation classification result;
the relation classification result acquisition module is used for acquiring the relation classification result of the text to be classified together with its current actual hash value;
and the hash chaining storage module is used for hash-chaining and storing the relation classification result of the text to be classified according to the obtained current actual hash value of that relation classification result.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110260389.3A (CN113283461A) | 2021-03-10 | 2021-03-10 | Financial big data processing system and method based on block chain |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113283461A (en) | 2021-08-20 |
Family
ID=77275925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110260389.3A (CN113283461A, withdrawn) | Financial big data processing system and method based on block chain | 2021-03-10 | 2021-03-10 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113283461A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116303786A (en) * | 2023-03-18 | 2023-06-23 | 上海圈讯科技股份有限公司 | Block chain financial big data management system based on multidimensional data fusion algorithm |
CN116303786B (en) * | 2023-03-18 | 2023-10-27 | 上海圈讯科技股份有限公司 | Block chain financial big data management system based on multidimensional data fusion algorithm |
CN118523965A (en) * | 2024-07-23 | 2024-08-20 | 国网浙江省电力有限公司青田县供电公司 | Message acquisition and analysis method and system for electric power system |
Similar Documents
Publication | Title
---|---
CN110765265B (en) | Information classification extraction method and device, computer equipment and storage medium
CN109543032B (en) | Text classification method, apparatus, computer device and storage medium
CN108509596B (en) | Text classification method and device, computer equipment and storage medium
CN109829155B (en) | Keyword determination method, automatic scoring method, device, equipment and medium
CN111859986B (en) | Semantic matching method, device, equipment and medium based on multi-task twin network
CN110021439A (en) | Medical data classification method, device and computer equipment based on machine learning
CN111814466A (en) | Information extraction method based on machine reading understanding and related equipment thereof
CN112016318B (en) | Triage information recommendation method, device, equipment and medium based on interpretation model
CN109522406A (en) | Text semantic matching process, device, computer equipment and storage medium
CN110532353A (en) | Text entities matching process, system, device based on deep learning
CN113326379B (en) | Text classification prediction method, device, equipment and storage medium
CN113283461A (en) | Financial big data processing system and method based on block chain
CN112052684A (en) | Named entity identification method, device, equipment and storage medium for power metering
WO2022134586A1 (en) | Meta-learning-based target classification method and apparatus, device and storage medium
WO2021169364A1 (en) | Semantic emotion analysis method and apparatus, device, and storage medium
CN111984792A (en) | Website classification method and device, computer equipment and storage medium
CN113536795B (en) | Method, system, electronic device and storage medium for entity relation extraction
CN113221960B (en) | Construction method and collection method of high-quality vulnerability data collection model
CN112036151A (en) | Method and device for constructing gene disease relation knowledge base and computer equipment
Gong et al. | Continual pre-training of language models for math problem understanding with syntax-aware memory network
Shen et al. | A joint model for multimodal document quality assessment
CN114064852A (en) | Method and device for extracting relation of natural language, electronic equipment and storage medium
CN113886577A (en) | Text classification method, device, equipment and storage medium
CN113988048A (en) | Emotional cause pair extraction method based on multi-wheel machine reading understanding
CN114357174B (en) | Code classification system and method based on OCR and machine learning
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20210820