CN114357495A - Blockchain-based oracle off-chain aggregation method, apparatus, device and medium - Google Patents

Info

Publication number
CN114357495A
CN114357495A (application CN202210250975.4A)
Authority
CN
China
Prior art keywords
node
data access
request message
oracle
access result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210250975.4A
Other languages
Chinese (zh)
Other versions
CN114357495B (en)
Inventor
刘晓赫
郑旗
郑斌
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210250975.4A
Publication of CN114357495A
Application granted
Publication of CN114357495B
Active legal status
Anticipated expiration

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The present disclosure provides a blockchain-based oracle off-chain aggregation method, apparatus, device, and medium, relating to the field of computer technology and in particular to blockchain technology. The method is applied to an oracle network and comprises the following steps: acquiring an off-chain data access request generated by a blockchain node; sending, according to the off-chain data access request, a data request message to each oracle node in the oracle network to request the oracle nodes to perform off-chain data access; and executing a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result, wherein the aggregated data access result comprises the off-chain data access results respectively fed back by oracle nodes satisfying the Byzantine number requirement; and feeding the aggregated data access result back to the blockchain node. This technical scheme counters malicious behavior of oracle nodes and improves the security of the blockchain system.

Description

Blockchain-based oracle off-chain aggregation method, apparatus, device and medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to blockchain technology.
Background
Since a blockchain system is a deterministic environment, access to off-chain data sources can be implemented through an oracle mechanism. In the oracle mechanism, oracle nodes are arranged outside the blockchain; these nodes can access off-chain data sources and feed the access results back to the blockchain.
However, in existing oracle scheduling mechanisms, the service of an oracle node is prone to single points of failure, such as downtime or network outages, which affects the availability of blockchain applications; oracle nodes may also behave maliciously, compromising the security of the whole system.
Disclosure of Invention
The present disclosure provides a blockchain-based oracle off-chain aggregation method, apparatus, device, and medium, which counter malicious behavior of oracle nodes and improve the security of the blockchain system.
According to an aspect of the present disclosure, there is provided a blockchain-based oracle off-chain aggregation method, applied to an oracle network and performed by a leader node in the oracle network, the method including:
acquiring an off-chain data access request generated by a blockchain node;
sending, according to the off-chain data access request, a data request message to each oracle node in the oracle network to request the oracle nodes to perform off-chain data access;
executing a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result, wherein the aggregated data access result comprises the off-chain data access results respectively fed back by oracle nodes satisfying the Byzantine number requirement;
and feeding the aggregated data access result back to the blockchain node.
According to another aspect of the present disclosure, there is provided a blockchain-based oracle off-chain aggregation method, applied to an oracle network and performed by an ordinary node in the oracle network, the method including:
receiving a data request message sent by a leader node in the oracle network, and performing off-chain data access according to the data request message, wherein the data request message is triggered by an off-chain data access request generated by a blockchain node;
executing a Byzantine consensus algorithm with the leader node to determine an aggregated data access result, and feeding the aggregated data access result back to the blockchain node, wherein the aggregated data access result comprises the off-chain data access results respectively fed back by ordinary nodes satisfying the Byzantine number requirement.
According to another aspect of the present disclosure, there is provided a blockchain-based oracle off-chain aggregation apparatus, applied to an oracle network and configured at a leader node in the oracle network, the apparatus including:
an access request acquisition module, configured to acquire an off-chain data access request generated by a blockchain node;
a request message sending module, configured to send, according to the off-chain data access request, a data request message to each oracle node in the oracle network to request the oracle nodes to perform off-chain data access;
an access result determination module, configured to execute a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result, wherein the aggregated data access result comprises the off-chain data access results respectively fed back by oracle nodes satisfying the Byzantine number requirement;
and an access result feedback module, configured to feed the aggregated data access result back to the blockchain node.
According to another aspect of the present disclosure, there is provided a blockchain-based oracle off-chain aggregation apparatus, applied to an oracle network and configured at an ordinary node in the oracle network, the apparatus including:
a request message receiving module, configured to receive a data request message sent by a leader node in the oracle network and perform off-chain data access according to the data request message, wherein the data request message is triggered by an off-chain data access request generated by a blockchain node;
and an access result determination module, configured to execute a Byzantine consensus algorithm with the leader node to determine an aggregated data access result and feed the aggregated data access result back to the blockchain node, wherein the aggregated data access result comprises the off-chain data access results respectively fed back by ordinary nodes satisfying the Byzantine number requirement.
According to another aspect of the present disclosure, there is also provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the blockchain-based oracle off-chain aggregation methods provided by the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform any one of the blockchain-based oracle off-chain aggregation methods provided by the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is also provided a computer program product, including a computer program which, when executed by a processor, implements any one of the blockchain-based oracle off-chain aggregation methods provided by the embodiments of the present disclosure.
According to the technology of the present disclosure, malicious behavior of oracle nodes is countered and the security of the blockchain system is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of a blockchain-based oracle off-chain aggregation method provided according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of another blockchain-based oracle off-chain aggregation method provided according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of another blockchain-based oracle off-chain aggregation method provided according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of another blockchain-based oracle off-chain aggregation method provided according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a blockchain-based oracle off-chain aggregation apparatus provided according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a blockchain-based oracle off-chain aggregation apparatus provided according to an embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device for implementing the blockchain-based oracle off-chain aggregation method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of a blockchain-based oracle off-chain aggregation method according to an embodiment of the present disclosure, applicable to situations where a blockchain system accesses an off-chain data source through an oracle mechanism. The technical scheme of this embodiment is applied to an oracle network comprising a plurality of oracle nodes. The oracle nodes are further distinguished by role: one is the leader node and the other oracle nodes are ordinary nodes; preferably, the oracle network also includes a transmitter node. Roles may be rotated among the oracle nodes, for example by polling or random selection. The same oracle node may simultaneously assume one, two, or all three of the roles of leader node, transmitter node, and ordinary node. The oracle network and the blockchain network operate independently of each other but support point-to-point communication between nodes. Blockchain clients may also be deployed on the oracle nodes, enabling them to initiate transaction requests to the blockchain network. A node executing transactions on the blockchain may be called an on-chain node, and the corresponding execution in the oracle network may be called an off-chain transaction or off-chain service.
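The random rotation of the leader role described above can be sketched as follows. This is an illustrative sketch, not part of the patent: the node identifiers and the use of a hash of the round number as a shared seed are assumptions, chosen so that every oracle node computes the same leader independently.

```python
import hashlib

def select_leader(node_ids, round_number):
    """Deterministically pick a leader for the given round so that every
    oracle node, computing on its own, arrives at the same choice.
    The SHA-256 hash of the round number serves as a shared pseudo-random
    seed; sorting the IDs makes the result independent of input order."""
    digest = hashlib.sha256(str(round_number).encode()).digest()
    index = int.from_bytes(digest, "big") % len(node_ids)
    return sorted(node_ids)[index]

# Hypothetical node IDs; the leader changes from round to round.
nodes = ["oracle-a", "oracle-b", "oracle-c", "oracle-d"]
leader = select_leader(nodes, round_number=7)
```

Because the selection depends only on shared state (the round number and the node list), no extra coordination messages are needed to agree on the leader.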
The method of this embodiment may be performed by a blockchain-based oracle off-chain aggregation apparatus, which may be implemented in hardware and/or software and may be configured in an electronic device; the electronic device may be a leader node in an oracle network. Referring to fig. 1, the method of this embodiment is applied to an oracle network and executed by the leader node in the oracle network, and specifically includes the following steps:
S110, acquiring an off-chain data access request generated by a blockchain node.
The off-chain data access request may be a request by an on-chain node to acquire data from an off-chain data source. The off-chain data access request can be generated at the request of an oracle contract deployed at a blockchain node; specifically, the blockchain node calls the oracle contract in the course of executing a business transaction request, so as to execute the off-chain data access request.
The oracle contract may be a smart contract pre-deployed at a blockchain node to provide off-chain data acquisition services for on-chain users. The business transaction request may be a transaction request initiated by a user, according to the user's needs, to a business smart contract through a blockchain client. Business smart contracts can be any smart contracts deployed in the blockchain system that realize corresponding business functions; for example, a business function may be one provided directly to the user, such as electronic goods purchase or electronic gaming.
For example, when a blockchain node executes a game lottery function initiated by a user, a random number needs to be acquired off-chain. The blockchain node may then call the oracle contract to execute a request for acquiring the random number off-chain, that is, an off-chain data access request.
It may be the leader node in the oracle network that acquires the off-chain data access request. The leader node may be any node in the oracle network, and may specifically be determined by random selection within the oracle network. By periodically and randomly selecting the leader node, every node in the oracle network can potentially serve as the leader node to perform functions such as acquiring requests and collecting data, which avoids the malicious behavior possible when the same oracle node acts as leader every time.
Illustratively, a user can initiate a business transaction request to the blockchain through a blockchain client according to actual needs. When off-chain data acquisition is required in the course of executing the business transaction request, the blockchain node calls the oracle contract to generate an off-chain data access request, and the pre-selected leader node in the oracle network acquires the off-chain data access request generated by the blockchain node.
It should be noted that the blockchain node may send the off-chain data access request to the oracle network via an interface call. Specifically, each blockchain node in the blockchain network communicates through an interface with an oracle node of the oracle network, so that the leader node of the oracle network acquires the off-chain data access request. However, the blockchain network includes many blockchain nodes, and a large number of access requests may be generated if each makes interface calls to the oracle nodes separately.
Optionally, the leader node of the oracle network may instead acquire the off-chain data access request by monitoring event logs.
In an optional embodiment, acquiring the off-chain data access request generated by the blockchain node includes: monitoring the event logs in a block; and if an event log indicating a need for off-chain access is detected, reading the off-chain data access request from the block.
The content to be recorded in the event log can be preset by the relevant technicians when the oracle contract is deployed at the blockchain node. For example, the event log may be event information generated while the oracle contract executes the off-chain data access request. The event log may be recorded in the block header or block body.
For example, the blockchain node may broadcast the generated off-chain data access request for the oracle network to monitor; when the leader node of the oracle network detects an event log indicating that the blockchain node has an off-chain access requirement, it reads the off-chain data access request from the blockchain node.
In this optional embodiment, the off-chain data access request is acquired from the blockchain node by monitoring the event logs in blocks, so the request is acquired in time and the oracle contract at the blockchain node need not make an interface call to a specific oracle node. Since the blockchain network includes many blockchain nodes, calling oracle nodes separately could generate a large number of call requests.
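The event-log monitoring in this optional embodiment can be sketched as a polling loop. This is an illustrative sketch, not part of the patent: the block structure, the `get_latest_block` callback, and the event type name `offchain_data_request` are all assumptions standing in for a real blockchain client API.

```python
def poll_for_offchain_requests(get_latest_block, last_height):
    """Scan a newly produced block for event logs that indicate an
    off-chain access requirement, and extract the off-chain data
    access requests from them. `get_latest_block` is a stand-in for
    the blockchain client API and is assumed to return a dict like
    {"height": int, "events": [{"type": str, "payload": ...}]}."""
    requests = []
    block = get_latest_block()
    if block["height"] > last_height:          # a new block arrived
        for event in block["events"]:
            if event["type"] == "offchain_data_request":
                requests.append(event["payload"])
        last_height = block["height"]
    return requests, last_height

# Hypothetical usage with a faked block, for illustration only.
def fake_latest_block():
    return {"height": 5, "events": [
        {"type": "offchain_data_request", "payload": {"want": "random_number"}},
        {"type": "transfer", "payload": {}},
    ]}

found, height = poll_for_offchain_requests(fake_latest_block, last_height=4)
```

A real leader node would run this loop continuously; the key point is that no per-request interface call from the oracle contract to a specific oracle node is needed.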
S120, sending, according to the off-chain data access request, a data request message to each oracle node in the oracle network to request the oracle nodes to perform off-chain data access.
The data request message may be a message sent by the leader node to the other oracle nodes requesting the off-chain data access result, where the other oracle nodes are the nodes in the oracle network other than the leader node, i.e., the ordinary nodes.
For example, after acquiring the off-chain data access request, the leader node may send a data request message to the other oracle nodes of the oracle network, so that upon receiving the data request message the other oracle nodes acquire the off-chain data from at least one off-chain data source.
S130, executing a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result, wherein the aggregated data access result comprises the off-chain data access results fed back by the oracle nodes satisfying the Byzantine number requirement.
The Byzantine consensus algorithm can be preset in the oracle network by the relevant technicians. The oracle nodes satisfying the Byzantine number requirement may be those that reach consensus while executing the Byzantine consensus algorithm. The off-chain data access result may be the data obtained by an oracle node accessing at least one off-chain data source. The number of Byzantine nodes may be preset by the technicians according to the number of oracle nodes.
It should be noted that if an oracle node accesses multiple off-chain data sources and obtains multiple pieces of off-chain data fed back by them, it may either use those multiple pieces of off-chain data directly as its off-chain data access result, or aggregate them according to a preset aggregation rule and use the aggregated data result as its off-chain data access result. The aggregation rule may be preset by the technicians; for example, it may be taking the median or the average.
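The local aggregation rule mentioned above (median or average over data from several off-chain sources) can be sketched minimally. This is an illustrative sketch under the assumption of numeric data points; the patent leaves the rule to the technicians.

```python
from statistics import median

def aggregate_local(values, rule="median"):
    """Aggregate several off-chain data points fetched by one oracle
    node from different data sources into a single off-chain data
    access result, according to a preset aggregation rule."""
    if rule == "median":
        return median(values)
    if rule == "average":
        return sum(values) / len(values)
    raise ValueError(f"unknown aggregation rule: {rule}")

# E.g. three price feeds for the same asset; the median is robust
# against one source reporting an outlier.
result = aggregate_local([101.0, 99.5, 100.2])
```

Taking the median is a common choice here because a single faulty or malicious data source cannot pull the aggregated value arbitrarily far, unlike the average.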
Illustratively, the leader node and the other oracle nodes in the network execute the Byzantine consensus algorithm to determine the off-chain data access results respectively fed back by the oracle nodes satisfying the Byzantine number requirement.
S140, feeding the aggregated data access result back to the blockchain node.
For example, what the oracle network feeds back to the blockchain node may be the off-chain data access results respectively fed back by the oracle nodes satisfying the Byzantine number requirement, i.e., the aggregated data access result; or it may be the result of aggregating those off-chain data access results. The aggregation performed by the leader node may be a secondary processing of the results fed back by each oracle node, for example taking the median or average, or it may be a result set formed by recording the results together.
If what is fed back to the blockchain node is the off-chain data access results respectively fed back by the oracle nodes satisfying the Byzantine number requirement, the blockchain node can, after obtaining the aggregated data access result, call the oracle contract to aggregate it according to a preset aggregation rule into a final aggregation result. The aggregation rule may be taking the median or average, etc.
Optionally, an oracle node may feed back the aggregated data access result by initiating a transaction request. For example, the oracle node may initiate a data feedback transaction request to a blockchain node through a blockchain client; after obtaining the data feedback transaction request, the blockchain node can call the oracle contract to process it and thereby obtain the aggregated data access result.
This embodiment acquires an off-chain data access request generated by a blockchain node; sends, according to the off-chain data access request, a data request message to each oracle node in the oracle network to request the oracle nodes to perform off-chain data access; executes a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result; and feeds the aggregated data access result back to the blockchain node. Because distributed oracle nodes are used to acquire the off-chain data, single points of failure caused by network outage or downtime are avoided, the blockchain node can obtain off-chain data in time, the efficiency of off-chain data acquisition is improved, and the availability problem of blockchain applications is addressed. By having the oracle nodes in the oracle network execute the Byzantine consensus algorithm and determine the aggregated data access result, malicious behavior of oracle nodes is countered, the reliability of the aggregated data access result is improved, and the security of the whole blockchain system is improved.
Fig. 2 is a schematic diagram of another blockchain-based oracle off-chain aggregation method provided according to an embodiment of the present disclosure; it is an optional scheme proposed on the basis of the foregoing embodiment.
Referring to fig. 2, the blockchain-based oracle off-chain aggregation method provided in this embodiment includes:
S210, acquiring an off-chain data access request generated by a blockchain node.
S220, sending, according to the off-chain data access request, a first request message to each oracle node in the oracle network to request the oracle nodes to perform off-chain data access.
S230, receiving first response messages responding to the first request message.
The first request message is the data request message sent by the leader node in the oracle network to the other oracle nodes. The first response message may be a response message generated by another oracle node from the received first request message, and may carry the off-chain data access result acquired by that oracle node and/or the oracle node's signature.
Illustratively, the other oracle nodes in the oracle network acquire the first request message sent by the leader node and perform off-chain data access to off-chain data sources according to it; each oracle node then sends its acquired off-chain data access result and its own signature as a first response message to the leader node, so that the leader node can verify the first response messages.
S240, if the number of first response messages satisfies a first number threshold, carrying the off-chain data access results in the first response messages in a second request message and sending the second request message to the first target oracle nodes to request them to verify the second request message, wherein the first target oracle nodes are the oracle nodes that fed back the first response messages.
The first number threshold may be preset by the relevant technicians, and specifically may be determined according to a preset number of Byzantine nodes. For example, the first number threshold may equal 2f + 1, where f is the preset number of Byzantine nodes. The number of Byzantine nodes may be preset by the technicians according to the number of oracle nodes in the oracle network; for example, f may satisfy the relation n ≥ 3f + 1, where n is the total number of oracle nodes in the oracle network.
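The two thresholds above follow directly from the classic Byzantine fault tolerance bound. A minimal sketch, assuming the standard n ≥ 3f + 1 relation:

```python
def byzantine_fault_tolerance(n):
    """Largest number of Byzantine (faulty or malicious) oracle nodes f
    that n oracle nodes can tolerate under the bound n >= 3f + 1."""
    return (n - 1) // 3

def first_number_threshold(n):
    """Quorum of first response messages the leader must collect
    before issuing the second request message: 2f + 1."""
    return 2 * byzantine_fault_tolerance(n) + 1
```

For instance, an oracle network of 4 nodes tolerates f = 1 Byzantine node and requires 2f + 1 = 3 first response messages; a network of 7 nodes tolerates f = 2 and requires 5.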
The second request message may be a data request message sent by the leader node in the oracle network to the first target oracle nodes based on the first response messages.
It should be noted that, in the oracle network, some oracle nodes receiving the first request message may have security problems and be unable to generate or send the first response message, so that only most oracle nodes may generate a first response message for the first request message and send it to the leader node. Correspondingly, the oracle nodes that successfully generate and feed back the first response message to the leader node are the first target oracle nodes.
For example, whether the number of first response messages satisfies the first number threshold may be determined as follows: if the number of first response messages is greater than or equal to the first number threshold, the number is considered to satisfy the threshold. Optionally, in order to ensure the timeliness of the acquired first response messages, and thereby the reliability of the subsequently obtained off-chain data access results, a grace period may also be set for acquiring the first response messages: acquisition and counting of first response messages stop when the grace period expires, and the grace period may start when the first request message is sent.
In an optional embodiment, determining that the number of first response messages satisfies the first number threshold includes: the leader node determines that the number of first response messages satisfies the first number threshold within the time range of a set grace period.
The time range of the grace period may be preset by the technicians; for example, the grace period may be 2 seconds from the sending of the first request message.
Illustratively, if the number of first response messages acquired within the set grace period is not less than the first number threshold, the number of first response messages may be considered to satisfy the threshold; if the number of first response messages acquired by the leader node within the set grace period is less than the first number threshold, the number may be considered not to satisfy it.
In this optional embodiment, setting a grace period and determining within it whether the number of first response messages acquired by the leader node satisfies the first number threshold ensures the timeliness of the first response messages, and hence the reliability of the subsequently obtained off-chain data access results.
Illustratively, the leader node may acquire the first response messages sent by the other oracle nodes within the preset grace period and determine whether their number satisfies the first number threshold. If so, it carries the off-chain data access results from the first response messages in a second request message and sends the second request message to the oracle nodes that fed back first response messages, requesting them to verify it; if not, the message processing is considered to have failed.
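The grace-period collection described above can be sketched as a loop with a deadline. This is an illustrative sketch, not part of the patent: `recv` is a stand-in for the leader's network receive call and is assumed to return one response message, or `None` if nothing has arrived yet.

```python
import time

def collect_responses(recv, quorum, grace_period_s=2.0):
    """Collect first response messages until either the quorum (2f + 1)
    is reached or the grace period expires, whichever comes first.
    Returns the collected responses and whether the quorum was met."""
    responses = []
    deadline = time.monotonic() + grace_period_s
    while time.monotonic() < deadline:
        msg = recv()
        if msg is not None:
            responses.append(msg)
            if len(responses) >= quorum:
                return responses, True   # quorum met within the grace period
    return responses, False              # grace period elapsed first

# Hypothetical usage: a fake receive call that yields queued messages.
def make_recv(messages):
    it = iter(messages)
    return lambda: next(it, None)

msgs, ok = collect_responses(make_recv([{"n": 1}, {"n": 2}, {"n": 3}]), quorum=3)
```

In the failure branch the leader treats the message processing as failed, matching the behavior described above.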
The off-chain data access result may be a result generated by local aggregation at an oracle node after it accesses one or more off-chain data sources, or it may be one or more off-chain data results obtained by the oracle node from one or more off-chain data sources without local aggregation.
It should be noted that, to ensure the comprehensiveness of the subsequent verification of the second request message by the first target oracle nodes, the first response message may also include the oracle node's signature.
In an optional embodiment, carrying the off-chain data access results in the first response messages in a second request message and sending it to the first target oracle nodes includes: carrying each first response message in the second request message and sending it to the first target oracle nodes, wherein each first response message includes the off-chain data access result and the oracle node's signature.
The oracle node's signature may be its signature over the off-chain data access result it acquired. The first target oracle nodes may be the oracle nodes that fed back an off-chain data access result and an oracle node signature.
Illustratively, the leader node carries the acquired off-chain data access result and signature of each oracle node in the second request message, and sends the second request message to the oracle nodes that fed back their off-chain data access results and signatures.
In this optional embodiment, carrying the off-chain data access result and oracle node signature from each first response message in the second request message and sending it to the first target oracle nodes improves the comprehensiveness of the subsequent verification of the second request message by the first target oracle nodes, and thereby the accuracy of the number of subsequently acquired second response messages.
S250: Receive second response messages indicating that the second request message passed verification.
A second response message is a response generated by another oracle node for the second request message it received. It may include the aggregated off-chain data access result and the signature of that oracle node, where the aggregated data access result comprises the off-chain data access results fed back by the oracle nodes that satisfy the Byzantine number requirement.
Optionally, the aggregated off-chain data access result may be the result obtained by aggregating the off-chain data access results fed back by the oracle nodes that satisfy the Byzantine number requirement.
For example, after receiving the second request message, each of the other oracle nodes verifies every off-chain data access result and accompanying signature it contains, and takes the verified off-chain data access results as the aggregated off-chain data access result. Each node whose verification succeeds then sends this aggregated result, together with its own signature, to the leader node as a second response message. In this way every oracle node that receives the second request message re-verifies all off-chain data access results, which markedly improves reliability.
To improve the processing efficiency of the oracle contract at the blockchain node, the aggregation of off-chain data access results can be completed off-chain by the oracle network. Meanwhile, to improve the accuracy of the second response messages obtained, the leader node may verify each second response message after receiving it.
In an optional embodiment, after receiving the second response messages indicating that the second request message passed verification, the method further comprises: the leader node obtains, from each second response message, the aggregated off-chain data access result and the signature of the oracle node that fed it back, and verifies both.
The aggregated off-chain data access result here is the result obtained by the other oracle nodes aggregating the off-chain data access results fed back by the oracle nodes that satisfy the Byzantine number requirement.
For example, after receiving the second response messages, the leader node extracts from each one the aggregated off-chain data access result and the oracle node signature, and verifies them: it checks whether the aggregated results are mutually consistent and whether each signature is correct. If the aggregated results are consistent and the signatures are correct, the second response messages are considered verified; a second response message whose aggregated result is inconsistent with the others, or whose oracle node signature is incorrect, is considered to have failed verification.
In this optional embodiment, having the leader node verify the aggregated off-chain data access result and the oracle node signature in each second response message improves the accuracy of the second response messages obtained, and hence the accuracy of the aggregated data access result and of the subsequent determination of whether it passes the Byzantine consensus algorithm.
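The leader's consistency-and-signature check over second response messages can be sketched as follows. HMAC over a canonical JSON encoding stands in for whatever signature scheme the oracle nodes actually use, and all names are illustrative assumptions:

```python
import hashlib
import hmac
import json
from collections import Counter

def sign(key: bytes, payload) -> str:
    """HMAC-SHA256 over canonical JSON, standing in for a node signature."""
    data = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_second_responses(responses, node_keys):
    """Keep second responses with a valid signature whose aggregated
    result agrees with the majority; others fail verification."""
    sig_ok = [r for r in responses
              if hmac.compare_digest(
                  r["sig"], sign(node_keys[r["node"]], r["aggregated"]))]
    if not sig_ok:
        return []
    canonical, _ = Counter(json.dumps(r["aggregated"], sort_keys=True)
                           for r in sig_ok).most_common(1)[0]
    return [r for r in sig_ok
            if json.dumps(r["aggregated"], sort_keys=True) == canonical]
```

The leader would then compare `len(verify_second_responses(...))` against the second number threshold.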
S260: If the number of second response messages satisfies the second number threshold, confirm that the aggregated data access result formed from the off-chain data access results has passed the Byzantine consensus algorithm.
The second number threshold may be preset by a skilled person, and may specifically be determined from a preset number of Byzantine nodes. For example, the second number threshold may be f + 1, where f is the preset number of Byzantine nodes.
Optionally, the number of second response messages is considered to satisfy the second number threshold when, after the leader node has verified them, the number of second response messages that passed verification is not less than the second number threshold.
For example, if after the leader node verifies each obtained second response message the number that pass verification is not less than the second number threshold, the aggregated data access result corresponding to the off-chain data access results may be considered to have passed the Byzantine consensus algorithm.
S270: Feed back the aggregated data access result to the blockchain node.
Illustratively, the aggregated data access result that has been confirmed, via the Byzantine consensus algorithm, to aggregate the off-chain data access results may be fed back to the blockchain node.
In an optional embodiment, feeding back the aggregated data access result to the blockchain node comprises: feeding back the aggregated data access result and the signatures of the second target oracle nodes to an oracle contract in the blockchain node, where a second target oracle node is an oracle node that fed back a second response message, and the signatures are used by the oracle contract for signature verification and signature-count verification.
For example, the leader node may feed the aggregated data access result and the signatures of the second target oracle nodes directly back to the oracle contract in the blockchain node. The oracle contract performs signature verification and signature-count verification on these signatures and, once they pass, aggregates the received aggregated data access result according to a preset aggregation rule, such as taking the median or the average.
In this optional embodiment, feeding the aggregated data access result and the second target oracle node signatures back to the oracle contract of the blockchain node, and having the contract verify both, confirms the accuracy of the fed-back aggregated data access result; the signature verification in turn ensures the reliability of the aggregated data access result obtained by the blockchain node.
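The contract-side aggregation rule named above (median or average) can be sketched as a small helper; the function name and the `rule` parameter are illustrative, not from the disclosure:

```python
def contract_aggregate(values, rule="median"):
    """Apply the preset aggregation rule (median or mean) to the
    signature-verified off-chain values, as the oracle contract would."""
    s = sorted(values)
    n = len(s)
    if rule == "median":
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2
    if rule == "mean":
        return sum(s) / n
    raise ValueError(f"unknown aggregation rule: {rule}")
```

The median variant is the more robust choice here, since a bounded minority of outlier values cannot move it arbitrarily.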
In an optional embodiment, feeding back the aggregated data access result to the blockchain node comprises: carrying the aggregated data access result and the signatures of the second target oracle nodes in a third request message and sending it to the second target oracle nodes, to request the second target oracle nodes to feed back the aggregated data access result and their signatures to the blockchain node through a transmitter node in the oracle network.
A second target oracle node is an oracle node that fed back a second response message. It should be noted that, among all the oracle nodes that received the second request message, some may have failed to verify it and thus could not feed a second response message back to the leader node; a second target oracle node is therefore one that verified the second request message and successfully fed a second response message back to the leader node.
The third request message may be a data request message that the leader node in the oracle network sends to the second target oracle nodes based on the second response messages.
The transmitter node may be an oracle node pre-selected from the oracle network for transmitting data to the blockchain node. There is exactly one transmitter node, and it may specifically be selected at random from the oracle network.
In an optional embodiment, the transmitter node is determined by selection within the oracle network after the third request message is generated; the selection mode includes polling (round-robin) selection or random selection.
Because the oracle network selects the transmitter node by polling or at random only after the leader node has generated the third request message, an attacker cannot identify and compromise the transmitter node in advance, which reduces the chance that feeding the aggregated data access result back to the blockchain node fails.
Illustratively, the leader node obtains the aggregated data access result and the signatures of the second target oracle nodes from the second response messages, carries them in the third request message, and sends it to the second target oracle nodes; meanwhile the oracle network randomly selects the transmitter node. After a second target oracle node receives the third request message, it extracts the aggregated data access result and the second target oracle node signatures from it and feeds them back to the blockchain node through the randomly selected transmitter node.
In this optional embodiment, the aggregated data access result and the second target oracle node signatures are carried in the third request message and sent to the second target oracle nodes, which feed the data back to the blockchain node through the transmitter node. Transmitting data to the blockchain node through a single transmitter node reduces network congestion on the blockchain, avoids the inefficiency of many oracle nodes sending data to the blockchain node simultaneously, and lets the blockchain node obtain the data result fed back by the oracle network in time.
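The two transmitter-selection modes can be sketched as follows; `round_index` is a hypothetical counter the oracle network would share for polling, and all names are illustrative:

```python
import random

def select_transmitter(nodes, mode="random", round_index=0):
    """Pick exactly one transmitter node, only after the third request
    message exists, so it cannot be targeted in advance (sketch)."""
    if mode == "random":
        return random.choice(nodes)
    if mode == "polling":  # round-robin over the oracle network
        return nodes[round_index % len(nodes)]
    raise ValueError(f"unknown selection mode: {mode}")
```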
In the scheme of this embodiment of the disclosure, the leader node receives first response messages responding to the first request message; if their number satisfies the first number threshold, it carries the off-chain data access result from each first response message in a second request message and sends it to the first target oracle nodes; it then receives second response messages indicating that the second request message passed verification; and if their number satisfies the second number threshold, it confirms that the aggregated data access result formed from the off-chain data access results has passed the Byzantine consensus algorithm. By performing two rounds of message request and response verification between the leader node and the other oracle nodes, the scheme accurately determines whether the aggregated data access result passes the Byzantine consensus algorithm, avoids inaccurate or unavailable data results caused by a small number of oracle nodes colluding in an attack, and, through the consensus mechanism among the oracle nodes, reduces the occurrence of Byzantine misbehavior.
Fig. 3 is a schematic diagram of a blockchain-based oracle off-chain aggregation method according to an embodiment of the present disclosure, applicable to the situation in which a blockchain system accesses an out-of-chain data source through an oracle mechanism. The method may be executed by an off-chain service apparatus of a blockchain, which can be implemented in hardware and/or software and configured in an electronic device; the electronic device may be an oracle node in an oracle network. Referring to Fig. 3, the method is applied to an oracle network, is executed by a common node in the oracle network, and specifically includes the following steps:
S310: Receive a data request message sent by a leader node in the oracle network, and perform off-chain data access according to the data request message; the data request message is triggered by an off-chain data access request generated by a blockchain node.
The leader node may be any oracle node in the oracle network, and may specifically be determined by random selection within the oracle network.
Illustratively, a common node in the oracle network receives the data request message sent by the leader node; after obtaining it, the common node accesses at least one off-chain data source, thereby obtaining an off-chain data access result. A common node is any oracle node in the oracle network other than the leader node.
S320: Execute a Byzantine consensus algorithm with the leader node to determine an aggregated data access result, and feed the aggregated data access result back to the blockchain node; the aggregated data access result comprises the off-chain data access results fed back by the common nodes that satisfy the Byzantine number requirement.
The Byzantine consensus algorithm may be preset in the oracle network by a skilled person. The oracle nodes satisfying the Byzantine number requirement are those that reach consensus while executing the algorithm. An off-chain data access result is the data an oracle node obtains by accessing at least one off-chain data source. The number of Byzantine nodes may be preset by a skilled person according to the number of oracle nodes.
Illustratively, the common nodes and the leader node in the oracle network execute the Byzantine consensus algorithm, determine the off-chain data access results fed back by the oracle nodes that satisfy the Byzantine number requirement, and feed the aggregated data access result back to the blockchain node.
It should be noted that what is fed back to the blockchain node may be the individual off-chain data access results fed back by the common nodes satisfying the Byzantine number requirement, i.e. the aggregated data access result itself, or the single result obtained by aggregating those off-chain data access results.
In this embodiment of the disclosure, a common node receives the data request message sent by the leader node in the oracle network and performs off-chain data access according to it; it then executes a Byzantine consensus algorithm with the leader node to determine the aggregated data access result, which is fed back to the blockchain node. By using distributed oracle nodes to obtain out-of-chain data, the scheme avoids single points of failure caused by network outage or downtime, lets the blockchain node obtain out-of-chain data in time, improves acquisition efficiency, and addresses the availability problem of blockchain applications. By having every oracle node in the oracle network execute the Byzantine consensus algorithm to determine the aggregated data access result, malicious behavior by oracle nodes is countered, the reliability of the aggregated data access result is improved, and the security of the whole blockchain system is enhanced.
Fig. 4 is a schematic diagram of another blockchain-based oracle off-chain aggregation method according to an embodiment of the present disclosure, which is an alternative scheme proposed on the basis of the foregoing embodiment.
Referring to Fig. 4, the blockchain-based oracle off-chain aggregation method provided in this embodiment includes:
S410: Receive a first request message sent by a leader node in the oracle network, and perform off-chain data access according to the first request message.
The first request message is triggered by an off-chain data access request generated by a blockchain node.
S420: Generate a first response message according to the off-chain data access result, and feed the first response message back to the leader node.
The first response message is a response generated by the common node, based on the off-chain data access result, for the first request message it received. It may carry the off-chain data access result obtained by the common node itself and/or the signature of the common node, where the signature of the common node is its signature over the off-chain data access result it obtained.
Illustratively, a common node in the oracle network obtains the first request message sent by the leader node and, according to it, accesses an out-of-chain data source to obtain an off-chain data access result; the common node then sends the obtained result and its own signature to the leader node as a first response message, so that the leader node can verify it.
S430: Receive a second request message sent by the leader node, where the second request message carries each first response message received by the leader node.
Illustratively, the leader node receives the first response messages responding to the first request message, verifies them, and sends the off-chain data access result of each verified first response message to the common nodes, carried in a second request message; the common node receives this second request message, which carries the first response messages received by the leader node.
S440: Verify the signature of each first response message in the second request message.
The common node checks whether the signature of each first response message is correct; if it is, that first response message passes signature verification, otherwise it fails.
S450: Aggregate the off-chain data access results of the verified first response messages into an aggregated data access result, carry the aggregated data access result in a second response message, and sign the second response message.
The aggregated data access result comprises the off-chain data access results fed back by the common nodes that satisfy the Byzantine number requirement.
Illustratively, the common node aggregates the off-chain data access results of the first response messages that passed verification to obtain the aggregated data access result, carries it in the second response message, and signs the second response message.
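Steps S440 and S450 can be sketched together: verify each first response's signature, aggregate the verified off-chain results, and sign the second response. HMAC over canonical JSON again stands in for the real signature scheme, and all names are illustrative assumptions:

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload) -> str:
    """HMAC-SHA256 over canonical JSON, standing in for a node signature."""
    data = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def build_second_response(second_request, node_keys, my_id, my_key):
    # S440: keep only first responses whose oracle-node signature checks out.
    verified = [m for m in second_request["first_responses"]
                if hmac.compare_digest(
                    m["sig"], sign(node_keys[m["node"]], m["result"]))]
    # S450: aggregate the verified off-chain results (sorted, so every
    # honest node produces the same list) and sign the second response.
    aggregated = sorted(m["result"] for m in verified)
    return {"node": my_id, "aggregated": aggregated,
            "sig": sign(my_key, aggregated)}
```

Sorting before signing makes the aggregated result deterministic across nodes, which is what lets the leader later check that the second responses are mutually consistent.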
S460: Feed the second response message back to the leader node.
Illustratively, the second response message is fed back to the leader node so that the leader node can verify it and then carry the aggregated data access result of the verified second response messages, together with the signatures of the second target oracle nodes, in a third request message sent to the second target oracle nodes.
S470: Receive a third request message sent by the leader node.
The third request message carries the aggregated data access result and the signature of each second target oracle node, where a second target oracle node is a common node that fed back a second response message.
S480: After the signatures in the third request message pass verification, feed the aggregated data access result and the signatures of the second target oracle nodes back to the blockchain node through the transmitter node in the oracle network.
The transmitter node is determined by selection within the oracle network after the third request message is generated, and the selection mode includes polling selection or random selection.
Because the oracle network selects the transmitter node by polling or at random only after the leader node has generated the third request message, an attacker cannot identify and compromise the transmitter node in advance, which reduces the chance that feeding the aggregated data access result back to the blockchain node fails.
In the scheme of this embodiment of the disclosure, the common node generates a first response message according to the off-chain data access result and feeds it back to the leader node; receives and verifies the second request message sent by the leader node; aggregates the verified off-chain data access results into an aggregated data access result, carries it in a second response message, signs it, and feeds it back to the leader node. By performing two rounds of message request and response verification between the common node and the leader node, the scheme accurately determines whether the aggregated data access result passes the Byzantine consensus algorithm, reduces Byzantine misbehavior, and avoids inaccurate or unavailable data results caused by a small number of oracle nodes colluding in an attack. The common node also receives and verifies the third request message sent by the leader node; after verification passes, the transmitter node feeds the aggregated data access result and the signatures of the second target oracle nodes back to the blockchain node, which reduces network congestion on the blockchain, avoids the inefficiency of many oracle nodes sending data to the blockchain node simultaneously, and lets the blockchain node obtain the data result fed back by the oracle network in time.
Fig. 5 is a schematic diagram of a blockchain-based oracle off-chain aggregation apparatus according to an embodiment of the present disclosure, applicable to the application scenario in which a blockchain system accesses an out-of-chain data source through an oracle mechanism. The apparatus is configured in an electronic device and can implement the blockchain-based oracle off-chain aggregation method of any embodiment of the present disclosure. The electronic device may be a leader node in an oracle network. Referring to Fig. 5, the blockchain-based oracle off-chain aggregation apparatus 500 specifically includes the following:
an access request obtaining module 501, configured to obtain an off-chain data access request generated by a blockchain node;
a request message sending module 502, configured to send a data request message to each oracle node in the oracle network according to the off-chain data access request, to request the oracle nodes to perform off-chain data access;
an access result determining module 503, configured to execute a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result, where the aggregated data access result comprises the off-chain data access results fed back by the oracle nodes that satisfy the Byzantine number requirement; and
an access result feedback module 504, configured to feed back the aggregated data access result to the blockchain node.
The apparatus obtains an off-chain data access request generated by a blockchain node; sends a data request message to each oracle node in the oracle network according to the off-chain data access request, to request the oracle nodes to perform off-chain data access; executes a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result; and feeds the aggregated data access result back to the blockchain node. By using distributed oracle nodes to obtain out-of-chain data, the scheme avoids single points of failure caused by network outage or downtime, lets the blockchain node obtain out-of-chain data in time, improves acquisition efficiency, and addresses the availability problem of blockchain applications. By having every oracle node in the oracle network execute the Byzantine consensus algorithm to determine the aggregated data access result, malicious behavior by oracle nodes is countered, the reliability of the aggregated data access result is improved, and the security of the whole blockchain system is enhanced.
In an optional implementation, if the data request message is denoted as a first request message, the access result determining module 503 includes:
a first response message receiving unit, configured to receive first response messages responding to the first request message;
a second request message sending unit, configured to, if the number of first response messages satisfies a first number threshold, carry the off-chain data access result of each first response message in a second request message and send it to the first target oracle nodes, to request the first target oracle nodes to verify the second request message, where a first target oracle node is an oracle node that fed back a first response message;
a second response message receiving unit, configured to receive second response messages indicating that the second request message passed verification; and
a data access result determining unit, configured to, if the number of second response messages satisfies a second number threshold, confirm that the aggregated data access result formed from the off-chain data access results has passed the Byzantine consensus algorithm.
In an optional implementation, the second request message sending unit includes:
a second request message sending subunit, configured to carry each first response message in the second request message and send it to the first target oracle nodes, where each first response message includes an off-chain data access result and the signature of the oracle node.
In an optional implementation, the access result determining module 503 further includes:
a result verification unit, configured to, after the second response messages indicating that the second request message passed verification are received, obtain from each second response message the aggregated off-chain data access result and the signature of the oracle node that fed back the second response message, and verify them.
In an optional implementation, the access result determining module 503 further includes:
a number satisfaction determining unit, configured to determine that the number of first response messages satisfies the first number threshold;
the number satisfaction determining unit includes:
a number satisfaction determining subunit, configured to determine, within the time range of a set grace period, that the number of first response messages satisfies the first number threshold.
In an optional implementation, the access result feedback module 504 includes:
a third request message sending unit, configured to carry the aggregated data access result and the signatures of the second target oracle nodes in a third request message and send it to the second target oracle nodes, to request the second target oracle nodes to feed back the aggregated data access result and their signatures to the blockchain node through a transmitter node in the oracle network,
where a second target oracle node is an oracle node that fed back a second response message.
In an optional embodiment, the transmitter node is determined by selection within the oracle network after the third request message is generated; the selection mode includes polling selection or random selection.
In an optional implementation, the access result feedback module 504 includes:
a result feedback unit, configured to feed back the aggregated data access result and the signatures of the second target oracle nodes to an oracle contract in the blockchain node, where a second target oracle node is an oracle node that fed back a second response message, and the signatures are used by the oracle contract for signature verification and signature-count verification.
In an optional implementation, the first number threshold is equal to 2f + 1 and the second number threshold is equal to f + 1, where f is the preset number of Byzantine nodes and satisfies n > 3f + 1, n being the total number of oracle nodes in the oracle network.
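The threshold relations above can be checked directly; this sketch keeps the constraint exactly as stated (n > 3f + 1), and the function name is illustrative:

```python
def thresholds(n, f):
    """Return (first threshold, second threshold) = (2f + 1, f + 1) for an
    oracle network of n nodes tolerating f Byzantine nodes, per n > 3f + 1."""
    if not n > 3 * f + 1:
        raise ValueError("too few oracle nodes for the given f")
    return 2 * f + 1, f + 1
```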
In an optional implementation, the access request obtaining module 501 includes:
an event log monitoring unit, configured to monitor event logs in blocks;
a data access request reading unit, configured to read the off-chain data access request from the block if an event log indicating an off-chain access requirement is monitored.
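A hedged sketch of this monitoring loop follows; the block and event-log structure shown (a dict with `height`, `event_logs`, and an `OFFCHAIN_ACCESS` type tag) is an illustrative assumption, not the disclosure's actual data model:

```python
def poll_offchain_requests(get_latest_block, last_height):
    """Scan a newly produced block and extract off-chain data access
    requests from event logs that flag an off-chain access requirement.

    `get_latest_block` is a hypothetical accessor returning a dict like
    {"height": int, "event_logs": [{"type": str, "request": ...}, ...]}.
    Returns the extracted requests and the updated scan height.
    """
    requests = []
    block = get_latest_block()
    if block["height"] > last_height:
        for log in block["event_logs"]:
            # Only logs marked with the off-chain access requirement matter.
            if log.get("type") == "OFFCHAIN_ACCESS":
                requests.append(log["request"])
        last_height = block["height"]
    return requests, last_height
```

Tracking `last_height` keeps the unit from re-reading requests from blocks it has already scanned.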
In an optional implementation, the leader node is determined by the oracle network through random selection.
The blockchain-based oracle off-chain aggregation apparatus provided by the technical solution of the embodiments of the present disclosure can execute the blockchain-based oracle off-chain aggregation method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing that method.
Fig. 6 is a schematic diagram of a blockchain-based oracle off-chain aggregation apparatus according to an embodiment of the present disclosure, which is applicable to an application scenario in which a blockchain system accesses an out-of-chain data source based on an oracle mechanism. The apparatus is configured in an electronic device and can implement the blockchain-based oracle off-chain aggregation method according to any embodiment of the present disclosure. The electronic device may be a common node in an oracle network. Referring to fig. 6, the blockchain-based oracle off-chain aggregation apparatus 600 specifically includes the following:
a request message receiving module 601, configured to receive a data request message sent by a leader node in the oracle network, and execute off-chain data access according to the data request message; wherein the data request message is triggered by an off-chain data access request generated by a blockchain node;
an access result determining module 602, configured to perform a Byzantine consensus algorithm with the leader node to determine an aggregated data access result, and feed back the aggregated data access result to the blockchain node; wherein the aggregated data access result comprises the off-chain data access results fed back by common nodes meeting the Byzantine quantity requirement.
In the embodiment of the present disclosure, a common node receives a data request message sent by a leader node in the oracle network and executes off-chain data access according to the data request message; it then performs a Byzantine consensus algorithm with the leader node to determine an aggregated data access result, and feeds back the aggregated data access result to the blockchain node. In this scheme, distributed oracle nodes acquire the off-chain data, which avoids single points of failure caused by network outages, downtime, and the like, allows blockchain nodes to obtain off-chain data in time, improves the acquisition efficiency of off-chain data, and addresses the availability problem of blockchain applications. By having each oracle node in the oracle network execute the Byzantine consensus algorithm to determine the aggregated data access result, malicious behavior of oracle nodes is countered, the reliability of the aggregated data access result is improved, and the security of the whole blockchain system is improved.
In an optional implementation, the data request message is denoted as a first request message, and the access result determining module 602 includes:
a first response message feedback unit, configured to generate a first response message according to the off-chain data access result, and feed the first response message back to the leader node;
a second request message receiving unit, configured to receive a second request message sent by the leader node, where the second request message carries each first response message received by the leader node;
a signature verification unit, configured to verify the signature of each first response message in the second request message;
a result aggregation unit, configured to aggregate the off-chain access results in the verified first response messages to form an aggregated data access result, carry the aggregated data access result in a second response message, and sign the second response message;
a second response message feedback unit, configured to feed back the second response message to the leader node.
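The verify-aggregate-sign sequence performed by these units can be sketched as follows. HMAC-SHA256 stands in for the disclosure's unspecified digital-signature scheme, the message layout is an assumption, and the median is one plausible aggregation rule for numeric feeds:

```python
import hashlib
import hmac
import json
import statistics

def handle_second_request(second_request, node_keys, my_key):
    """Common-node handling of the leader's second request: verify each
    carried first response's signature, aggregate the verified off-chain
    access results, and sign the resulting second response.
    """
    verified_results = []
    for resp in second_request["first_responses"]:
        payload = json.dumps(resp["result"], sort_keys=True).encode()
        expected = hmac.new(node_keys[resp["node_id"]], payload,
                            hashlib.sha256).hexdigest()
        # Drop responses whose signatures fail verification.
        if hmac.compare_digest(expected, resp["signature"]):
            verified_results.append(resp["result"])
    # Aggregate the verified off-chain results (median as an example rule).
    aggregated = statistics.median(verified_results)
    body = json.dumps(aggregated, sort_keys=True).encode()
    signature = hmac.new(my_key, body, hashlib.sha256).hexdigest()
    return {"aggregated_result": aggregated, "signature": signature}
```

A forged or tampered first response fails the `compare_digest` check and is simply excluded from the aggregate, which is what lets the honest majority's result survive.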
In an optional implementation, the apparatus 600 further comprises:
a third request message receiving module, configured to receive a third request message sent by the leader node; where the third request message carries the aggregated data access result and the signature of each second target oracle node, and a second target oracle node is a common node that fed back a second response message;
a signature verification module, configured to, after the signatures in the third request message pass verification, feed back the aggregated data access result and the signatures of the second target oracle nodes to the blockchain node through a transmitter node in the oracle network.
In an optional implementation, the transmitter node is selected by the oracle network after the third request message is generated;
the selection mode includes round-robin selection or random selection.
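Either selection mode is straightforward to sketch; the node-identifier list and the monotonically increasing request counter below are illustrative assumptions:

```python
import random

def pick_transmitter(node_ids, request_seq, mode="round_robin"):
    """Select the transmitter node that delivers the aggregated result
    on-chain, after the third request message has been generated.

    `request_seq` is a per-request counter used to rotate the round-robin
    choice so delivery duty is spread across the oracle network.
    """
    if mode == "round_robin":
        return node_ids[request_seq % len(node_ids)]
    if mode == "random":
        return random.choice(node_ids)
    raise ValueError(f"unknown selection mode: {mode}")
```

Round-robin gives a deterministic, evenly distributed rotation; random selection trades that determinism for unpredictability of which node submits next.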
The blockchain-based oracle off-chain aggregation apparatus provided by the technical solution of the embodiments of the present disclosure can execute the blockchain-based oracle off-chain aggregation method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing that method.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the off-chain data access requests, data request messages, data access results, and the like all comply with the provisions of relevant laws and regulations and do not violate public order and good custom.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be any of a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the various methods and processes described above, such as the blockchain-based oracle off-chain aggregation method. For example, in some embodiments, the blockchain-based oracle off-chain aggregation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the blockchain-based oracle off-chain aggregation method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the blockchain-based oracle off-chain aggregation method.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility in traditional physical host and VPS services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
Artificial intelligence is the discipline of making computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), covering both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Cloud computing refers to a technology system that accesses a flexibly extensible shared pool of physical or virtual resources through a network, where the resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed on demand in a self-service manner. Cloud computing technology can provide efficient and powerful data processing capacity for technical applications and model training in artificial intelligence, blockchain, and other fields.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in this disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions provided by this disclosure can be achieved, and are not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A blockchain-based oracle off-chain aggregation method applied to an oracle network and performed by a leader node in the oracle network, the method comprising:
acquiring an off-chain data access request generated by a blockchain node;
sending, according to the off-chain data access request, a data request message to each oracle node in the oracle network to request the oracle nodes to execute off-chain data access;
performing a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result; wherein the aggregated data access result comprises the off-chain data access results respectively fed back by oracle nodes meeting the Byzantine quantity requirement;
and feeding back the aggregated data access result to the blockchain node.
2. The method of claim 1, wherein the data request message is denoted as a first request message, and performing a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result comprises:
receiving first response messages responding to the first request message;
if the number of the first response messages meets a first number threshold, carrying the off-chain data access result in each first response message in a second request message and sending the second request message to first target oracle nodes to request the first target oracle nodes to verify the second request message; wherein a first target oracle node is an oracle node that fed back a first response message;
receiving second response messages responding that the second request message passed verification;
and if the number of the second response messages meets a second number threshold, confirming that the aggregated data access result formed by aggregating the off-chain data access results has passed the Byzantine consensus algorithm.
3. The method of claim 2, wherein carrying the off-chain data access result in each first response message in a second request message and sending the second request message to the first target oracle nodes comprises:
carrying the first response messages in the second request message and sending the second request message to the first target oracle nodes; wherein each first response message comprises the off-chain data access result and the signature of the oracle node.
4. The method of claim 2, after receiving second response messages responding that the second request message passed verification, further comprising:
acquiring, from each second response message, the aggregated off-chain data access result and the signature of the oracle node that fed back the second response message, and verifying them.
5. The method of claim 2, wherein determining that the number of the first response messages meets the first number threshold comprises:
determining, within the time range of a set grace period, that the number of the first response messages meets the first number threshold.
6. The method of claim 2, wherein feeding back the aggregated data access result to the blockchain node comprises:
carrying the aggregated data access result and the signatures of second target oracle nodes in a third request message, and sending the third request message to the second target oracle nodes to request the second target oracle nodes to feed back the aggregated data access result and their signatures to the blockchain node through a transmitter node in the oracle network;
wherein a second target oracle node is an oracle node that fed back a second response message.
7. The method of claim 6, wherein the transmitter node is selected by the oracle network after the third request message is generated;
the selection mode comprises round-robin selection or random selection.
8. The method of claim 2, wherein feeding back the aggregated data access result to the blockchain node comprises:
feeding back the aggregated data access result and the signature of each second target oracle node to an oracle contract in the blockchain node; wherein a second target oracle node is an oracle node that fed back a second response message, and the signatures are used by the oracle contract for signature verification and signature count verification.
9. The method of claim 2, wherein the first number threshold is equal to 2f + 1, the second number threshold is equal to f + 1, f is equal to a preset number of Byzantine nodes, f and n satisfy the relationship n > 3f + 1, and n is the total number of oracle nodes in the oracle network.
10. The method of claim 1, wherein acquiring the off-chain data access request generated by the blockchain node comprises:
monitoring event logs in blocks;
if an event log indicating an off-chain access requirement is monitored, reading the off-chain data access request from the block.
11. The method of claim 1, wherein the leader node is determined by the oracle network through random selection.
12. A blockchain-based oracle off-chain aggregation method applied to an oracle network and performed by a common node in the oracle network, the method comprising:
receiving a data request message sent by a leader node in the oracle network, and executing off-chain data access according to the data request message; wherein the data request message is triggered by an off-chain data access request generated by a blockchain node;
performing a Byzantine consensus algorithm with the leader node to determine an aggregated data access result, and feeding back the aggregated data access result to the blockchain node; wherein the aggregated data access result comprises the off-chain data access results fed back by common nodes meeting the Byzantine quantity requirement.
13. The method of claim 12, wherein the data request message is denoted as a first request message, and performing a Byzantine consensus algorithm with the leader node to determine an aggregated data access result comprises:
generating a first response message according to the off-chain data access result, and feeding back the first response message to the leader node;
receiving a second request message sent by the leader node, wherein the second request message carries each first response message received by the leader node;
verifying the signature of each first response message in the second request message;
aggregating the off-chain access results in the verified first response messages to form an aggregated data access result, carrying the aggregated data access result in a second response message, and signing the second response message;
feeding back the second response message to the leader node.
14. The method of claim 12, further comprising:
receiving a third request message sent by the leader node; wherein the third request message carries the aggregated data access result and the signature of each second target oracle node, and a second target oracle node is a common node that fed back a second response message;
and, after the signatures in the third request message pass verification, feeding back the aggregated data access result and the signatures of the second target oracle nodes to the blockchain node through a transmitter node in the oracle network.
15. The method of claim 14, wherein the transmitter node is selected by the oracle network after the third request message is generated;
the selection mode comprises round-robin selection or random selection.
16. A blockchain-based oracle off-chain aggregation apparatus applied to an oracle network and configured at a leader node in the oracle network, the apparatus comprising:
an access request acquisition module, configured to acquire an off-chain data access request generated by a blockchain node;
a request message sending module, configured to send, according to the off-chain data access request, a data request message to each oracle node in the oracle network to request the oracle nodes to execute off-chain data access;
an access result determination module, configured to perform a Byzantine consensus algorithm with the oracle nodes in the oracle network to determine an aggregated data access result; wherein the aggregated data access result comprises the off-chain data access results respectively fed back by oracle nodes meeting the Byzantine quantity requirement;
and an access result feedback module, configured to feed back the aggregated data access result to the blockchain node.
17. A blockchain-based oracle off-chain aggregation apparatus applied to an oracle network and configured at a common node in the oracle network, the apparatus comprising:
a request message receiving module, configured to receive a data request message sent by a leader node in the oracle network, and execute off-chain data access according to the data request message; wherein the data request message is triggered by an off-chain data access request generated by a blockchain node;
an access result determining module, configured to perform a Byzantine consensus algorithm with the leader node to determine an aggregated data access result, and feed back the aggregated data access result to the blockchain node; wherein the aggregated data access result comprises the off-chain data access results fed back by common nodes meeting the Byzantine quantity requirement.
18. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the blockchain-based oracle off-chain aggregation method of any of claims 1-11 or 12-15.
19. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the blockchain-based oracle off-chain aggregation method of any of claims 1-11 or 12-15.
CN202210250975.4A 2022-03-15 2022-03-15 Prediction machine under-chain aggregation method, device, equipment and medium based on block chain Active CN114357495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210250975.4A CN114357495B (en) 2022-03-15 2022-03-15 Prediction machine under-chain aggregation method, device, equipment and medium based on block chain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210250975.4A CN114357495B (en) 2022-03-15 2022-03-15 Prediction machine under-chain aggregation method, device, equipment and medium based on block chain

Publications (2)

Publication Number Publication Date
CN114357495A true CN114357495A (en) 2022-04-15
CN114357495B CN114357495B (en) 2022-06-17

Family

ID=81094807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210250975.4A Active CN114357495B (en) 2022-03-15 2022-03-15 Prediction machine under-chain aggregation method, device, equipment and medium based on block chain

Country Status (1)

Country Link
CN (1) CN114357495B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112003941A (en) * 2020-08-25 2020-11-27 杭州时戳信息科技有限公司 Method, system, node device and storage medium for distributing downlink data request
CN112948900A (en) * 2021-03-31 2021-06-11 工银科技有限公司 Method and device for acquiring data under link applied to block chain system
WO2021179661A1 (en) * 2020-03-13 2021-09-16 腾讯科技(深圳)有限公司 Cross-blockchain data mutual storage method, apparatus and device, and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115206018A (en) * 2022-06-16 2022-10-18 湖南天河国云科技有限公司 Lottery drawing method and lottery drawing equipment based on block chain prediction machine
CN116049319A (en) * 2023-03-07 2023-05-02 天聚地合(苏州)科技股份有限公司 Method and device for acquiring out-of-chain data based on prestige reputation value
CN116049319B (en) * 2023-03-07 2023-07-25 天聚地合(苏州)科技股份有限公司 Method and device for acquiring out-of-chain data based on prestige reputation value
CN116722966A (en) * 2023-07-26 2023-09-08 云南大学 Efficient trusted chain data feeding method based on DAG predictor network
CN116722966B (en) * 2023-07-26 2024-03-12 云南大学 Efficient trusted chain data feeding method based on DAG predictor network

Also Published As

Publication number Publication date
CN114357495B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN114357495B (en) Prediction machine under-chain aggregation method, device, equipment and medium based on block chain
US10997063B1 (en) System testing from production transactions
CN109542718B (en) Service call monitoring method and device, storage medium and server
CN114327803A (en) Method, apparatus, device and medium for accessing machine learning model by block chain
CN114328132A (en) Method, device, equipment and medium for monitoring state of external data source
CN112804333B (en) Exception handling method, device and equipment for out-of-block node and storage medium
CN109728981A (en) A kind of cloud platform fault monitoring method and device
CN114327804B (en) Block chain based distributed transaction processing method, device, equipment and medium
US10135939B2 (en) Method and apparatus for sending delivery notification of network application-related product
CN117651003B (en) ERP information transmission safety monitoring system
CN113742174B (en) Cloud mobile phone application monitoring method and device, electronic equipment and storage medium
CN105530110A (en) Network failure detection method and related network elements
CN112883106A (en) Method, device, equipment and medium for determining out-of-block node of block chain
CN114338051B (en) Method, device, equipment and medium for acquiring random number by block chain
CN116192534A (en) Train control data communication transmission method, device, equipment and storage medium
CN111901174B (en) Service state notification method, related device and storage medium
CN113485862B (en) Method and device for managing service faults, electronic equipment and storage medium
CN113225356B (en) TTP-based network security threat hunting method and network equipment
CN112054926B (en) Cluster management method and device, electronic equipment and storage medium
CN107707383B (en) Put-through processing method and device, first network element and second network element
CN113656239A (en) Monitoring method and device for middleware and computer program product
CN114338536B (en) Scheduling method, device, equipment and medium based on block chain
CN116016265B (en) Message all-link monitoring method, device, system, equipment and storage medium
CN116723111B (en) Service request processing method, system and electronic equipment
US20230089235A1 (en) Transaction processing method and apparatus, medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant