Disclosure of Invention
The invention provides a power blockchain multi-device consensus method, which aims to solve the problem that the consensus mechanisms of blockchains in the prior art involve a large amount of computation and are therefore unsuitable for power equipment.
The technical scheme of the invention is as follows.
A power blockchain multi-device consensus method comprises the following steps:
S1: a central server establishes a power blockchain with a plurality of nodes;
S2: when power dispatching occurs between nodes, the relevant nodes check the dispatching result and upload the dispatching record to the central server after confirming it is correct;
S3: after receiving two different scheduling records, the central server packages the two scheduling records and calculates a feature code, generates a block according to the feature code calculated the previous time and the feature code calculated this time, generates an information tag according to the scheduling records, sends the block and the information tag to a plurality of nodes unrelated to the scheduling records, and deletes one of the scheduling records after receiving a receipt;
S4: after receiving an external query request, the central server or a node parses the query request, searches for information tags meeting the conditions, extracts the feature code in the block corresponding to the information tag, and retrieves the two scheduling records related to the information tag from the relevant nodes;
S5: the central server recalculates the feature code from the two scheduling records and judges whether the feature codes before and after are the same; if they are the same, the two scheduling records are valid; if not, the records are checked against the undeleted scheduling record in the central server to identify the invalid scheduling record, and the corresponding node is removed from the power blockchain.
The invention is a variant of a centre chain: the central server, as a party with no interest at stake, is regarded as absolutely trusted. Power scheduling is carried out between nodes as required and the scheduling records are uploaded; the central server, relying on its strong computing power, calculates the feature codes and generates and issues the blocks, while the nodes are responsible only for storage. When information is traced, a large number of nodes are not needed for consensus verification; the central server performs the judgment.
After a block is generated, the central server adopts an alternating storage mode for the scheduling records, which reduces its own storage requirement by 50%. Meanwhile, each node only needs to store the scheduling records it participated in, and the blocks are stored by other nodes, which saves storage space and reduces the need for communication between nodes. Although the blockchain held by each individual node is incomplete, all the nodes as a whole store the complete blockchain. Therefore, in the whole blockchain, the computation tasks are all placed on the central server, the nodes do not need to perform repeated and useless calculations, the storage tasks are shared between the nodes and the central server, the storage space requirement of each device is reduced, and the blockchain as a whole retains the tracing function.
Preferably, in S2, the step in which, when power scheduling is performed between nodes, the relevant nodes check the scheduling result and upload the scheduling record to the central server after confirming it is correct includes:
after both parties confirm that there is no error, a scheduling record is generated from the record to be confirmed and uploaded to the central server.
Preferably, in S3, the step in which, after receiving every two different scheduling records, the central server packages the two scheduling records and calculates a feature code, generates a block according to the feature code calculated the previous time and the feature code calculated this time, generates an information tag according to the scheduling records, sends the block and the information tag to a plurality of nodes unrelated to the scheduling records, and deletes one of the scheduling records after receiving a receipt includes:
the central server receives the scheduling records sent by the nodes in sequence, wherein scheduling records with identical content are not received repeatedly; after receiving two different scheduling records, the central server packages them and calculates a feature code according to a preset algorithm;
generating a block according to the feature code calculated at the previous time and the feature code calculated at the current time, and generating an information tag according to the scheduling record;
and selecting, from all the nodes, a plurality of nodes unrelated to the scheduling records, and sending the block and the information tag to the selected nodes; after receiving a receipt, the central server deletes one of the two scheduling records.
Preferably, the information tag includes: the relevant nodes, time and scheduling amount of the scheduling records.
Preferably, in S4, the step in which, after the central server or a node receives an external query request, the query request is parsed and information tags meeting the conditions are searched for, the feature code in the block corresponding to the information tag is extracted, and the two scheduling records related to the information tag are retrieved from the relevant nodes includes:
after the central server or a node receives the external query request, parsing the query request to obtain at least two items of information among node, time and scheduling amount, and searching according to the parsed information to obtain the corresponding information tag;
and extracting the feature code in the block corresponding to the information tag, and retrieving the two scheduling records related to the information tag from the relevant nodes.
Preferably, in S5, the step in which the central server recalculates the feature code according to the two scheduling records and judges whether the feature codes before and after are the same, where if they are the same the two scheduling records are valid, and if not the records are checked against the undeleted scheduling record in the central server to identify the invalid scheduling record and the corresponding node is removed from the power blockchain, includes:
the central server recalculates the feature codes by using a preset algorithm according to the two scheduling records;
judging whether the feature codes calculated before and after are the same; if they are the same, the two scheduling records are valid; if not, it is determined that a distrusted node exists, and the distrusted node is removed in the following way:
retrieving the scheduling record corresponding to the block that is stored by the central server and checking it against the scheduling record provided by the node; if they are consistent, the providing node of the other scheduling record packaged at the same time is determined to be the distrusted node, and that node is removed from the power blockchain;
if they are not consistent, the node is determined to be the distrusted node, and it is removed from the power blockchain.
During the query-and-check process, when the feature code recalculated from the scheduling records is inconsistent with the previous feature code, it means that a scheduling record has been changed. By retrieving the copy stored in the central server and checking it, it can be determined whether that scheduling record has been changed: if it has not, the other record must have been changed; if it has, whether the other record has also been changed is judged separately.
Preferably, in the process of checking the scheduling record corresponding to the block stored in the central server against the scheduling record provided by the node, when the result is inconsistent, it is further judged whether the providing node of the other scheduling record packaged at the same time is a distrusted node, and the judging process is:
extracting the other scheduling record from the providing node of the other scheduling record packaged at the same time;
calculating a feature code with the preset algorithm from the scheduling record corresponding to the block that is stored by the central server and the other scheduling record packaged at the same time;
and judging whether this feature code is consistent with the feature code in the block; if not, the providing node of the other scheduling record is judged to be a distrusted node; otherwise it is judged to be a trusted node.
Preferably, in S3, the sending the block and the information tag to a plurality of nodes unrelated to the scheduling record further includes:
defining the time elapsed since the current node last received a block as T, and judging whether T belongs to [L, R];
if yes, the node is selected as a block receiving node, otherwise, the node is not selected as the block receiving node;
wherein the bounds L and R are determined by a hyper-parameter that is set according to the block generation rate and increases as the generation rate increases, a coefficient defined to lie between 0.5 and 1, and n, the total number of nodes in the blockchain.
Through the judgment on T, the invention prevents the block storage pressure on individual nodes from becoming too large, coordinates the storage capacity of the whole blockchain, and improves utilization efficiency.
Preferably, the method further comprises the following step: calculating a reputation value of each node at preset time intervals, marking and warning nodes whose reputation value is lower than a preset reputation value, and removing a node from the power blockchain if its reputation value remains below the preset reputation value for two consecutive preset time intervals.
The invention also discloses an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the power blockchain multi-device consensus method when invoking the computer program in the memory.
The invention also discloses a storage medium, wherein the storage medium stores computer-executable instructions, and when the computer-executable instructions are loaded and executed by a processor, the steps of the power blockchain multi-device consensus method are implemented.
The substantial effects of the invention include:
in the form of a centre chain within the blockchain, the central server bears the main computation tasks, all nodes share the data storage tasks, and the information tracing function is retained through after-the-fact verification, so that the amount of computation, energy consumption and cost are greatly reduced compared with a traditional blockchain, making the method more suitable for devices in a power grid;
after a block is generated, the central server adopts an alternating storage mode for the scheduling records, which reduces its own storage requirement by 50%; meanwhile, each node only needs to store the scheduling records it participated in, the blocks are stored by other nodes, storage space is saved and the need for communication between nodes is reduced, and although the blockchain held by each individual node is incomplete, all the nodes as a whole store the complete blockchain. Therefore, in the whole blockchain, the computation tasks are all placed on the central server, the nodes do not need to perform repeated and useless calculations, the storage tasks are shared between the nodes and the central server, and the storage space requirement of each device is reduced;
regarding the specific tracing function of the blockchain, the invention checks the recalculated feature code against the previous feature code to determine whether a scheduling record has been changed, and further uses the alternately stored scheduling record to determine which scheduling record is inconsistent, thereby realizing the tracing function with a smaller storage space.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions will be clearly and completely described below with reference to the embodiments, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "And/or" merely describes an association between objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "Comprising A, B and C" and "comprising A, B, C" mean that all three of A, B and C are comprised; "comprising A, B or C" means comprising one of A, B and C; "comprising A, B and/or C" means comprising any one, any two, or all three of A, B and C.
The technical solution of the present invention will be described in detail below with specific examples. Embodiments may be combined with each other and descriptions of the same or similar concepts or processes may be omitted in some embodiments.
Example:
Fig. 1 shows a power blockchain multi-device consensus method, which includes the following steps:
S1: the central server establishes a power blockchain with a plurality of nodes.
In this embodiment, a user or a company deploys the power generation equipment that it wants to have participate in power scheduling onto the power blockchain network through Internet of Things devices or gateways, and the participating Internet of Things devices and gateways become nodes in the blockchain. The central server serves as the absolutely trusted party and bears the main computation load of the blockchain.
The blockchain is mainly used for power dispatching. A node with surplus power can initiate a power dispatching transaction by paying the dispatching fee; when the transaction is successfully put on the chain and confirmed, one power dispatch is proved to have been completed and the corresponding successful-dispatch counter value (P) is updated. The power dispatching fee Fin is defined as:
Fin = (G_avg / G_max) × K × C
wherein G_max is the total maximum power generation of all predefined network-access devices in a unit time interval, G_avg is the average power generation of all network-access devices over the latest 100 unit time intervals, K is a cost coefficient, and C is a basic cost. For example, if in a certain unit time interval G_avg is 100,000 kilowatt-hours and G_max is 200,000 kilowatt-hours, the generation ratio is 50%, and the power dispatching fee is therefore half of the product of the cost coefficient K and the basic cost C. According to this definition of Fin, if the amount of power dispatching in the blockchain network is found to decrease, the cost coefficient K can be lowered, which lowers the dispatching fee Fin and encourages more power dispatching; conversely, if power dispatching exceeds the load of the network, the cost coefficient K can be raised, which raises Fin and reduces the load on the network. Dynamically adjusting the dispatching fee makes the whole power dispatching system more flexible and efficient. When the corresponding transaction is successfully put on the chain, indicating that the dispatching transaction is completed, the successful-transaction counter value (P) of the corresponding device is increased by 1, which can serve as one of the bases of the reputation value.
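The fee calculation can be captured in a short sketch. It assumes the reconstructed form Fin = (G_avg / G_max) × K × C; the function and argument names are illustrative and not part of the invention.

```python
def dispatching_fee(g_avg: float, g_max: float, k: float, c: float) -> float:
    """Power dispatching fee Fin, assuming Fin = (G_avg / G_max) * K * C.

    g_avg: average generation of all network-access devices over the
           latest 100 unit time intervals (kWh)
    g_max: total maximum generation of all predefined network-access
           devices in a unit time interval (kWh)
    k:     cost coefficient (raised when the network is overloaded,
           lowered to encourage more dispatching)
    c:     basic cost
    """
    return (g_avg / g_max) * k * c

# Worked example from the text: G_avg = 100,000 kWh and G_max = 200,000 kWh,
# so the fee is half of K * C.
assert dispatching_fee(100_000, 200_000, k=1.0, c=10.0) == 0.5 * 1.0 * 10.0
```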
S2: when power dispatching occurs between nodes, the relevant nodes check the dispatching result and upload the dispatching record to the central server after confirming it is correct, which includes the following:
after both parties confirm that there is no error, a scheduling record is generated from the record to be confirmed and uploaded to the central server. A scheduling record typically includes the relevant nodes, time, scheduling amount, cost, and so on.
For example, if node A dispatches power to node B, then after the dispatch node A and node B each generate a record to be confirmed and send it to the other node; after confirmation, the scheduling record is uploaded to the server.
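As an illustration of this two-party confirmation, the following sketch shows two nodes exchanging records to be confirmed and uploading a single scheduling record only when the contents match; the data-class fields follow the items listed above, while the class and function names are assumptions.

```python
from dataclasses import dataclass, asdict
from typing import Callable

@dataclass
class SchedulingRecord:
    # Fields named in the text: relevant nodes, time, scheduling amount, cost.
    source_node: str
    target_node: str
    timestamp: int
    amount_kwh: float
    fee: float

def confirm_and_upload(record_a: SchedulingRecord,
                       record_b: SchedulingRecord,
                       upload: Callable[[SchedulingRecord], None]) -> bool:
    """Each party sends its record to be confirmed to the other; a scheduling
    record is generated and uploaded only if both versions agree."""
    if asdict(record_a) != asdict(record_b):
        return False          # the two parties disagree, nothing is uploaded
    upload(record_a)          # e.g. an RPC call to the central server
    return True
```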
S3: after receiving every two different scheduling records, the central server packages the two scheduling records and calculates a feature code, generates a block according to the feature code calculated the previous time and the feature code calculated this time, generates an information tag according to the scheduling records, sends the block and the information tag to a plurality of nodes unrelated to the scheduling records, and deletes one of the scheduling records after receiving a receipt, which includes the following:
the central server receives the scheduling records sent by the nodes in sequence, wherein scheduling records with identical content are not received repeatedly; after receiving two different scheduling records, the central server packages them and calculates a feature code according to a preset algorithm;
generating a block according to the feature code calculated at the previous time and the feature code calculated at the current time, and generating an information tag according to the scheduling record;
and selecting, from all the nodes, a plurality of nodes unrelated to the scheduling records, and sending the block and the information tag to the selected nodes; after receiving a receipt, the central server deletes one of the two scheduling records.
The information tag in this embodiment includes: the relevant nodes, time and scheduling amount of the scheduling records.
For example, the central server receives a scheduling record a1 from either node A or node B and a scheduling record a2 from node C or node D, packages them and calculates a feature code according to a preset algorithm, where the preset algorithm may be a hash algorithm or any other preset encryption algorithm. After sending the block and receiving a receipt, the central server deletes one of the two scheduling records, for example it deletes a2 and keeps only a1.
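The packaging and block-generation step can be sketched as follows, assuming the preset algorithm is SHA-256 (the text allows a hash or any other preset encryption algorithm) and that a block simply pairs the previous feature code with the current one; all field names are illustrative.

```python
import hashlib
import json

def feature_code(record_1: dict, record_2: dict) -> str:
    """Package two different scheduling records and compute their feature code,
    here assumed to be the SHA-256 hash of the packaged records."""
    packaged = json.dumps([record_1, record_2], sort_keys=True).encode()
    return hashlib.sha256(packaged).hexdigest()

def make_block(prev_code: str, curr_code: str) -> dict:
    """Generate a block from the previously calculated feature code and the
    feature code calculated this time."""
    return {"prev_feature_code": prev_code, "feature_code": curr_code}

def make_info_tag(record_1: dict, record_2: dict) -> dict:
    """The information tag carries the relevant nodes, time and scheduling amount."""
    return {
        "nodes": sorted({record_1["source_node"], record_1["target_node"],
                         record_2["source_node"], record_2["target_node"]}),
        "times": [record_1["timestamp"], record_2["timestamp"]],
        "amounts": [record_1["amount_kwh"], record_2["amount_kwh"]],
    }

# After the block and tag are sent to nodes unrelated to a1/a2 and a receipt is
# returned, the central server keeps only one of the two records (e.g. a1).
```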
S4: after the central server or a node receives an external query request, the query request is parsed and information tags meeting the conditions are searched for, the feature code in the block corresponding to the information tag is extracted, and the two scheduling records related to the information tag are retrieved from the relevant nodes, which includes the following:
after the central server or a node receives the external query request, parsing the query request to obtain at least two items of information among node, time and scheduling amount, and searching according to the parsed information to obtain the corresponding information tag;
and extracting the feature code in the block corresponding to the information tag, and retrieving the two scheduling records related to the information tag from the relevant nodes.
For example, the scheduling amount and time are obtained by parsing the query request, the relevant information tag is queried, and the block containing the feature code is determined, so that the relevant nodes A and B and nodes C and D are found; scheduling record a1 is then retrieved from node A or B, and scheduling record a2 is retrieved from node C or D.
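A simple sketch of the tag lookup, assuming tags are produced by the make_info_tag sketch above and kept in a plain in-memory list; the requirement of at least two matching items (node, time, scheduling amount) follows the text.

```python
from typing import Optional

def find_tag(tags: list, node: Optional[str] = None,
             time: Optional[int] = None,
             amount: Optional[float] = None) -> Optional[dict]:
    """Return the first information tag matching at least two of the parsed
    query fields; the matching tag identifies the block and the relevant nodes
    from which the two scheduling records are then retrieved."""
    for tag in tags:
        hits = 0
        hits += node is not None and node in tag["nodes"]
        hits += time is not None and time in tag["times"]
        hits += amount is not None and amount in tag["amounts"]
        if hits >= 2:
            return tag
    return None
```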
S5: the central server recalculates the feature code according to the two scheduling records and judges whether the feature codes before and after are the same; if they are the same, the two scheduling records are valid; if not, the records are checked against the undeleted scheduling record in the central server to identify the invalid scheduling record, and the corresponding node is removed from the power blockchain; this includes the following:
the central server recalculates the feature codes by using a preset algorithm according to the two scheduling records;
judging whether the feature codes calculated before and after are the same; if they are the same, the two scheduling records are valid; if not, it is determined that a distrusted node exists, and the distrusted node is removed in the following way:
retrieving the scheduling record corresponding to the block that is stored by the central server and checking it against the scheduling record provided by the node; if they are consistent, the providing node of the other scheduling record packaged at the same time is determined to be the distrusted node, and that node is removed from the power blockchain;
if they are not consistent, the node is determined to be the distrusted node, and it is removed from the power blockchain.
In this embodiment, when the feature code recalculated from the scheduling records is inconsistent with the previous feature code, it indicates that a scheduling record has been changed. By retrieving the copy stored in the central server and checking it, it can be determined whether that scheduling record has been changed: if it has not, the other record must have been changed; if it has, whether the other record has also been changed is judged separately.
For example, for the newly retrieved scheduling records a1 and a2, the previous preset algorithm is used for calculation; if the resulting feature code is inconsistent with the feature code in the block, it is determined that a distrusted node exists. Since only a1 was previously stored in the central server, the newly retrieved a1 is compared with the previously stored a1: if they are consistent, the node currently providing a2 is the distrusted node; if they are inconsistent, the node currently providing a1 is the distrusted node.
In addition, in the process of checking the scheduling record corresponding to the block stored in the central server against the scheduling record provided by the node, when the result is inconsistent, it is further judged whether the providing node of the other scheduling record packaged at the same time is a distrusted node, and the judging process is:
extracting the other scheduling record from the providing node of the other scheduling record packaged at the same time;
calculating a feature code with the preset algorithm from the scheduling record corresponding to the block that is stored by the central server and the other scheduling record packaged at the same time;
and judging whether this feature code is consistent with the feature code in the block; if not, the providing node of the other scheduling record is judged to be a distrusted node; otherwise it is judged to be a trusted node.
For example, in the case where a1 is judged inconsistent, the a1 stored in the central server and the a2 currently provided are fed to the preset algorithm, so as to further judge whether a2 is consistent with its original version.
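Putting the checks of S5 together, the following sketch identifies the distrusted provider, reusing the SHA-256 feature-code assumption from the block-generation sketch; a1_stored is the copy the central server kept under alternating storage, and a fixed record ordering inside the package is assumed.

```python
import hashlib
import json

def feature_code(record_1: dict, record_2: dict) -> str:
    # Same assumed SHA-256 packaging as in the block-generation sketch.
    packaged = json.dumps([record_1, record_2], sort_keys=True).encode()
    return hashlib.sha256(packaged).hexdigest()

def check_records(block: dict, a1_retrieved: dict, a2_retrieved: dict,
                  a1_stored: dict) -> str:
    """Return which provider, if any, is distrusted and should be removed
    from the power blockchain."""
    if feature_code(a1_retrieved, a2_retrieved) == block["feature_code"]:
        return "none"                 # both scheduling records are still valid
    if a1_retrieved == a1_stored:
        return "provider_of_a2"       # a1 unchanged, so a2 must have been changed
    # a1 was changed; judge separately whether a2 was also changed by
    # recomputing the feature code from the stored a1 and the provided a2.
    if feature_code(a1_stored, a2_retrieved) == block["feature_code"]:
        return "provider_of_a1"       # a2 matches its original version
    return "provider_of_a1_and_a2"    # both records were changed
```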
In addition, in S3, sending the block and the information tag to a plurality of nodes unrelated to the scheduling records further includes:
defining the time elapsed since the current node last received a block as T, and judging whether T belongs to [L, R];
if yes, the node is selected as a block receiving node, otherwise, the node is not selected as the block receiving node;
wherein the bounds L and R are determined by a hyper-parameter that is set according to the block generation rate and increases as the generation rate increases, a coefficient defined to lie between 0.5 and 1, and n, the total number of nodes in the blockchain.
In this embodiment, the judgment on T prevents the block storage pressure on individual nodes from becoming too large, so that the storage capacity of the whole blockchain can be coordinated and utilization efficiency improved.
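The selection rule can be sketched as follows. The patent's explicit expressions for L and R do not survive in this text, so the bounds below are placeholders; only the stated dependencies are kept: a hyper-parameter that grows with the block generation rate, a coefficient between 0.5 and 1, and the total number of nodes n.

```python
def is_receiving_node(t_since_last_block: float, alpha: float, beta: float,
                      n: int) -> bool:
    """Select a node as a block-receiving node when the time T since it last
    received a block falls inside [L, R].

    alpha: hyper-parameter set according to the block generation rate
           (increases as the generation rate increases)
    beta:  coefficient defined to lie between 0.5 and 1
    n:     total number of nodes in the blockchain
    The expressions for L and R below are illustrative placeholders only.
    """
    assert 0.5 <= beta <= 1 and n > 0
    l_bound = alpha * beta        # placeholder lower bound L
    r_bound = alpha * n           # placeholder upper bound R
    return l_bound <= t_since_last_block <= r_bound
```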
Meanwhile, in this embodiment, the reputation value of each node is calculated every preset time interval, nodes whose reputation value is lower than the preset reputation value are marked and warned, and a node whose reputation value remains below the preset reputation value for two consecutive preset time intervals is removed from the power blockchain.
In this embodiment, the reputation values of the users participating in power scheduling are quantified, so that users with high reputation values obtain the accounting right as far as possible and participate in more power scheduling, forming a virtuous cycle and improving the stability of the blockchain network.
The reputation value is calculated in the following manner:
First, the transaction power deviation of the distributed power supply is calculated. This deviation needs to be introduced because the load predicted by a user deviates from the actual load, and the power generated by a distributed power supply is not fixed and varies with natural factors such as the weather. The deviation is calculated from the electrical energy actually consumed in the distributed power supply transaction and the electrical energy the user predicted would be delivered according to the load prediction and the output of the distributed power supply; the resulting transaction quality is distributed between 0 and 1.
Next, the individual contribution value of each user device is calculated. The contribution of a device over a time period t is determined for node i from the power plant deviation values of the corresponding transactions during the trading period and from P, the total number of successful transactions in which the device has participated.
Finally, the total reputation value of the user is calculated from the contribution values of the user's devices, wherein n is the number of power generation facilities owned by the same user and C is the total reputation value of that user. In the blockchain, one user is bound to one node.
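For illustration only, the reputation computation can be sketched as below. The patent's actual formulas for the deviation, the per-device contribution and the total value are not reproduced in this text, so the expressions used here are assumptions that only respect the stated dependencies: a deviation between predicted and actually consumed energy, a transaction quality in [0, 1], the number of successful transactions P per device, and aggregation over a user's n devices.

```python
def transaction_quality(actual_kwh: float, predicted_kwh: float) -> float:
    """Assumed mapping from the transaction power deviation to a quality in [0, 1]."""
    deviation = abs(actual_kwh - predicted_kwh) / max(predicted_kwh, 1e-9)
    return max(0.0, 1.0 - deviation)

def device_contribution(qualities: list) -> float:
    """Assumed contribution of one device over a period t: the mean quality of
    its P successful transactions (P = len(qualities))."""
    return sum(qualities) / len(qualities) if qualities else 0.0

def user_reputation(per_device_qualities: list) -> float:
    """Assumed total reputation C of a user, aggregated over the contribution
    values of the user's n power generation facilities."""
    return sum(device_contribution(q) for q in per_device_qualities)
```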
In addition, this embodiment further includes an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above power blockchain multi-device consensus method when invoking the computer program in the memory.
This embodiment also includes a storage medium in which computer-executable instructions are stored; when the computer-executable instructions are loaded and executed by a processor, the steps of the above power blockchain multi-device consensus method are implemented.
The substantial effects of the present embodiment include:
in the form of a centre chain within the blockchain, the central server bears the main computation tasks, all nodes share the data storage tasks, and the information tracing function is retained through after-the-fact verification, so that the amount of computation, energy consumption and cost are greatly reduced compared with a traditional blockchain, making the method more suitable for devices in a power grid;
after a block is generated, the central server adopts an alternating storage mode for the scheduling records, which reduces its own storage requirement by 50%; meanwhile, each node only needs to store the scheduling records it participated in, the blocks are stored by other nodes, storage space is saved and the need for communication between nodes is reduced, and although the blockchain held by each individual node is incomplete, all the nodes as a whole store the complete blockchain. Therefore, in the whole blockchain, the computation tasks are all placed on the central server, the nodes do not need to perform repeated and useless calculations, the storage tasks are shared between the nodes and the central server, and the storage space requirement of each device is reduced;
regarding the specific tracing function of the blockchain, this embodiment checks the recalculated feature code against the previous feature code to determine whether a scheduling record has been changed, and further uses the alternately stored scheduling record to determine which scheduling record is inconsistent, thereby realizing the tracing function with a smaller storage space.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of a specific device is divided into different functional modules to complete all or part of the above described functions.
In the embodiments provided in this application, it should be understood that the disclosed structures and methods may be implemented in other ways. For example, the structural embodiments described above are merely illustrative; the division into modules or units is merely a division by logical function, and other division manners are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another structure, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, structures or units, and may be electrical, mechanical or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a separate product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.