CN114860400A - Off-chain processing method and device for blockchain tasks - Google Patents
- Publication number: CN114860400A
- Application number: CN202210473040.2A
- Authority
- CN
- China
- Prior art keywords
- task
- blockchain
- block chain
- node
- engine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5061—Partitioning or combining of resources
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
Abstract
This specification provides a method and an apparatus for off-chain processing of blockchain tasks. The method is applied to an off-chain computing node corresponding to a blockchain node and includes: obtaining a cached historical blockchain task after startup is completed, where the historical blockchain task was generated by the blockchain node before the off-chain computing node stopped; obtaining a current blockchain task generated by the blockchain node, where the current blockchain task was generated by the blockchain node after the off-chain computing node stopped; and processing the historical blockchain task and the current blockchain task in parallel.
Description
Technical Field
The embodiments of this specification belong to the technical field of blockchains, and in particular relate to a method and an apparatus for off-chain processing of blockchain tasks.
Background
A blockchain is a novel application of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and cryptographic algorithms. In a blockchain system, data blocks are connected one after another in chronological order into a chained data structure, and cryptography guarantees a distributed ledger that cannot be tampered with or forged.
Blockchain tasks generated by a blockchain network may be processed by an off-chain computing node, but the off-chain computing node may stop for some reason while performing those tasks. In the related art, after its startup is completed, the off-chain computing node processes the old tasks left unfinished before the shutdown and the new tasks generated after the shutdown sequentially, in the chronological order in which the blockchain tasks were generated.
Disclosure of Invention
An object of this specification is to provide a method and an apparatus for off-chain processing of blockchain tasks.
According to a first aspect of one or more embodiments of this specification, a method for off-chain processing of blockchain tasks is provided, applied to an off-chain computing node corresponding to a blockchain node, the method including:
obtaining a cached historical blockchain task after startup is completed, where the historical blockchain task was generated by the blockchain node before the off-chain computing node stopped;
obtaining a current blockchain task generated by the blockchain node, where the current blockchain task was generated by the blockchain node after the off-chain computing node stopped;
and processing the historical blockchain task and the current blockchain task in parallel.
According to a second aspect of one or more embodiments of this specification, an apparatus for off-chain processing of blockchain tasks is provided, applied to an off-chain computing node corresponding to a blockchain node, the apparatus including:
a historical task obtaining unit, configured to obtain a cached historical blockchain task after startup is completed, where the historical blockchain task was generated by the blockchain node before the off-chain computing node stopped;
a current task obtaining unit, configured to obtain a current blockchain task generated by the blockchain node, where the current blockchain task was generated by the blockchain node after the off-chain computing node stopped;
and a parallel processing unit, configured to process the historical blockchain task and the current blockchain task in parallel.
According to a third aspect of one or more embodiments of the present specification, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to any one of the first aspect by executing the executable instructions.
According to a fourth aspect of one or more embodiments of the present description, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to any one of the first aspect.
In the above scheme, after the off-chain computing node corresponding to the blockchain node is started, it can separately obtain the historical blockchain task that was generated by the blockchain node and cached before the off-chain computing node stopped, and the current blockchain task generated after the off-chain computing node stopped, and then process the historical blockchain task and the current blockchain task in parallel.
It can be understood that the historical blockchain task is an old task that the off-chain computing node acquired before the shutdown but had not finished processing, and the current blockchain task is a new task acquired after the shutdown. Compared with the related-art scheme of processing old tasks first, processing the historical blockchain tasks and the current blockchain tasks in parallel shortens the interval between the off-chain computing node acquiring a current blockchain task and starting to process it, that is, it shortens the waiting time of the current blockchain task. This improves the execution efficiency of current blockchain tasks and helps to prevent the current blockchain tasks generated after the shutdown from piling up because they cannot be processed in time.
Drawings
To explain the technical solutions of the embodiments of this specification more clearly, the drawings required in the description of the embodiments are briefly introduced below. The drawings described below are only some of the embodiments recorded in this specification, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an off-chain computing node according to an exemplary embodiment.
Fig. 2 is a flowchart of a method for off-chain processing of a blockchain task according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a parallel processing procedure for blockchain tasks according to an exemplary embodiment.
Fig. 4 is a schematic structural diagram of a device according to an exemplary embodiment.
Fig. 5 is a block diagram of an apparatus for off-chain processing of a blockchain task according to an exemplary embodiment.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings in those embodiments. The described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person skilled in the art based on the embodiments in this specification without inventive effort shall fall within the scope of protection of this specification.
As described above, in the related art, after its startup is completed the off-chain computing node generally executes the historical blockchain tasks and the current blockchain tasks sequentially, in the chronological order in which the blockchain tasks were generated. A historical blockchain task is generated by the blockchain node before the off-chain computing node stops, a current blockchain task is generated by the blockchain node after the off-chain computing node stops, and the generation time of a historical blockchain task is therefore earlier than that of a current blockchain task. Consequently, if the related-art approach of processing old tasks first is adopted, a current blockchain task is processed only after all historical blockchain tasks have been processed.
Obviously, this approach may make the waiting time of current blockchain tasks too long. For example, when there are many historical blockchain tasks or their processing is slow, finishing all historical blockchain tasks takes a long time, so a current blockchain task has to wait a long time before it is processed. In the related art, processing historical blockchain tasks first therefore tends to make current blockchain tasks wait too long, and may even cause a large number of current blockchain tasks to pile up because they cannot be processed in time, which seriously affects their execution efficiency.
To solve the above problems in the related art, this specification provides a method for off-chain processing of blockchain tasks, which shortens the waiting time of current blockchain tasks by processing historical blockchain tasks and current blockchain tasks in parallel. The method can be applied to an off-chain computing node corresponding to a blockchain node. The scheme is described in detail below with reference to the drawings.
The off-chain computing node in the embodiments of this specification may be composed of several functional modules. Its structure is first described with reference to Fig. 1. As shown in Fig. 1, the blockchain network is composed of a plurality of blockchain nodes, such as node A to node E, and any blockchain node may have a corresponding off-chain computing node. Taking node A as an example, the corresponding off-chain computing node a may include a scheduling engine and at least one computing engine framework. For example, the scheduling engine is connected to n computing engine frameworks, namely computing engine frameworks 1 to n. Any computing engine framework, together with the m computing units it manages, forms a computing engine. For example, computing engine 1 includes computing engine framework 1 and computing units 11 to 13 (m = 3 in this case), computing engine 2 includes computing engine framework 2 and computing units 21 to 22 (m = 2 in this case), ..., and computing engine n includes computing engine framework n and computing units n1 to nm. It can be seen that any off-chain computing node includes a scheduling engine, at least one computing engine framework and at least one computing unit, and the scheduling engine can schedule each computing unit to execute the corresponding blockchain task. Both n and m are integers greater than or equal to 1.
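For illustration only, the hierarchy described above (one scheduling engine per off-chain computing node, n computing engine frameworks, each managing m computing units) can be sketched with the following hypothetical Java types. The class and field names are assumptions introduced for this sketch and do not appear in the patent.

```java
import java.util.List;

// Hypothetical sketch of the off-chain computing node structure of Fig. 1.
// One scheduling engine corresponds to n computing engine frameworks; each
// framework together with its m computing units forms one computing engine.
class ComputeUnit {
    String serviceId;      // identifier assigned by the service center at registration
    String computeType;    // e.g. forwarding, MFT, privacy computation, data query
    String accessAddress;  // address used to invoke the unit
}

class ComputeEngineFramework {
    String serviceId;
    List<ComputeUnit> managedUnits;   // the m computing units managed by this framework
}

class SchedulingEngine {
    String serviceId;
    // A framework plus its managed units is one "computing engine" that the
    // scheduling engine can dispatch blockchain tasks to.
    List<ComputeEngineFramework> availableFrameworks;
}

class OffChainComputeNode {
    SchedulingEngine schedulingEngine;             // exactly one per off-chain node
    List<ComputeEngineFramework> engineFrameworks; // frameworks 1..n
}
```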
The off-chain computing node corresponding to any blockchain node may be a computing entity formed, at the logical level, by the corresponding scheduling engine, computing engine framework and computing units; at the physical level, the components of the off-chain computing node may be deployed in the same physical device or in different physical devices, which is not limited by the embodiments of this specification. For example, the off-chain computing node a may be deployed in the node device to which the corresponding blockchain node belongs. In this case, the scheduling engine, the computing engine frameworks and the computing units included in the off-chain computing node may be functional modules running in that node device. Alternatively, the scheduling engine and each computing engine framework may run as functional modules in the node device while the computing units managed by each framework run in other computing devices, so that the framework invokes the computing units remotely. As another example, the off-chain computing node a may also be deployed in a computing device other than the node device, and the corresponding computing engine frameworks and computing units may be deployed in the node device or in other node devices, which is not repeated here.
It can be understood that the scheduling engine, each computing engine framework and each computing unit in the off-chain computing node can all be regarded as functional modules available for the blockchain node to call. To manage these functional modules efficiently, they may be registered with a unified service center in advance. The functional modules included in the off-chain computing nodes corresponding to the respective blockchain nodes in the blockchain network may each be registered with the service center. A computing unit may register information such as its compute type and access address with the service center, and the service center may assign it a service identifier for identification. After the registration of any computing unit is completed, the service center maintains its registration information. Similarly, the scheduling engines and computing engine frameworks in the off-chain computing nodes may be registered with the service center; all of these are referred to as registered services managed by the service center. The computing engine frameworks and computing units that belong to an off-chain computing node and have been registered with the service center are the available services of that node's scheduling engine and can be scheduled by it to execute blockchain tasks. In other words, the scheduling engine may schedule a received blockchain task to any computing unit managed by any computing engine framework corresponding to the scheduling engine.
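A minimal sketch of such a registration flow is given below, assuming a hypothetical ServiceCenter API; the method names and the shape of the registration information are illustrative assumptions, not part of the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical service center that assigns service identifiers and keeps
// registration information for scheduling engines, frameworks and compute units.
class ServiceCenter {
    private final Map<String, Map<String, String>> registry = new ConcurrentHashMap<>();
    private final AtomicLong idSeq = new AtomicLong();

    // Register a functional module (compute unit, framework or scheduling engine)
    // with, e.g., its compute type and access address; return the assigned service id.
    String register(Map<String, String> registrationInfo) {
        String serviceId = "svc-" + idSeq.incrementAndGet();
        registry.put(serviceId, registrationInfo);
        return serviceId;
    }

    Map<String, String> lookup(String serviceId) {
        return registry.get(serviceId);
    }
}
```

In this sketch a compute unit would call register() with its compute type and access address, and the scheduling engine would then treat the registered frameworks and units of its own node as its available services.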
The processing procedure of a blockchain task (that is, the scheduling and execution procedure) is described below with reference to the method for off-chain processing of blockchain tasks shown in Fig. 2. Fig. 2 is a flowchart of an exemplary method for off-chain processing of a blockchain task, applied to the off-chain computing node corresponding to a blockchain node. As shown in Fig. 2, the method includes steps 202 to 206.
In an embodiment, the blockchain node and the off-chain computing node may be deployed in various ways. For example, the blockchain node and the off-chain computing node may run in different processes of the same node device: the node device runs the blockchain node in one process and the off-chain computing node in another. Because the blockchain node and the off-chain computing node are in the same node device, the time required for data interaction between them is reduced as much as possible; and because they belong to different processes, they run independently of each other and do not interfere with each other.
As another example, where the off-chain computing node includes a scheduling engine and a computing engine composed of a computing engine framework and the computing units it manages, the blockchain node and the scheduling engine may be located in different processes of the same node device, while the computing engine framework and its computing units may be located in the same computing device or in different computing devices other than the node device. This deployment makes the off-chain computing node and the blockchain node independent of each other, makes the functional modules within the off-chain computing node independent of each other, and leaves the off-chain computing node existing only at the logical level. It also facilitates cross-scheduling of computing units by different off-chain computing nodes: one computing unit may be scheduled by one off-chain computing node to execute one blockchain task and by another off-chain computing node to execute another blockchain task. The execution periods of the two blockchain tasks may not overlap (the computing unit executes the tasks serially) or may overlap (the computing unit executes the tasks in parallel), which is not limited by the embodiments of this specification.
In the embodiments of this specification, the current startup of the off-chain computing node may be a normal startup or a restart after an abnormal shutdown (also called downtime); the restart after an abnormal shutdown is taken as an example below. Since the off-chain computing node is essentially a functional module running in a computing device, its stopping may refer to the functional module itself stopping or to the computing device stopping as a whole, which is not limited by the embodiments of this specification.
Likewise, since the scheduling engine is essentially a functional module running in a computing device (which may be the node device), its stopping may refer to the functional module itself stopping or to the computing device stopping as a whole, which is not limited by the embodiments of this specification. Because the off-chain computing node contains only one scheduling engine, it cannot process blockchain tasks normally once the scheduling engine stops; the off-chain computing node where the scheduling engine is located is therefore considered to have stopped.
Step 202: obtaining a cached historical blockchain task after startup is completed, where the historical blockchain task is generated by the blockchain node before the off-chain computing node stopped.
A blockchain node in the blockchain network may generate blockchain tasks that need to be processed off-chain by the off-chain computing node; for example, a blockchain node may generate a blockchain task by executing a smart contract or a blockchain transaction. The off-chain computing node may obtain and process such tasks. As described above, the off-chain computing node may include a scheduling engine and a computing engine composed of a computing engine framework and computing units. Under this structure, processing any blockchain task includes the scheduling engine scheduling the task to the corresponding computing unit for execution and returning the execution result to the blockchain node. If the execution result of a blockchain task has not yet been successfully returned to the blockchain node (for example, the task has been acquired by the scheduling engine but not yet scheduled to a computing unit, the task has been scheduled to a computing unit but its execution has not finished, or the computing unit has returned the execution result to the scheduling engine but the scheduling engine has not yet successfully returned it to the blockchain node), the blockchain task is considered not yet processed to completion.
Given that processing takes a certain amount of time, and in order to manage the unfinished blockchain tasks effectively, the off-chain computing node may cache the acquired blockchain tasks, for example in the local storage space of the computing device or in other storage space the off-chain computing node is allowed to access. The scheduling engine may maintain a task list for recording the blockchain tasks that it has acquired and whose execution results have not yet been successfully returned to the blockchain node. When the scheduling engine acquires a blockchain task, it records the task in the task list; when the execution result of the task has been returned to the blockchain node, the scheduling engine may delete the task from the task list. Alternatively, before the execution result of a blockchain task is successfully returned to the blockchain node, its task state in the task list may be set to an unfinished state, and the state is changed to a finished state once the result is successfully returned. The embodiments of this specification do not limit the specific form in which blockchain tasks are recorded.
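As a minimal sketch of the task list just described, the following hypothetical Java class records each acquired task and removes it (or flips a state flag) only after the result has been returned to the blockchain node; the class and method names are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical task list kept by the scheduling engine: it records every acquired
// blockchain task whose execution result has not yet been successfully returned
// to the blockchain node.
class TaskList {
    enum State { UNFINISHED, FINISHED }

    private final Map<String, State> tasks = new ConcurrentHashMap<>();

    void onTaskAcquired(String taskId) {
        tasks.put(taskId, State.UNFINISHED);   // record when the task is acquired
    }

    void onResultReturned(String taskId) {
        tasks.remove(taskId);                  // variant 1: delete the entry, or
        // tasks.put(taskId, State.FINISHED);  // variant 2: flip the state flag
    }

    // Entries still present (or still UNFINISHED) after a restart are the cached
    // historical blockchain tasks that must be re-processed.
    Map<String, State> unfinishedTasks() {
        return tasks;
    }
}
```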
At the time of the shutdown, the off-chain computing node may not have finished processing the blockchain tasks it had acquired and cached. For example, the task list may still contain blockchain tasks that have not been processed to completion, so after startup is completed the off-chain computing node can retrieve such tasks from the list in order to continue processing them. In addition, the started off-chain computing node also needs to obtain and process the blockchain tasks generated by the blockchain node after the off-chain computing node stopped. To distinguish the blockchain tasks that the off-chain computing node needs to obtain, the embodiments of this specification take the shutdown as the dividing point: a blockchain task generated before the scheduling engine stopped is called a historical blockchain task, and a blockchain task generated by the blockchain node after the scheduling engine stopped is called a current blockchain task.
In summary, after the current startup is completed, the off-chain computing node can obtain the historical blockchain tasks generated and cached before the shutdown, and can obtain the current blockchain tasks generated after the shutdown. In other words, the historical blockchain tasks and the current blockchain tasks can each be obtained independently by the off-chain computing node after startup, and there is no required order between the two. Although Fig. 2 describes the two acquisition processes as step 202 and step 204 respectively, they may be carried out in parallel in time to improve the efficiency of acquiring blockchain tasks.
After startup is completed, the off-chain computing node may determine the tasks recorded in the task list of the scheduling engine (or the tasks recorded there in an unfinished state) as the historical blockchain tasks. Alternatively, the historical blockchain tasks may be read from the storage space of the off-chain computing node and recorded in the task list maintained by the scheduling engine, which is not repeated here.
In an embodiment, when a historical blockchain task is obtained, the scheduling engine may further send a status query request to the blockchain node and determine, from the query result returned by the blockchain node, whether the historical blockchain task still needs to be executed: if the query result indicates that the historical blockchain task is in an unfinished state, the task is treated as a pending task, for example the subsequent processing flow for the task can be triggered.
Of course, the query result may also indicate that the historical blockchain task is in a finished state, meaning that the blockchain node no longer needs the scheduling engine to process the task to obtain an execution result, so the scheduling engine does not need to process it. In response to a query result indicating that the historical blockchain task is in a finished state, the scheduling engine may therefore refrain from processing it, for example by terminating its processing and deleting it. This not only avoids invalid processing of the historical blockchain task but also saves the time and resources used for processing current blockchain tasks, thereby improving the overall task-processing efficiency of the off-chain computing node.
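The following sketch illustrates this status check, assuming a hypothetical query interface on the blockchain node; the interface and its method are assumptions for illustration only.

```java
// Hypothetical status check performed for each cached historical blockchain task
// after startup: only tasks the blockchain node still reports as unfinished are
// kept for processing; finished ones are dropped to avoid invalid work.
interface BlockchainNodeClient {
    boolean isTaskCompleted(String taskId);   // assumed query interface, not from the patent
}

class HistoricalTaskFilter {
    private final BlockchainNodeClient node;

    HistoricalTaskFilter(BlockchainNodeClient node) {
        this.node = node;
    }

    boolean shouldProcess(String historicalTaskId) {
        // If the node reports the task as finished, terminate/delete it;
        // otherwise treat it as a pending task and trigger the normal flow.
        return !node.isTaskCompleted(historicalTaskId);
    }
}
```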
Step 204: acquiring a current blockchain task generated by the blockchain node, where the current blockchain task is generated by the blockchain node after the off-chain computing node stopped.
In the embodiments of this specification, the current blockchain tasks generated by the blockchain node after the off-chain computing node stopped can be further divided into two types by generation time: blockchain tasks generated between the shutdown of the off-chain computing node and the completion of the current startup, and blockchain tasks generated after the current startup of the off-chain computing node. It should be noted that these two types differ only in generation time; there is no essential difference in how the off-chain computing node processes them.
The off-chain computing node may acquire any current blockchain task by listening for events. For example, while executing a blockchain transaction or a smart contract, the blockchain node may generate a task allocation event containing the current blockchain task, so the scheduling engine can acquire the task by listening for that event. The off-chain computing node may listen for the task allocation events generated by the blockchain node and, in response to hearing such an event, determine the participants of the blockchain task contained in it. If the task is allocated to the blockchain member to which the blockchain node belongs (that is, the blockchain node is a participant of the task), the task is determined to be a current blockchain task. This listening process may be implemented by the scheduling engine in the off-chain computing node, that is, the scheduling engine may acquire current blockchain tasks by listening for task allocation events.
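A minimal sketch of such an event listener is shown below, assuming a hypothetical event shape with a task identifier and a participant set; the field and class names are illustrative assumptions.

```java
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical task-allocation event emitted when a blockchain node executes a
// transaction or smart contract; field names are assumptions for illustration.
class TaskAllocationEvent {
    String taskId;
    Set<String> participants;   // blockchain members assigned to the task
}

class TaskEventListener {
    private final String localMemberId;            // member the local blockchain node belongs to
    private final Consumer<String> onCurrentTask;  // hands accepted tasks to the scheduling engine

    TaskEventListener(String localMemberId, Consumer<String> onCurrentTask) {
        this.localMemberId = localMemberId;
        this.onCurrentTask = onCurrentTask;
    }

    void onEvent(TaskAllocationEvent event) {
        // Accept the task only if the local node is one of its participants.
        if (event.participants.contains(localMemberId)) {
            onCurrentTask.accept(event.taskId);
        }
    }
}
```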
It can be understood that the process by which the scheduling engine acquired the historical blockchain task before the shutdown (at which point it was not yet a historical blockchain task) is essentially the same as the process of acquiring a current blockchain task after startup is completed, so it is not repeated here. In fact, the historical blockchain tasks and the current blockchain tasks described in the embodiments of this specification differ only in generation timing and acquisition manner; the scheduling engine processes them in essentially the same way.
Step 206: processing the historical blockchain task and the current blockchain task in parallel.
In the above scheme, after the off-chain computing node corresponding to the blockchain node is started, it can separately obtain the historical blockchain task that was generated by the blockchain node and cached before the off-chain computing node stopped, and the current blockchain task generated after the off-chain computing node stopped, and then process the historical blockchain task and the current blockchain task in parallel.
It can be understood that the historical blockchain task is an old task that the off-chain computing node acquired before the shutdown but had not finished processing, and the current blockchain task is a new task acquired after the shutdown. Compared with the related-art scheme of processing old tasks first, processing the historical blockchain tasks and the current blockchain tasks in parallel shortens the interval between the off-chain computing node acquiring a current blockchain task and starting to process it, effectively shortening the waiting time of the current blockchain task. This improves the execution efficiency of current blockchain tasks and helps to prevent the current blockchain tasks generated after the shutdown from piling up because they cannot be processed in time.
It can be understood that the historical blockchain tasks acquired in the above manner form a historical task set, and the current blockchain tasks acquired form a current task set. In an embodiment, the off-chain computing node may alternately select historical blockchain tasks and current blockchain tasks from the historical task set and the current task set, and execute the selected tasks sequentially in the order in which they were selected, thereby effectively achieving parallel processing of historical and current blockchain tasks. For the historical task set, the off-chain computing node may select the historical blockchain tasks one by one at random to simplify selection, or it may select them in the chronological order of their generation so that the execution order of the historical blockchain tasks matches their generation order. Similarly, for the current task set, the off-chain computing node may select the current blockchain tasks one by one at random, or select them in the chronological order of their generation.
In an embodiment, each acquired blockchain task may be placed in a corresponding task queue so that parallel processing is realized on the basis of the queues. For example, the off-chain computing node may put the historical blockchain tasks into a first queue one by one in the chronological order of their generation, and put the current blockchain tasks into a second queue one by one in the chronological order of their generation; it may then acquire historical blockchain tasks and current blockchain tasks from the first queue and the second queue respectively, and process the acquired tasks sequentially in the order of acquisition.
Regarding this approach, on one hand, the fixed ordering of a queue is exploited: the historical blockchain tasks are put into the first queue in their generation order, so they are arranged chronologically in the first queue; likewise, the current blockchain tasks are put into the second queue in their generation order, so they are arranged chronologically in the second queue. The historical blockchain tasks that the off-chain computing node acquires and processes in sequence from the first queue therefore follow chronological order, as do the current blockchain tasks acquired and processed in sequence from the second queue. The processing order of blockchain tasks of the same type thus matches their generation order, which effectively avoids the processing errors that could result from a mismatch between the two. On the other hand, because historical blockchain tasks and current blockchain tasks are acquired alternately and processed in the order of acquisition, the two types of tasks are processed alternately, which effectively realizes parallel processing of blockchain tasks.
To avoid task congestion caused by too many blockchain tasks of the same type, number thresholds may be set for the first queue and the second queue. In other words, the number of historical blockchain tasks in the first queue and the number of current blockchain tasks in the second queue do not exceed their respective thresholds. For example, the threshold of the first queue may be set to S1 and that of the second queue to S2, so that the first queue holds at most S1 historical blockchain tasks at the same time and the second queue holds at most S2 current blockchain tasks at the same time. Both S1 and S2 are positive integers greater than or equal to 1, and their specific values can be adjusted according to the actual situation when the scheme is applied, which is not limited by the embodiments of this specification.
In addition, the main thread corresponding to the off-chain computing node may include a first sub-thread, a second sub-thread and a third sub-thread. On this basis, the first sub-thread puts the historical blockchain tasks into the first queue one by one, the second sub-thread puts the current blockchain tasks into the second queue one by one, and the third sub-thread alternately acquires historical blockchain tasks and current blockchain tasks from the first queue and the second queue. The main thread of the off-chain computing node can manage these sub-threads uniformly so that they cooperate to complete the parallel processing of historical and current blockchain tasks. In this way, the off-chain computing node completes different steps of the task processing through different threads, so that the steps are executed independently without affecting one another, which facilitates a modular implementation of parallel processing and improves its efficiency.
As shown in Fig. 3, the first sub-thread may obtain the historical blockchain tasks from the local cache of the off-chain computing node (such as the task list maintained by the scheduling engine) and put them into the first queue one by one. Similarly, the second sub-thread may obtain the current blockchain tasks from the blockchain node through the event listening described above and put them into the second queue one by one. The third sub-thread may then alternately acquire historical blockchain tasks and current blockchain tasks from the first queue and the second queue and put the acquired tasks, in order, into a pending queue, which thus records the historical and current blockchain tasks awaiting subsequent processing. The off-chain computing node (for example, the scheduling engine or the third sub-thread) may then take the blockchain tasks output by the pending queue one by one and perform the subsequent processing on them.
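The arrangement in Fig. 3 can be sketched with standard Java concurrency primitives as below. This is a simplified illustration, not the patent's implementation: the queue capacities stand in for S1 and S2, task identifiers stand in for task objects, and all class and method names are assumptions.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the parallel arrangement in Fig. 3: the first sub-thread feeds cached
// historical tasks into a bounded first queue, the second sub-thread feeds newly
// listened current tasks into a bounded second queue, and the third sub-thread
// alternately drains both into a pending queue consumed by the scheduling engine.
class ParallelTaskPipeline {
    private final BlockingQueue<String> firstQueue   = new ArrayBlockingQueue<>(64); // capacity S1, illustrative
    private final BlockingQueue<String> secondQueue  = new ArrayBlockingQueue<>(64); // capacity S2, illustrative
    private final BlockingQueue<String> pendingQueue = new LinkedBlockingQueue<>();

    void start(List<String> cachedHistoricalTasks, BlockingQueue<String> listenedCurrentTasks) {
        Thread first = new Thread(() -> {              // first sub-thread
            try {
                for (String task : cachedHistoricalTasks) {
                    firstQueue.put(task);              // in generation-time order
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread second = new Thread(() -> {             // second sub-thread
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    secondQueue.put(listenedCurrentTasks.take());
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread third = new Thread(() -> {              // third sub-thread: alternate between the queues
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String historical = firstQueue.poll();
                    if (historical != null) pendingQueue.put(historical);
                    String current = secondQueue.poll();
                    if (current != null) pendingQueue.put(current);
                    if (historical == null && current == null) Thread.sleep(10); // nothing ready yet
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        first.start(); second.start(); third.start();
    }

    // The scheduling engine (or the third sub-thread itself) takes tasks from here
    // in acquisition order and performs the subsequent scheduling and execution.
    String nextTaskToProcess() throws InterruptedException {
        return pendingQueue.take();
    }
}
```

The bounded ArrayBlockingQueue instances give the S1/S2 thresholds for free: put() blocks when a queue is full, so neither task type can pile up ahead of the consumer.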
As mentioned above, there is no essential difference between the specific processes by which the off-chain computing node handles a historical blockchain task and a current blockchain task. In fact, during parallel processing, the subsequent handling that the off-chain computing node performs on an acquired historical blockchain task and on an acquired current blockchain task may be exactly the same, and for either kind of task it consists of scheduling and execution.
As described above, the off-chain computing node may contain a scheduling engine and a computing engine composed of a computing engine framework and the computing units it manages. Under this structure, scheduling and execution mean that the scheduling engine schedules the blockchain task to the corresponding computing unit for execution and returns the execution result to the blockchain node. The scheduling and execution of any one of the historical and current blockchain tasks (hereinafter referred to as the target blockchain task) is described in detail below with reference to the internal structure of the off-chain computing node shown in Fig. 1.
As described above, the scheduling engine in the off-chain computing node corresponds to at least one computing engine. In an embodiment, the scheduling engine may schedule (that is, distribute) the target blockchain task to the corresponding computing engine for execution. For example, the scheduling engine may schedule the target blockchain task to be executed by the target computing engine matched with it, and receive the execution result of the target blockchain task returned by that computing engine. In this way, the scheduling engine controls the computing engine matched with the target blockchain task to execute it, which helps the task to be processed smoothly and efficiently.
The computing engine framework that manages the target computing unit is the target computing engine framework, and the target computing engine framework and the target computing unit belong to the target computing engine. Once the scheduling engine has determined the target computing unit, the corresponding target computing engine framework and target computing engine are determined accordingly. The scheduling engine may then send the target blockchain task to the target computing engine framework, and the framework forwards it to the target computing unit for execution once it determines that the unit is in an available state.
Taking Fig. 1 as an example, if the scheduling engine determines that the target computing unit matched with the target blockchain task is computing unit 21 shown in Fig. 1, then the target computing engine framework is computing engine framework 2 and the target computing engine is computing engine 2. The scheduling engine issues the target blockchain task to computing engine framework 2, and the framework forwards the task to computing unit 21 for execution. In this process, the scheduling engine determines which computing unit executes the target blockchain task (that is, determines the target computing unit), and the target computing engine framework forwards the task to that unit.
In addition, to ensure as far as possible that the determined target computing unit can actually execute the target blockchain task, the scheduling engine may, based on the available-service information it maintains, determine the available services it is allowed to call, including a computing engine framework and each computing unit managed by it, and then determine among those computing units the target computing unit matched with the target blockchain task. Every computing unit determined from the available-service information can be called by the scheduling engine, so the target computing unit chosen from them can necessarily be called by the scheduling engine, which is conducive to the execution of the target blockchain task.
As described above, the scheduling engine, computing engine frameworks and computing units in the off-chain computing node are all registered with the service center in advance, so the registered services corresponding to the scheduling engine include the computing engine frameworks and computing units that have been registered. The available-service information maintained by the scheduling engine indicates which of the registered computing engine frameworks and computing units can be called by it. When an available service such as a computing engine framework or a computing unit changes, the scheduling engine may update its locally maintained available-service information to ensure that the services it indicates really are available (that is, can indeed be called by the scheduling engine). For example, the scheduling engine may obtain from the service center a service change message indicating the changed registered service and update its available-service information accordingly. In this way, the scheduling engine perceives changes to available services in time and keeps the information accurate, which in turn improves the accuracy with which the target computing unit and the target computing engine framework are subsequently determined.
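The following sketch shows how the locally maintained available-service information could be updated when a service change message arrives. The message shape (change type plus registration info) is an assumption made for this illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical service-change message pushed by the service center (or returned
// in response to a periodic change query) when a registered service is added,
// removed, or has its registration information updated.
class ServiceChangeMessage {
    enum ChangeType { ADDED, REMOVED, UPDATED }
    String serviceId;
    ChangeType changeType;
    Map<String, String> newRegistrationInfo;   // present for ADDED / UPDATED
}

class AvailableServiceInfo {
    // serviceId -> registration info of the frameworks and compute units that the
    // scheduling engine is allowed to invoke.
    private final Map<String, Map<String, String>> available = new ConcurrentHashMap<>();

    void onServiceChange(ServiceChangeMessage msg) {
        switch (msg.changeType) {
            case REMOVED:
                available.remove(msg.serviceId);   // service can no longer be called
                break;
            case ADDED:
            case UPDATED:
                available.put(msg.serviceId, msg.newRegistrationInfo);
                break;
        }
    }
}
```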
The service change message may be sent by the service center to the scheduling engine in response to a change in a registered service. For example, when registering with the service center, a scheduling engine may specify the registered services it follows; the scheduling engine in any off-chain computing node may, for instance, specify each computing engine framework and computing unit in that node as a registered service it follows. On this basis, when any registered service followed by the scheduling engine changes (for example it is added, deleted, or its registration information is updated), the service center can actively send the scheduling engine a service change message about that service. In this way, the service center notifies the scheduling engine in time after a registered service it follows changes, so that the scheduling engine can update the related information it maintains in time. For the target computing engine framework in particular, the service center actively sends the service change message to the scheduling engine when the framework stops, so that the scheduling engine learns of the stoppage promptly and accurately and can promptly recover the blockchain tasks that the target computing unit can no longer continue to execute.
Alternatively, the scheduling engine may send change query requests to the service center at a preset query interval. If the service state of no registered service has changed, the service center may return no response or only an empty message; if the service state of at least one registered service has changed, it may return a service change message for the changed registered service. In other words, the service center returns the service change message when it determines that a registered service has changed, informing the scheduling engine of the change so that it can update its available-service information in time.
In an embodiment, every computing unit has a corresponding compute type, which can be regarded as the type of task the unit is able to execute. When a computing engine contains several computing units, their compute types may be the same or different; likewise, when the off-chain computing node contains several computing units, their compute types may be the same or different. To further ensure that the target computing unit executes the target blockchain task smoothly and to improve execution efficiency, the scheduling engine may determine the target computing unit according to compute type. For example, the scheduling engine may determine the target compute type corresponding to the target blockchain task and then determine, among the computing units, a unit whose compute type is the target compute type as the target computing unit. This guarantees that the compute type of the target computing unit executing the target blockchain task matches the task's target compute type, which avoids execution errors and improves execution efficiency to some extent.
To record the compute type of each computing unit, the scheduling engine may maintain a service type list recording the compute types of the computing units contained in the off-chain computing node; the scheduling engine can then determine the target computing unit based on this list. For example, the scheduling engine may determine the target compute type of the target blockchain task, look up that type in the service type list, and determine a computing unit corresponding to it as the target computing unit. The compute type and the target compute type may be a forwarding type, an MFT (Managed File Transfer) type, a privacy computation type, a data query type and so on, which is not limited by the embodiments of this specification.
In an embodiment, the service state of any available service at any moment is either available or unavailable. For example, if computing unit 11 is shut down (normally or abnormally), it cannot execute any task and may be considered unavailable; if computing unit 11 is currently executing some task, it cannot execute other tasks, including the target blockchain task, at that moment, so it is also unavailable. As another example, if computing engine framework 2 is shut down, it cannot execute any task and may be considered unavailable; or, if both computing unit 21 and computing unit 22 managed by computing engine framework 2 are unavailable, the framework has no computing unit it can call to execute a task at that moment, so computing engine framework 2 is also unavailable.
The scheduling engine may maintain the service state of each of the above available services. It can be understood that if the target blockchain task is scheduled to the target computing unit while it is unavailable, the task may not be executed smoothly; the scheduling engine may therefore forward the target blockchain task to the target computing unit for execution via the target computing engine framework only when both the target computing unit and the target computing engine framework are available. Furthermore, when the target computing engine framework has stopped, the scheduling engine may update the service state of that framework and of every computing unit it manages to unavailable; or, when the target computing unit has stopped, the scheduling engine may update that unit's service state to unavailable. Making the availability of the target computing engine framework and the target computing unit a precondition for scheduling the target blockchain task to the target computing unit through the framework ensures, as far as possible, that the scheduled task can be executed smoothly by the target computing unit and avoids errors during scheduling and execution.
In addition, "the target computing unit and the computing engine framework managing it are both available" may also serve as the precondition for "determining that computing unit as the target computing unit". For example, when the scheduling engine determines that a certain computing unit matches the target blockchain task, it may further determine, from the locally maintained service states, the current states of that unit and of the framework managing it; only when both are available does it determine the unit as the target computing unit and the framework as the target computing engine framework, and forward the target blockchain task to the target computing unit for execution via the framework. This moves the service-state check earlier, further helps the blockchain task to be executed smoothly, and improves execution efficiency to some extent.
In an embodiment, considering that the target computing unit needs to consume a certain resource to execute the target blockchain task, in order to improve the success rate of the target computing unit to execute the target blockchain task as much as possible, the scheduling engine may further determine the target computing unit according to the resource amount of the computing unit. For example, the scheduling engine may determine current available resource amounts of the computing units in the computing nodes under the chain respectively, and select a target computing unit from the computing units whose current available resource amounts satisfy the execution conditions of the blockchain task. Wherein the execution condition may include: the current available resource amount is not less than a resource threshold, or the current available resource amount is not less than the resource amount required for executing the blockchain task, and the like. In addition, the execution condition may be recorded in the target blockchain task, for example, in the case where the task is generated by a blockchain link point executing blockchain transaction or an intelligent contract, the execution condition may be recorded in the blockchain transaction or the intelligent contract, or may be determined by the blockchain link point according to an intermediate parameter of the execution process. Alternatively, the execution condition may also be uniformly set by a user of the blockchain node or an administrator of the blockchain network for at least one blockchain task, for example, a memory threshold corresponding to a blockchain task of the down-chain privacy computation type is uniformly set to be 16M, and the specific content of the execution condition and the determination method thereof are not limited in the embodiment of the present specification. In this way, the available resources in the execution process can be determined for the target blockchain task between the execution of the target blockchain task, so that the target computing unit can smoothly and efficiently execute the task.
It should be noted that the above approaches to determining the target computing unit may be used individually or in combination. For example, among the computing units whose computation type is the target computation type, the computing units in the available state may first be determined, and the target computing unit then selected from them. Alternatively, among the computing units whose computation type is the target computation type, the computing units whose current available resource amount satisfies the execution condition of the blockchain task may be determined, and the target computing unit then selected from those. Of course, the target computing unit may also be determined in other ways, which are not described in detail here.
After acquiring the target blockchain task forwarded by the target computing engine framework, the target computing unit may execute the task using the necessary data. For example, the data required for executing the target blockchain task may already be carried in the task, where the data may be determined by the blockchain node while generating the target blockchain task and recorded in the generated task; the data may also be sent to the scheduling engine in association with the target blockchain task, and the scheduling engine may then deliver the data to the target computing unit together with the task. In this scenario, the target computing unit can directly use the data to execute the target blockchain task, enabling fast execution. As another example, the data required to execute the target blockchain task may need to be obtained from another data manager. In this scenario, the target computing unit may initiate a data acquisition request for the target blockchain task to the data manager and execute the task according to the data returned by the data manager. Of course, the data required for executing the target blockchain task may come from multiple different sources; for example, some of the data is carried by the target blockchain task while other data needs to be acquired from the data manager, which is not limited by the embodiments of this specification.
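The two data paths described above (data carried in the task versus data fetched from a data manager) can be sketched as follows; the Task and DataManager shapes are assumptions made for illustration, not APIs defined here.

```go
// Minimal sketch of resolving the input data for a forwarded task.
package compute

import "context"

type Task struct {
	ID      string
	Payload []byte // data carried in the task, may be nil
}

type DataManager interface {
	Fetch(ctx context.Context, taskID string) ([]byte, error)
}

// ResolveInput returns the data the compute unit should execute the task with.
func ResolveInput(ctx context.Context, t Task, dm DataManager) ([]byte, error) {
	if len(t.Payload) > 0 {
		return t.Payload, nil // data was attached when the task was generated
	}
	return dm.Fetch(ctx, t.ID) // otherwise request it from the data manager
}
```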
In addition, corresponding to the aforementioned forwarding type, the target computing unit may also forward the target blockchain task to a preset executing party for execution and obtain the execution result returned by that party. In this case, although the target blockchain task is actually executed by the preset executing party, from the perspective of the under-chain computing node, the scheduling engine, or the computing engine framework, the effect is the same as if the target computing unit had executed the target blockchain task itself, so the task can still be regarded as executed by the target computing unit.
In an embodiment, for the target blockchain task, the scheduling engine may record the task in a task list maintained by itself, determine the target computing unit and the target computing engine framework, and then send the task to the target computing engine framework. Upon receiving the target blockchain task issued by the scheduling engine, the target computing engine framework may record the task in a task list maintained by itself and forward it to the target computing unit for execution. After the target computing unit completes the execution of the target blockchain task, it may return the execution result to the target computing engine framework. Upon receiving the execution result returned by the target computing unit, the target computing engine framework may record it in a result list maintained by itself and attempt to return it to the scheduling engine. The target computing engine framework may return the execution results to the scheduling engine in the order in which they were received, avoiding excessive delay in returning results. Only after the execution result of the target blockchain task has been successfully returned to the scheduling engine (the scheduling engine may return a confirmation message to the computing engine framework after successfully receiving the result) may the target computing engine framework delete the target blockchain task recorded in its task list and delete the execution result of the task recorded in its result list.
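A minimal sketch of the bookkeeping described in this paragraph, under assumed type names: the framework records tasks and results, returns results in the order received, and deletes a task/result pair only after the scheduling engine acknowledges it.

```go
// Sketch of framework-side task/result bookkeeping; all names are assumptions.
package framework

import "sync"

type Result struct {
	TaskID string
	Output []byte
}

type Framework struct {
	mu      sync.Mutex
	tasks   map[string]struct{} // pending task list
	results []Result            // result list, kept in receive order
}

func (f *Framework) RecordTask(id string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.tasks == nil {
		f.tasks = make(map[string]struct{})
	}
	f.tasks[id] = struct{}{}
}

func (f *Framework) RecordResult(r Result) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.results = append(f.results, r)
}

// Flush returns results to the scheduler in the order they were received and removes
// a task/result pair only when the scheduler confirms receipt; failures are retried later.
func (f *Framework) Flush(returnToScheduler func(Result) error) {
	f.mu.Lock()
	defer f.mu.Unlock()
	kept := f.results[:0]
	for _, r := range f.results {
		if err := returnToScheduler(r); err != nil {
			kept = append(kept, r) // keep it and retry on the next flush
			continue
		}
		delete(f.tasks, r.TaskID)
	}
	f.results = kept
}
```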
Similarly, upon receiving the execution result returned by the target computing engine framework, the scheduling engine may record the result in a result list maintained by itself and attempt to upload it to the blockchain node. The scheduling engine may upload the execution results to the blockchain node in the order in which they were received, avoiding excessive delay in uploading results. If an upload fails for some reason, the scheduling engine may wait for a period of time and attempt the upload again, until it succeeds. Once the execution result of the target blockchain task has been successfully uploaded to the blockchain node (the blockchain node may also return a confirmation message to the scheduling engine after successfully receiving the result), the scheduling engine may delete the target blockchain task recorded in its task list and delete the execution result of the task recorded in its result list.
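The upload-and-retry behaviour of the scheduling engine can be sketched as a simple loop that waits a period of time after each failed attempt; the wait interval and function signature below are assumptions for illustration.

```go
// Sketch of retrying an upload to the blockchain node until it succeeds.
package scheduler

import (
	"context"
	"time"
)

// UploadWithRetry tries to submit an execution result to the blockchain node and,
// on failure, waits before trying again, until it succeeds or the context ends.
func UploadWithRetry(ctx context.Context, upload func() error, wait time.Duration) error {
	for {
		if err := upload(); err == nil {
			return nil // the node acknowledged the result; the caller may now delete its records
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(wait):
			// fall through and retry
		}
	}
}
```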
As can be seen from the above scheduling process of blockchain tasks and returning process of execution results, if the target blockchain task is a historical blockchain task, the scheduling engine may not yet have received the execution result returned by the computing engine framework before it was stopped, or it may have received the result but not yet successfully uploaded it to the blockchain node. Therefore, after starting up, the scheduling engine may process the historical blockchain task according to whether its execution result has been received, that is, whether the execution result is recorded in the result list maintained by the scheduling engine. For example, if the result list maintained by the scheduling engine records the execution result of any historical blockchain task, the scheduling engine may return that execution result to the blockchain node directly, without scheduling the historical blockchain task again in the manner described above. If the result list maintained by the scheduling engine does not record the execution result of a historical blockchain task, the scheduling engine may schedule that task to a computing unit in the computing engine for execution, record the execution result returned by the computing engine in the result list, and return it to the blockchain node.
As can be seen, when the execution result of a historical blockchain task is recorded in the result list maintained by the scheduling engine, the scheduling engine does not need to schedule the task to a computing unit. In other words, because the execution result of the historical blockchain task was returned to the scheduling engine before it was stopped, the scheduling engine can directly upload the execution result to the blockchain node without repeatedly scheduling and executing the task. When the execution result is not recorded in the result list maintained by the scheduling engine, the scheduling engine may determine the corresponding target computing unit and target computing engine framework in the manner of the foregoing embodiments, schedule the task to the target computing unit for execution through the target computing engine framework, receive the execution result returned by the target computing unit through the target computing engine framework, and then upload the result to the blockchain node. In addition, once it is determined that the execution result has been successfully returned to the blockchain node, the scheduling engine may delete the execution result recorded in its result list.
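The startup-time decision in the two paragraphs above can be sketched as follows, assuming the scheduling engine keeps its result list as an in-memory map (a simplification; the specification does not prescribe the storage form): if a result is already recorded, upload it directly; otherwise schedule the task again.

```go
// Sketch of post-startup handling of a historical task; names are assumptions.
package scheduler

// RecoverScheduler carries only the result list relevant to this sketch.
type RecoverScheduler struct {
	resultList map[string][]byte // task ID -> execution result received before shutdown
}

// HandleHistorical uploads a recorded result directly, or re-schedules the task when
// no result was recorded before the shutdown.
func (s *RecoverScheduler) HandleHistorical(taskID string,
	schedule func(taskID string) ([]byte, error),
	upload func(taskID string, result []byte) error) error {

	if result, ok := s.resultList[taskID]; ok {
		// The result came back before the shutdown: no need to execute the task again.
		if err := upload(taskID, result); err != nil {
			return err
		}
		delete(s.resultList, taskID)
		return nil
	}

	// No recorded result: dispatch via the target framework to a compute unit, then upload.
	result, err := schedule(taskID)
	if err != nil {
		return err
	}
	s.resultList[taskID] = result
	if err := upload(taskID, result); err != nil {
		return err
	}
	delete(s.resultList, taskID)
	return nil
}
```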
In the case of obtaining the execution result of the target blockchain task, the under-chain computing node may return the execution result to the blockchain node so that the blockchain node can use it. Where the blockchain transaction that generated the target blockchain task is used to instruct the blockchain node to invoke a workflow defined in a smart contract, the execution result can be used by the blockchain node to advance the workflow. For example, the workflow corresponding to the above blockchain transaction includes a plurality of task nodes with dependency relationships between them, and the workflow may be advanced according to those dependencies. The target blockchain task may correspond to any task node, and after receiving the execution result of the target blockchain task, the blockchain node may advance the workflow to execute another blockchain task that depends on that task node. In this way, the blockchain node can sequentially execute the blockchain tasks corresponding to the task nodes according to the preset steps of the workflow, until the blockchain transaction has been fully executed.
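A toy sketch of dependency-driven workflow advancement: a task node becomes runnable once every node it depends on has an execution result. The workflow representation below is an assumption; the specification only states that task nodes have dependency relationships.

```go
// Sketch of advancing a workflow by dependencies; the layout is assumed for illustration.
package workflow

type Node struct {
	ID        string
	DependsOn []string
}

type Workflow struct {
	Nodes   []Node
	Results map[string][]byte // execution results received so far, by node ID
}

// Runnable lists the task nodes whose dependencies are all satisfied and which have not
// produced a result yet; the blockchain node would generate blockchain tasks for these.
func (w *Workflow) Runnable() []Node {
	var ready []Node
	for _, n := range w.Nodes {
		if _, done := w.Results[n.ID]; done {
			continue
		}
		ok := true
		for _, dep := range n.DependsOn {
			if _, done := w.Results[dep]; !done {
				ok = false
				break
			}
		}
		if ok {
			ready = append(ready, n)
		}
	}
	return ready
}
```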
In one embodiment, the blockchain node described herein may belong to a blockchain subnet managed by a blockchain master network. As shown in fig. 1, if some blockchain nodes nodeA-E in a blockchain network (denoted as net1, not shown in the figure) need to conduct small-scale interactions, a new blockchain network (denoted as net2, that is, the blockchain network shown in fig. 1) can be established among them. Assume that net1 contains 12 nodes nodeA-L in total, where a user corresponding to at least one of nodeA-E or an administrator of net1 may initiate a subnet generation transaction in net1, so that the nodes nodeA-L of net1 each execute the transaction. During the execution of the transaction, nodeA-E may determine from the transaction content that they are participants of net2, while the other nodes nodeF-L may determine that they are not. On this basis, nodeA-E can establish net2 among themselves by means of the consensus of net1, and nodeF-L do not participate in the subnet establishment process. For net2 established in this way, it can be managed by net1, where net1 is the blockchain master network and net2 is the blockchain subnet.
In this scenario, when obtaining the execution result of the historical blockchain task and/or the current blockchain task, the under-chain computing node corresponding to a blockchain node in the blockchain subnet may submit the execution result to the blockchain master network, so as to store the execution result in the blockchain master network. In this way, the blockchain master network that manages the blockchain subnet also stores the execution results of the blockchain tasks corresponding to the blockchain nodes in the subnet, so the storage of execution results can rely on the data management capability of the blockchain master network, improving the security of data storage.
FIG. 4 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 4, at the hardware level, the apparatus includes a processor 402, an internal bus 404, a network interface 406, a memory 408, and a non-volatile memory 410, and may also include hardware required for other services. One or more embodiments of the present specification may be implemented in software, for example by processor 402 reading the corresponding computer program from non-volatile memory 410 into memory 408 and then executing it. Of course, in addition to a software implementation, one or more embodiments of this specification do not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Fig. 5 is a block diagram of an apparatus for under-chain processing of a blockchain task according to an exemplary embodiment, which may be applied to the device shown in fig. 4 to implement the technical solution of this specification. The apparatus is applied to an under-chain computing node corresponding to a blockchain node, and includes:
a history task obtaining unit 501, configured to obtain a cached historical blockchain task after the startup is completed, where the historical blockchain task is generated by the blockchain node before the under-chain computing node is stopped;
a current task obtaining unit 502, configured to obtain a current blockchain task generated by the blockchain node, where the current blockchain task is generated by the blockchain node after the under-chain computing node is stopped;
a parallel processing unit 503, configured to process the historical blockchain task and the current blockchain task in parallel.
Optionally, the current task obtaining unit 502 is further configured to:
in response to monitoring a task allocation event generated by the blockchain node executing a blockchain transaction, determining the blockchain task contained in the task allocation event as the current blockchain task if the blockchain task is allocated to the blockchain member to which the blockchain node belongs.
Optionally, the apparatus further includes:
a queue input unit 504, configured to sequentially input each historical blockchain task into a first queue in the order in which the historical blockchain tasks were generated, and sequentially input each current blockchain task into a second queue in the order in which the current blockchain tasks were generated;
the parallel processing unit 503 is further configured to: acquire the historical blockchain task and the current blockchain task from the first queue and the second queue respectively, and execute the acquired blockchain tasks sequentially in the order of acquisition.
Optionally, the main thread corresponding to the down-link computing node includes a first sub-thread, a second sub-thread, and a third sub-thread,
the queue input unit 504 is further configured to: sequentially inputting each historical block chain task into a first queue by a first sub-thread, and sequentially inputting each current block chain task into a second queue by a second sub-thread;
the parallel processing unit 503 is further configured to: and alternately acquiring the historical block chain task and the current block chain task from the first queue and the second queue by a third sub-thread.
Optionally, the number of the historical blockchain tasks in the first queue and the number of the current blockchain tasks in the second queue do not exceed the corresponding number thresholds.
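The optional features above (two bounded queues filled in generation order, three sub-threads, and a per-queue number threshold) can be pictured with the following sketch, in which Go channels stand in for the queues and goroutines for the sub-threads; the queue sizes and names are assumptions, not values from this specification.

```go
// Sketch of the dual-queue, three-sub-thread arrangement; all names and sizes assumed.
package node

type Task struct{ ID string }

// processInParallel returns after both sources are exhausted and every task was handled.
func processInParallel(historical, current []Task, handle func(Task)) {
	first := make(chan Task, 16)  // number threshold for the first queue
	second := make(chan Task, 16) // number threshold for the second queue

	go func() { // first sub-thread: enqueue historical tasks in generation order
		for _, t := range historical {
			first <- t
		}
		close(first)
	}()
	go func() { // second sub-thread: enqueue current tasks in generation order
		for _, t := range current {
			second <- t
		}
		close(second)
	}()

	// third sub-thread (here the caller's goroutine): take from the two queues in turn
	for first != nil || second != nil {
		if first != nil {
			if t, ok := <-first; ok {
				handle(t)
			} else {
				first = nil
			}
		}
		if second != nil {
			if t, ok := <-second; ok {
				handle(t)
			} else {
				second = nil
			}
		}
	}
}
```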
Optionally, the under-chain computing node includes a scheduling engine and a computing engine composed of a computing engine framework and the computing units managed by the computing engine framework, where the scheduling engine is configured to schedule a blockchain task to the computing engine for execution; the parallel processing unit 503 is further configured to:
determining, by the scheduling engine, a target computing unit matched with any blockchain task from the computing units, and sending the any blockchain task to a target computing engine framework to which the target computing unit belongs;
forwarding, by the target compute engine framework, the any blockchain task to the target compute unit if it is determined that the target compute unit is in an available state;
executing, by the target computing unit, the any blockchain task.
Optionally, the parallel processing unit 503 is further configured to:
determining available services allowed to be called by the scheduling engine according to available service information maintained by the scheduling engine, wherein the available services comprise a calculation engine framework and each calculation unit managed by the calculation engine framework;
determining, by the scheduling engine, a target compute unit from the respective compute units that matches the any blockchain task.
Optionally, the method further includes:
a change message acquiring unit 505, configured to acquire, by the scheduling engine, a service change message from a service center, where the service change message is used to indicate a registered service after a change occurs, and the registered service includes a computing engine framework and a computing unit that are registered in the service center;
a service information updating unit 506, configured to update, by the scheduler engine, the available service information according to the service change message, where the available service information is used to indicate a computing engine framework and a computing unit that are allowed to be invoked by the scheduler engine in the registered service.
Optionally, the change message obtaining unit 505 is further configured to:
initiating a change query request to a service center by the scheduling engine according to a preset query cycle, and receiving a service change message returned by the service center under the condition that the registered service is determined to be changed; or,
receiving, by the scheduler engine, a service change message sent by a service center in response to a change in the registered service.
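The two ways of obtaining a service change message described above (periodic change queries and pushes from the service center) can be sketched as follows for the polling case; the ServiceCenter interface and the query cycle are assumptions made for illustration.

```go
// Sketch of polling the service center on a fixed query cycle; names are assumptions.
package scheduler

import (
	"context"
	"time"
)

type ChangeMsg struct {
	Registered []string // compute engine frameworks / units currently registered
}

type ServiceCenter interface {
	// QueryChanges returns a change message only if the registered services changed.
	QueryChanges(ctx context.Context) (*ChangeMsg, bool, error)
}

// PollChanges asks the service center once per query cycle and hands any change
// message to apply(), which would update the locally maintained available-service info.
func PollChanges(ctx context.Context, sc ServiceCenter, cycle time.Duration, apply func(ChangeMsg)) {
	ticker := time.NewTicker(cycle)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if msg, changed, err := sc.QueryChanges(ctx); err == nil && changed && msg != nil {
				apply(*msg)
			}
		}
	}
}
```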
Optionally, the available service information includes a computation type of the computation unit, and the parallel processing unit 503 is further configured to:
determining a target calculation type corresponding to any block chain task by the scheduling engine;
determining, by the scheduling engine, a computing unit of which the computing type is the target computing type among the computing units as the target computing unit.
Optionally, the parallel processing unit 503 is further configured to:
respectively determining the current available resource quantity of each computing unit in the down-link computing nodes by the scheduling engine;
selecting a target computing unit from computing units of which the current available resource quantity meets the execution condition of the block chain task by the scheduling engine, wherein the execution condition comprises that: the current amount of available resources is not less than a resource threshold, or the current amount of available resources is not less than an amount of resources required to perform the blockchain task.
Optionally, the parallel processing unit 503 is further configured to:
the target computing unit initiates a data acquisition request for the any blockchain task to a data manager, and executes the any blockchain task according to the data returned by the data manager; and/or,
and executing any block chain task by the target computing unit according to the data carried by any block chain task.
Optionally, the method further includes:
a task recording and deleting unit 507, configured to record, by the scheduling engine, the acquired any blockchain task in a first task list maintained by the scheduling engine, and set the task state of the blockchain task to an uncompleted state; and in response to successfully submitting the execution result of the any blockchain task to the blockchain node, delete the blockchain task recorded in the first task list or update the task state of the blockchain task to a completed state; and/or,
a result recording and deleting unit 508, configured to record, by the target computing engine framework, the any blockchain task sent by the scheduling engine in a second task list maintained by the target computing engine framework, and set a task state of the any blockchain task as an uncompleted state; and in response to successfully returning the execution result of any blockchain task to the scheduling engine, deleting the blockchain task recorded in the second task list or updating the task state of the blockchain task to be the completed state.
Optionally,
the block chain node and the down-chain computing node are respectively positioned in different processes in the same node device; or,
in a case where the calculation node under the chain includes a scheduling engine and a calculation engine composed of a calculation engine framework and a calculation unit managed thereby, the blockchain node and the scheduling engine are respectively in different processes in the same node device, and the calculation engine framework and the calculation unit managed thereby are in the same or different calculation devices other than the node device.
Optionally, the blockchain node belongs to a blockchain subnet managed by a blockchain master network, and the apparatus further includes:
a result storing unit 509, configured to submit the execution results of the historical blockchain task and the current blockchain task to the blockchain master network, so as to store the execution results in the blockchain master network.
Optionally, the current blockchain task includes at least one of:
a blockchain task generated by the blockchain node after the under-chain computing node is stopped and before the current startup is completed;
a blockchain task generated by the blockchain node after the current startup of the under-chain computing node is completed.
Optionally, the current startup of the under-chain computing node is a normal startup or a restart after an abnormal shutdown.
In the 1990s, improvements to a technology could be clearly distinguished as improvements in hardware (for example, improvements in circuit structures such as diodes, transistors, and switches) or improvements in software (improvements in method flows). However, as technology has developed, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program a digital system "onto" a PLD themselves, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, this programming is nowadays mostly implemented with "logic compiler" software rather than by manually making the integrated circuit chip; such software is similar to the software compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logic-programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the same functions can be implemented by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. One typical implementation device is a server system. Of course, the present invention does not exclude that as future computer technology develops, the computer implementing the functionality of the above described embodiments may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device or a combination of any of these devices.
Although one or more embodiments of the present description provide method operational steps as described in the embodiments or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive approaches. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or end product executes, it may execute sequentially or in parallel (e.g., parallel processors or multi-threaded environments, or even distributed data processing environments) according to the method shown in the embodiment or the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. For example, if the terms first, second, etc. are used to denote names, they do not denote any particular order.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, when implementing one or more of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, etc. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, graphene storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is merely exemplary of one or more embodiments of the present disclosure and is not intended to limit the scope of one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present specification should be included in the scope of the claims.
Claims (20)
1. A method for under-chain processing of a blockchain task, applied to an under-chain computing node corresponding to a blockchain node, the method comprising:
obtaining a cached historical blockchain task after the starting is finished, wherein the historical blockchain task is generated by the blockchain node before the computing node under the chain stops;
acquiring a current blockchain task generated by the blockchain node, wherein the current blockchain task is generated by the blockchain node after the under-chain computing node is stopped;
and processing the historical blockchain task and the current blockchain task in parallel.
2. The method of claim 1, the obtaining of the current blockchain task generated by the blockchain node comprising:
in response to monitoring a task allocation event generated by the blockchain node executing a blockchain transaction, determining the blockchain task contained in the task allocation event as the current blockchain task if the blockchain task is allocated to the blockchain member to which the blockchain node belongs.
3. The method of claim 1,
further comprising: sequentially inputting each historical block chain task into a first queue according to the time sequence of generating each historical block chain task, and sequentially inputting each current block chain task into a second queue according to the time sequence of generating each current block chain task;
the parallel processing of the historical blockchain task and the current blockchain task comprises: and acquiring the historical block chain task and the current block chain task from a first queue and a second queue respectively, and sequentially executing the acquired block chain tasks according to an acquisition sequence.
4. The method according to claim 3, wherein the main thread corresponding to the down-link computing node includes a first sub-thread, a second sub-thread, and a third sub-thread, and the sequentially inputting each historical block chain task into the first queue, sequentially inputting each current block chain task into the second queue, and alternately acquiring the historical block chain task and the current block chain task from the first queue and the second queue respectively comprises:
the first sub-thread inputs all historical block chain tasks into a first queue in sequence, the second sub-thread inputs all current block chain tasks into a second queue in sequence, and the third sub-thread alternately acquires the historical block chain tasks and the current block chain tasks from the first queue and the second queue respectively.
5. The method of claim 3, wherein the number of historical blockchain tasks in the first queue and the number of current blockchain tasks in the second queue do not exceed respective number thresholds.
6. The method of claim 1, the down-chain compute node comprising a scheduling engine and a compute engine comprised of a compute engine framework and compute units managed by the compute engine framework, the scheduling engine to schedule blockchain tasks to the compute engine for execution; the down-link compute node processing either the historical blockchain task or the current blockchain task, including:
the scheduling engine determines a target computing unit matched with any block chain task from the computing units and sends the any block chain task to a target computing engine framework to which the target computing unit belongs;
the target computing engine framework forwards the any blockchain task to the target computing unit under the condition that the target computing unit is determined to be in an available state;
the target computing unit executes the any blockchain task.
7. The method of claim 6, the scheduling engine determining a target compute unit from the compute units that matches the any blockchain task, comprising:
the scheduling engine determines available services allowed to be called by the scheduling engine according to available service information maintained by the scheduling engine, wherein the available services comprise a calculation engine framework and each calculation unit managed by the calculation engine framework;
and the scheduling engine determines a target computing unit matched with any blockchain task from the computing units.
8. The method of claim 7, further comprising:
the scheduling engine acquires a service change message from a service center, wherein the service change message is used for indicating a registered service after change, and the registered service comprises a calculation engine framework and a calculation unit which are registered to the service center;
and the scheduling engine updates the available service information according to the service change message, wherein the available service information is used for indicating the computing engine framework and the computing unit which are allowed to be called by the scheduling engine in the registered service.
9. The method of claim 8, the scheduler engine obtaining a service change message from a service center, comprising:
the scheduling engine initiates a change query request to a service center according to a preset query cycle and receives a service change message returned by the service center under the condition that the registered service is determined to be changed; or,
the scheduler engine receives a service change message sent by a service center in response to a change in the registered service.
10. The method of claim 7, the available service information including a compute type of the compute unit, the scheduling engine determining a target compute unit from the respective compute units that matches the any blockchain task, comprising:
the scheduling engine determines a target calculation type corresponding to any block chain task;
and the scheduling engine determines the calculation unit with the calculation type being the target calculation type in each calculation unit as the target calculation unit.
11. The method of claim 6, the scheduling engine determining a target compute unit from the compute units that matches the any blockchain task, comprising:
the scheduling engine respectively determines the current available resource amount of each computing unit in the calculation nodes under the chain;
the scheduling engine selects a target computing unit from computing units of which the current available resource quantity meets the execution condition of the block chain task, wherein the execution condition comprises that: the current amount of available resources is not less than a resource threshold, or the current amount of available resources is not less than an amount of resources required to perform the blockchain task.
12. The method of claim 6, the target compute unit to perform the any blockchain task, comprising:
the target computing unit initiates a data acquisition request for the any blockchain task to a data manager and executes the any blockchain task according to the data returned by the data manager; and/or,
and the target computing unit executes any block chain task according to the data carried by any block chain task.
13. The method of claim 6, further comprising:
the scheduling engine records the acquired any blockchain task in a first task list maintained by the scheduling engine and sets the task state of the blockchain task to an uncompleted state; and in response to successfully submitting the execution result of the any blockchain task to the blockchain node, deletes the blockchain task recorded in the first task list or updates the task state of the blockchain task to a completed state; and/or,
the target calculation engine framework records any block chain task sent by the scheduling engine in a second task list maintained by the target calculation engine framework and sets the task state of the block chain task as an uncompleted state; and in response to successfully returning the execution result of any blockchain task to the scheduling engine, deleting the blockchain task recorded in the second task list or updating the task state of the blockchain task to be the completed state.
14. The method of claim 1,
the block chain node and the down-chain computing node are respectively positioned in different processes in the same node device; or,
in a case where the calculation node under the chain includes a scheduling engine and a calculation engine composed of a calculation engine framework and a calculation unit managed thereby, the blockchain node and the scheduling engine are respectively in different processes in the same node device, and the calculation engine framework and the calculation unit managed thereby are in the same or different calculation devices other than the node device.
15. The method of claim 1, the blockchain node belonging to a blockchain subnet managed by a blockchain master network, the method further comprising:
and submitting the execution results of the historical block chain task and the current block chain task to the block chain master network so as to store the execution results in the block chain master network.
16. The method of claim 1, the current blockchain task comprising at least one of:
a blockchain task generated by the blockchain node after the under-chain computing node is stopped and before the current startup is completed;
a blockchain task generated by the blockchain node after the current startup of the under-chain computing node is completed.
17. The method of claim 1, wherein the current startup of the down-link compute node is a normal startup or a restart after an abnormal shutdown.
18. An apparatus for under-chain processing of a blockchain task, applied to an under-chain computing node corresponding to a blockchain node, the apparatus comprising:
a historical task obtaining unit, configured to obtain a cached historical blockchain task after the startup is completed, where the historical blockchain task is generated by the blockchain node before the under-chain computing node is stopped;
a current task obtaining unit, configured to obtain a current blockchain task generated by the blockchain node, where the current blockchain task is generated by the blockchain node after the under-chain computing node is stopped;
and the parallel processing unit is used for processing the historical block chain task and the current block chain task in parallel.
19. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-17 by executing the executable instructions.
20. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 17.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210473040.2A CN114860400A (en) | 2022-04-29 | 2022-04-29 | Under-link processing method and device for block chain task |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114860400A true CN114860400A (en) | 2022-08-05 |
Family
ID=82636170
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210473040.2A Pending CN114860400A (en) | 2022-04-29 | 2022-04-29 | Under-link processing method and device for block chain task |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114860400A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115460222A (en) * | 2022-09-05 | 2022-12-09 | 蚂蚁区块链科技(上海)有限公司 | Block chain data flow calculating device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||