WO2023185044A1 - Data processing method and apparatus, electronic device, and storage medium - Google Patents

Data processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023185044A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing, task, heterogeneous, engine, chain
Application number
PCT/CN2022/135207
Other languages
English (en)
French (fr)
Inventor
谢桂鲁
邓福喜
石柯
王毅飞
Original Assignee
蚂蚁区块链科技(上海)有限公司
Application filed by 蚂蚁区块链科技(上海)有限公司
Publication of WO2023185044A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/64 Protecting data integrity, e.g. using checksums, certificates or signatures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/70 Software maintenance or management
    • G06F 8/76 Adapting program code to run in a different environment; Porting

Definitions

  • the embodiments of this specification belong to the field of blockchain technology, and particularly relate to a data processing method, device, electronic equipment and storage medium.
  • Blockchain is a new application model of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and cryptographic algorithms.
  • In a blockchain system, data blocks are linked in chronological order into a chained data structure, producing a distributed ledger that is cryptographically guaranteed to be tamper-proof and unforgeable. Owing to characteristics such as decentralization, immutability of information, and autonomy, blockchain has attracted increasing attention and application.
  • A blockchain network can undertake off-chain computing tasks defined by smart contracts. In this case, the node device of each blockchain node in the blockchain network, guided by events generated by the smart contract, calls a locally deployed off-chain computing engine to implement the off-chain computing task.
  • However, the number of off-chain computing engines on a node device is limited, and such engines must follow a specific development paradigm. Existing computing engines that do not conform to the relevant development paradigm therefore require substantial modification before they can be used to implement off-chain computing tasks, so the cost of algorithm migration is high.
  • the object of the present invention is to provide a data processing method, device, electronic equipment and storage medium.
  • According to a first aspect, a data processing method is proposed, applied to a first node device on which a first blockchain node is deployed, where an off-chain computing contract is deployed in the blockchain network to which the first blockchain node belongs.
  • The method includes: listening for a task event generated by the off-chain computing contract for a first computing task; and, when the first blockchain node belongs to the participant nodes corresponding to the first computing task, calling a first standard computing engine deployed on the first node device to execute the first computing task.
  • The first standard computing engine is configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module.
  • The standard execution result is obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
  • According to a second aspect, a data processing device is proposed, applied to a first node device on which a first blockchain node is deployed, where an off-chain computing contract is deployed in the blockchain network to which the first blockchain node belongs.
  • The device includes: an event listening unit, configured to listen for task events generated by the off-chain computing contract for a first computing task; and a task execution unit, configured to call a first standard computing engine deployed on the first node device to execute the first computing task when the first blockchain node belongs to the participant nodes corresponding to the first computing task.
  • The first standard computing engine is configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module.
  • The standard execution result is obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
  • an electronic device including: a processor; and a memory for storing instructions executable by the processor.
  • the processor implements the method described in the first aspect by running the executable instructions.
  • a computer-readable storage medium on which computer instructions are stored, and when the instructions are executed by a processor, the steps of the method described in the first aspect are implemented.
  • Figure 1 is a flow chart of a data processing method provided by an exemplary embodiment.
  • Figure 2a is an architectural schematic diagram of a data processing system provided by an exemplary embodiment.
  • Figure 2b is an architectural schematic diagram of another data processing system provided by an exemplary embodiment.
  • Figure 3 is a schematic diagram of a computing engine interaction scenario provided by an exemplary embodiment.
  • Figure 4 is a schematic structural diagram of a device provided by an exemplary embodiment.
  • Figure 5 is a block diagram of a data processing device provided in an exemplary embodiment.
  • Figure 1 is a flow chart of a data processing method provided by an exemplary embodiment. This method is applied to a first node device deployed with a first blockchain node, and the blockchain network to which the first blockchain node belongs is deployed with an off-chain computing contract; the method includes:
  • S102: Monitor the task event generated by the off-chain computing contract for the first computing task.
  • The off-chain computing contract is an on-chain carrier used to carry off-chain computing tasks.
  • The off-chain computing contract defines the several subtasks included in an off-chain computing task, and describes the data flow within the off-chain computing task and the computing collaboration process among the node devices.
  • Since the off-chain computing contract is deployed on the blockchain network, the participant nodes of the off-chain computing task defined by the off-chain computing contract are limited to the blockchain nodes of the blockchain network.
  • Multiple off-chain computing contracts can be deployed in the same blockchain network, and the number and capabilities of the participant nodes involved in different off-chain computing contracts can be flexibly configured, so off-chain computing tasks of different task types, requirements, and scales can all be deployed on the same blockchain network.
  • To explain how the off-chain computing contract guides the implementation of the off-chain computing task it defines, the implementation logic of off-chain computing tasks is briefly introduced below through the operation of a typical off-chain computing contract.
  • A user can generate the code of an off-chain computing contract through a visual contract orchestration system and deploy it in the blockchain network, so that the off-chain computing contract defines the workflow of one type of off-chain computing task, embodied as several subtasks with an execution dependency order.
  • After the off-chain computing contract is deployed, a user authorized to call it can create and start an off-chain computing task by initiating a task creation transaction to the off-chain computing contract.
  • Upon receiving the task creation transaction, the off-chain computing contract creates a task instance of the off-chain computing task belonging to the initiating user.
  • The task instance maintains the task completion status of the off-chain computing task, specifically the completion status of each subtask under the off-chain computing task.
  • After responding to the task creation transaction and generating the corresponding task instance, the off-chain computing contract further triggers execution of the first subtask of that instance, which is embodied on the off-chain computing contract as generating an event containing the participant nodes of the first subtask.
  • Each blockchain node in the blockchain network can listen for this event, and the node devices of those blockchain nodes that determine that they belong to the participant nodes of the first subtask further call the off-chain computing resources and/or off-chain storage resources matching the first subtask to execute the first subtask off-chain.
  • After execution finishes, the node device where a participant node is located initiates, to the off-chain computing contract, a result return transaction carrying the execution result of the first subtask, so that the off-chain computing contract updates the task completion status of the corresponding task instance.
  • For example, when the execution result of the first subtask is success, the off-chain computing contract marks the task completion status of the first subtask in the corresponding task instance as completed, and then, according to the predefined dependency order of the subtasks included in the off-chain computing task, triggers execution of the next batch of subtasks and generates an event containing the participant nodes of the next batch of subtasks for the blockchain nodes in the blockchain network to listen for. The subsequent process is similar to the handling of the first subtask described above.
  • During the execution of an off-chain computing task, the work performed by the off-chain computing contract itself only includes scheduling tasks such as creating task instances, receiving subtask results, subtask scheduling, and subtask delivery; the actual tasks defined and required by the off-chain computing task, such as data computation, data transfer, and data storage, are not executed on chain. These resource-intensive tasks are instead scheduled to the off-chain side of the corresponding node devices for execution.
  • Through the event listening mechanism and the transaction callback mechanism, a blockchain-based distributed computation is realized: the off-chain computing task is anchored by the off-chain computing contract on the blockchain, off-chain resources are fully utilized while the entire task execution process remains traceable, and trusted information interaction and collaborative computing between different node devices are achieved by relying on the blockchain.
  • Moreover, because off-chain computing tasks are defined in the form of contracts and their design is not constrained by on-chain resources, different off-chain computing contracts can be designed to meet different practical needs, and the on-chain collaboration mode is extended through off-chain resources. A minimal sketch of this scheduling-only contract logic follows.
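  • The sketch below is a minimal, hypothetical Python model, not the patent's actual contract code: a task instance tracks per-subtask completion status, and the contract merely creates instances, records returned results, and emits task events for subtasks whose execution conditions (dependencies) are satisfied. All names and structures are illustrative assumptions.

```python
# Minimal sketch of an off-chain computing contract's scheduling-only role (illustrative).

class TaskInstance:
    def __init__(self, task_id, subtasks, deps, participants):
        self.task_id = task_id
        self.status = {s: "pending" for s in subtasks}   # per-subtask completion status
        self.deps = deps                                  # subtask -> prerequisite subtasks
        self.participants = participants                  # subtask -> participant node ids

class OffChainComputingContract:
    def __init__(self, subtasks, deps, participants):
        self.subtasks, self.deps, self.participants = subtasks, deps, participants
        self.instances, self.events, self._next_id = {}, [], 0

    def create_task(self, creator):
        """Handle a task creation transaction: create an instance and emit the first events."""
        self._next_id += 1  # a real contract would also record the creator for attribution
        inst = TaskInstance(self._next_id, self.subtasks, self.deps, self.participants)
        self.instances[inst.task_id] = inst
        self._emit_ready_events(inst)
        return inst.task_id

    def return_result(self, task_id, subtask, success):
        """Handle a result return transaction: update status and emit the next events."""
        inst = self.instances[task_id]
        inst.status[subtask] = "completed" if success else "failed"
        self._emit_ready_events(inst)

    def _emit_ready_events(self, inst):
        # A subtask is ready when all of its prerequisites are completed.
        for s, st in inst.status.items():
            if st == "pending" and all(inst.status[d] == "completed" for d in inst.deps[s]):
                inst.status[s] = "dispatched"
                self.events.append({"task_id": inst.task_id, "subtask": s,
                                    "participants": inst.participants[s]})

# Example: two subtasks, the second depending on the first.
contract = OffChainComputingContract(
    subtasks=["sub1", "sub2"],
    deps={"sub1": [], "sub2": ["sub1"]},
    participants={"sub1": ["node-1"], "sub2": ["node-2"]})
tid = contract.create_task(creator="user-a")       # emits event for sub1
contract.return_result(tid, "sub1", success=True)  # emits event for sub2
print(contract.events)
```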
  • The off-chain computing contract maintains a task completion status corresponding to the off-chain computing task, and the task completion status describes the completion status of each subtask included in the off-chain computing task. When the first computing task belongs to a subtask of the off-chain computing task, listening for the task event generated by the off-chain computing contract for the first computing task includes: listening for the task event for the first computing task generated by the off-chain computing contract when the task completion status satisfies the execution condition corresponding to the first computing task.
  • The off-chain computing task is represented on the off-chain computing contract as a corresponding task instance, and its task completion status is maintained in that task instance, specifically as the completion status of each subtask.
  • Since the execution dependency order of the subtasks included in the off-chain computing task is predefined, the execution condition of each subtask is also determined, so the off-chain computing contract can determine, based on the completion status of the subtasks, the first computing task that needs to be executed next and thereby initiate a task event for the first computing task.
  • Further, the method also includes: when the execution of the first computing task is completed, initiating, through the first blockchain node, a result return transaction containing the execution result corresponding to the first computing task to the off-chain computing contract, so as to update the task completion status corresponding to the off-chain computing task maintained by the off-chain computing contract.
  • When the node device has finished executing the first computing task by calling resources, it updates, by initiating the result return transaction, the task completion status of the off-chain computing task maintained by the off-chain computing contract, so that the off-chain computing contract can further determine the next subtask to be executed according to the execution dependency order of the subtasks in the off-chain computing task and generate a task event for that next subtask.
  • the entity that monitors the task events generated by the off-chain computing contract and initiates the result return transaction to the off-chain computing contract is specifically the scheduling engine deployed on the first node device.
  • The task event generated by the off-chain computing contract for the first computing task and monitored by the node device records description information of the participant nodes of the first computing task.
  • That the task event includes description information of the participant nodes means that the first computing task specifies the identity information of the blockchain nodes required to participate in it.
  • When the first node device determines that the description information of the participant nodes contains the identification information of the first blockchain node deployed on it, it can determine that the first blockchain node belongs to the participants of the first computing task, so the first node device needs to respond to and execute the first computing task; if the description information of the participant nodes does not contain the identification information of the first blockchain node, the first node device can determine that the first blockchain node deployed on it is not a participant of the first computing task, and the first node device will not respond by executing the first computing task.
  • The task event also records the task identifiers of the off-chain computing task and of the first computing task, so that different tasks and subtasks can be distinguished.
  • The first computing task further records the computation and data transfer operations it needs to perform and specifies the source of the required data. This information informs each node device of the task type and implementation of the first computing task, so that, after determining the callable resources corresponding to that task type and implementation, the node device executes the first computing task as the task expects. A hypothetical event layout and the corresponding participant check are sketched below.
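  • The following sketch shows what such an event and the scheduling engine's participant check might look like; the field names are assumptions for illustration, not the patent's actual event schema.

```python
# Hypothetical shape of a task event and the scheduling engine's participant check.
task_event = {
    "offchain_task_id": "task-001",           # identifier of the off-chain computing task
    "subtask_id": "sub-2",                    # identifier of the first computing task (a subtask)
    "participants": ["node-1", "node-3"],     # identification info of the participant blockchain nodes
    "task_type": "aggregate",                 # task type / implementation expected by the task
    "inputs": {"source": "result-of:sub-1"},  # where the required data comes from
}

LOCAL_NODE_ID = "node-1"  # identifier of the first blockchain node deployed on this node device

def on_task_event(event):
    # The scheduling engine only responds when its own node is listed as a participant.
    if LOCAL_NODE_ID not in event["participants"]:
        return  # not a participant: ignore the event
    # Otherwise look up a locally callable engine supporting the declared task type
    # and dispatch the first computing task to it (see the engine sketches below).
    print(f"dispatching {event['subtask_id']} of {event['offchain_task_id']}")

on_task_event(task_event)
```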
  • The task completion status is updated by the off-chain computing contract in response to a transaction corresponding to the off-chain computing task, where the transaction corresponding to the off-chain computing task includes the task creation transaction corresponding to the off-chain computing task, or a result return transaction initiated by any node device upon finishing execution of any one of the subtasks.
  • The off-chain computing contract maintains task completion statuses respectively corresponding to one or more off-chain computing tasks.
  • Normally, an off-chain computing contract defines only one type of off-chain computing task, but multiple task instances corresponding to that off-chain computing task can be created, and each task instance records the task completion status corresponding to that instance. The multiple task instances maintained on the off-chain computing contract may therefore be created by different users each initiating a task creation transaction to the off-chain computing contract, or by the same user initiating task creation transactions multiple times; these task instances all share the same execution logic, that is, the tasks maintained by the off-chain computing contract have the same task type.
  • S104: When the first blockchain node belongs to the participant node corresponding to the first computing task, call the first standard computing engine deployed on the first node device to execute the first computing task.
  • The first standard computing engine is configured to: send the standard task request corresponding to the first computing task to the conversion module, which converts the standard task request into a heterogeneous task request recognizable by the first heterogeneous computing engine; and receive the standard execution result returned by the conversion module.
  • the standard execution result is obtained by the conversion module converting the heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
  • When the first node device determines that the first blockchain node deployed on it belongs to the participant nodes corresponding to the first computing task, it triggers execution of the first computing task. The first node device then searches locally for callable resources that can support execution of the first computing task. Since the first standard computing engine deployed on the first node device declares support for the task type of the first computing task recorded in the task event, the first node device can call the first standard computing engine to execute the first computing task. In the embodiments of this specification, the entity that searches locally for callable resources and calls the off-chain computing engine to execute computing tasks is specifically the scheduling engine deployed on the first node device.
  • Although the first standard computing engine declares to the first node device that it is capable of processing the first computing task, it does not directly participate in the actual execution of the first computing task, such as data computation; it serves only as an intermediary between the scheduling engine deployed on the first node device and the first heterogeneous computing engine that actually executes the first computing task, communicating and converting task requests and task execution results.
  • For an off-chain computing engine to be directly callable by the first node device, it must comply with a set of standardized development paradigms that support execution of the off-chain computing tasks defined by the off-chain computing contract; that is, only an off-chain computing engine built according to the specific development paradigm can support execution of the first computing task.
  • The development paradigm is mainly reflected in constraints on the programming language used to write the off-chain computing engine and/or on the supported network transport layer protocols. These constraints arise because the on-chain/off-chain computing system (the set of callable resources such as the scheduling engine, computing engines, and data engines on each node device where a blockchain node is deployed, used to execute the off-chain computing tasks defined by off-chain computing contracts) only provides SDKs (Software Development Kits) for specific programming languages and specific network transport layer protocols.
  • The heterogeneous computing engine in the embodiments of this specification refers to an off-chain computing engine that does not conform to the above development paradigm and therefore cannot be directly called by the first node device.
  • The standard computing engine in the embodiments of this specification refers to an off-chain computing engine that conforms to the above development paradigm.
  • Under the above development paradigm, the fact that an off-chain computing engine can be directly called by the first node device is reflected at the execution level as follows: when the scheduling engine calls the first standard computing engine, what it sends is the standard task request of the first computing task, and this standard task request can be recognized and responded to by the first standard computing engine but not by the first heterogeneous computing engine. Correspondingly, the first heterogeneous computing engine has its own calling specification: it can recognize the heterogeneous task request it defines, execute the corresponding computing task in response to that request, and, upon completion, generate a heterogeneous execution result in its own format, which cannot be directly recognized by the scheduling engine.
  • Therefore, the first heterogeneous computing engine cannot be directly linked to, or directly called by, the scheduling engine; unless the first heterogeneous computing engine or the on-chain/off-chain computing system is extensively modified, it cannot be introduced into the on-chain/off-chain computing system to participate in the execution of off-chain computing tasks. For this reason, in the embodiments of this specification, the first standard computing engine is additionally set up in the node device as an intermediary connecting the existing first heterogeneous computing engine.
  • In this way, the scheduling engine can call the first heterogeneous computing engine as seamlessly as it calls an off-chain computing engine that conforms to the relevant development paradigm, and the first heterogeneous computing engine that does not conform to the relevant development paradigm can access the on-chain/off-chain computing system with no modification or only minor modification, reducing development cost.
  • That the first node device calls the first standard computing engine to execute the first computing task specifically means that the scheduling engine deployed on the first node device sends the standard task request it generates for the first computing task to the first standard computing engine. Each first standard computing engine uniquely corresponds to one first heterogeneous computing engine. After receiving the standard task request, the first standard computing engine has it converted by the conversion module into a heterogeneous task request recognizable by the corresponding first heterogeneous computing engine, which then receives the heterogeneous task request.
  • After receiving the heterogeneous task request, the first heterogeneous computing engine executes the first computing task, that is, it performs the corresponding computing operations according to the computing task type, input data, and other parameters carried in the heterogeneous task request. When the first heterogeneous computing engine finishes executing the first computing task, the heterogeneous execution result it generates for the first computing task is converted by the conversion module into a standard execution result recognizable by the first standard computing engine and the scheduling engine, so that the first standard computing engine finally passes the standard execution result received from the conversion module back to the scheduling engine, which carries it in the result return transaction corresponding to the first computing task that it initiates to the off-chain computing contract. This conversion flow is sketched below.
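  • The following sketch summarizes the call chain, assuming simple dictionary-based request and result formats invented for the example (the actual formats are not specified here): the standard engine forwards a converted request to the heterogeneous engine and converts its result back for the scheduling engine.

```python
# Sketch of: scheduling engine -> standard engine -> conversion module -> heterogeneous engine,
# and back. All request/result formats are illustrative assumptions.

class ConversionModule:
    """Converts between the standard format and the heterogeneous engine's own format."""
    def to_heterogeneous(self, std_request):
        # e.g. rename fields and re-encode parameters for the legacy engine
        return {"op": std_request["task_type"], "args": std_request["inputs"]}

    def to_standard(self, het_result):
        return {"status": "success" if het_result["ok"] else "failed",
                "output": het_result["value"]}

class HeterogeneousEngine:
    """Existing engine with its own calling convention (does not follow the paradigm)."""
    def run(self, het_request):
        value = sum(het_request["args"])            # stand-in for the real computation
        return {"ok": True, "value": value}

class StandardEngine:
    """Paradigm-conforming engine; acts only as an intermediary, not as the executor."""
    def __init__(self, conversion, heterogeneous):
        self.conversion, self.heterogeneous = conversion, heterogeneous

    def execute(self, std_request):
        het_request = self.conversion.to_heterogeneous(std_request)  # standard -> heterogeneous
        het_result = self.heterogeneous.run(het_request)             # actual off-chain execution
        return self.conversion.to_standard(het_result)               # heterogeneous -> standard

# The scheduling engine sends a standard task request and receives a standard result,
# which it later carries in the result return transaction to the off-chain computing contract.
engine = StandardEngine(ConversionModule(), HeterogeneousEngine())
print(engine.execute({"task_type": "sum", "inputs": [1, 2, 3]}))  # {'status': 'success', 'output': 6}
```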
  • The first heterogeneous computing engine is deployed on the first node device, or the first heterogeneous computing engine is not deployed on the first node device and a network connection is established between the first heterogeneous computing engine and the first standard computing engine.
  • On the one hand, the first heterogeneous computing engine can be deployed on the first node device; in this case a local connection is established between the first standard computing engine and the first heterogeneous computing engine, and they interact through local calls. Alternatively, the first heterogeneous computing engine can be deployed outside the first node device, in which case a network connection is established between the heterogeneous computing engine on the external device and the standard computing engine deployed on the first node device.
  • The network connection can be established using a peer-to-peer architecture or a client-server architecture. When a client-server architecture is used, the first heterogeneous computing engine can serve as the client with the first standard computing engine as the server, or the first heterogeneous computing engine can serve as the server with the first standard computing engine as the client.
  • The network connection is established between the first standard computing engine and the first heterogeneous computing engine through a network protocol supported by the first heterogeneous computing engine.
  • The heterogeneous computing engine is an existing computing engine whose supported network transport layer protocol is already fixed, and the specific network transport layer protocol supported by the scheduling engine on the first node device is also fixed. This means that even if the programming language of the heterogeneous computing engine conforms to the development paradigm of the on-chain/off-chain computing system, an inconsistency between its network transport layer protocol and the protocol supported by the scheduling engine will still prevent the heterogeneous computing engine from directly accessing the scheduling engine and thus from participating in the execution of off-chain computing tasks.
  • Therefore, the embodiments of this specification introduce the standard computing engine as an intermediary to perform conversion and adaptation of the network transport layer protocol: a network connection based on a first transport layer protocol supported by the scheduling engine is established between the first standard computing engine and the scheduling engine, and a network connection based on a second transport layer protocol supported by the first heterogeneous computing engine is established between the first standard computing engine and the first heterogeneous computing engine. The network transmission obstacle caused by the protocol inconsistency between the scheduling engine and the first heterogeneous computing engine is thereby overcome: the standard computing engine, which supports multiple network protocols at the same time, serves as a bridge between the scheduling engine and the first heterogeneous computing engine, so that heterogeneous computing engines with different protocols can be connected to the on-chain/off-chain computing system without any modification in network transmission. A sketch of this protocol bridging follows.
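  • As an illustration of the transport bridging, the sketch below models the standard computing engine speaking one encoding toward the scheduling engine and a different one toward the heterogeneous engine. Both "protocols" are simple stand-ins invented for the example; the patent does not name concrete protocols.

```python
# Sketch of transport-layer bridging by the standard computing engine:
# it speaks the scheduling engine's protocol on one side and the heterogeneous
# engine's protocol on the other. The two "protocols" below are simple stand-ins.
import json

class FirstTransport:
    """Stand-in for the transport/encoding the scheduling engine supports (assumption)."""
    def encode(self, msg): return json.dumps(msg).encode("utf-8")
    def decode(self, raw): return json.loads(raw.decode("utf-8"))

class SecondTransport:
    """Stand-in for the transport/encoding the heterogeneous engine already supports."""
    def encode(self, msg): return "&".join(f"{k}={v}" for k, v in msg.items()).encode()
    def decode(self, raw): return dict(kv.split("=", 1) for kv in raw.decode().split("&"))

class BridgingStandardEngine:
    """Receives requests encoded for the first protocol and forwards them over the second."""
    def __init__(self, heterogeneous_endpoint):
        self.first, self.second = FirstTransport(), SecondTransport()
        self.heterogeneous_endpoint = heterogeneous_endpoint  # callable standing in for a socket

    def handle(self, raw_from_scheduler):
        request = self.first.decode(raw_from_scheduler)
        raw_for_het = self.second.encode(request)             # re-encode for the legacy engine
        raw_het_result = self.heterogeneous_endpoint(raw_for_het)
        result = self.second.decode(raw_het_result)
        return self.first.encode(result)                      # back in the scheduler's format

# Fake heterogeneous endpoint that answers in its own wire format.
def legacy_engine(raw):
    req = SecondTransport().decode(raw)
    return SecondTransport().encode({"task": req["task"], "status": "done"})

bridge = BridgingStandardEngine(legacy_engine)
print(bridge.handle(FirstTransport().encode({"task": "sub-2"})))
```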
  • The conversion module is an interface program of the first standard computing engine or of the first heterogeneous computing engine.
  • Figure 2a and Figure 2b are both architectural schematic diagrams of a data processing system provided by an exemplary embodiment, and both depict the case where the first heterogeneous computing engine is not deployed on the first node device.
  • In Figure 2a, the conversion module is deployed on the first node device as an interface program of the first standard computing engine.
  • In Figure 2b, the conversion module is deployed outside the first node device as an interface program of the first heterogeneous computing engine, and is deployed on the external device together with the first heterogeneous computing engine.
  • The first standard computing engine is further configured to: obtain the input data that the first computing task depends on from the first node device, and carry the input data in the standard task request. Since the first heterogeneous computing engine cannot directly access the on-chain/off-chain computing system but must use the standard computing engine as an intermediary, it lacks the ability to call the on-chain/off-chain computing system directly. For example, when executing the first computing task, the first heterogeneous computing engine may need to call the scheduling engine on the first node device to access on-chain information of the blockchain network (block information, contract status, and so on), but it cannot do so because it does not know the network address and calling rules of the scheduling engine (including the definition of the request structure and the definition of the response structure).
  • Therefore, the first standard computing engine can collect in advance the data of the on-chain/off-chain computing system that the first heterogeneous computing engine will depend on when executing the first computing task, carry this data in the standard task request as the input data that the first computing task depends on, and then send the standard task request carrying the input data to the conversion module, so that the first heterogeneous computing engine obtains these input data directly without having to fetch them from the on-chain/off-chain computing system during execution of the first computing task. A sketch of this pre-collection follows.
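  • The following is a minimal sketch of the input-data pre-collection, assuming placeholder chain-query helpers and field names that are not a real node API.

```python
# Sketch: the standard computing engine pre-fetches on-chain data the heterogeneous
# engine will need and embeds it in the standard task request as input data, so the
# heterogeneous engine never has to call back into the on-chain/off-chain system.
# The chain-query helpers below are placeholders, not a real node API.

def query_block_header(height):            # placeholder for an on-chain lookup
    return {"height": height, "hash": "0xabc..."}

def query_contract_state(contract, key):   # placeholder for an on-chain lookup
    return {"contract": contract, "key": key, "value": 42}

def build_standard_task_request(task_event):
    # Collect, in advance, the on-chain data the task declares it depends on.
    inputs = {
        "block_header": query_block_header(height=100),
        "contract_state": query_contract_state("off_chain_contract", "threshold"),
    }
    return {
        "task_id": task_event["subtask_id"],
        "task_type": task_event["task_type"],
        "inputs": inputs,                   # carried inside the standard task request
    }

event = {"subtask_id": "sub-2", "task_type": "aggregate"}
print(build_standard_task_request(event))
```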
  • The first heterogeneous computing engine is triggered to send the heterogeneous execution result to the conversion module when the first computing task is completed, and/or the first heterogeneous computing engine is triggered to send the heterogeneous execution result to the conversion module when it receives a result query request for the first computing task.
  • The first heterogeneous computing engine can send the generated heterogeneous execution result to the conversion module at the moment the first computing task is completed, and the conversion module returns the converted standard execution result to the first standard computing engine. Alternatively, after completing execution of the first computing task, the first heterogeneous computing engine can store the corresponding heterogeneous execution result locally and wait until the first standard computing engine sends it a result query request for the first computing task, at which point it sends the heterogeneous execution result to the conversion module, and the conversion module returns the converted standard execution result to the first standard computing engine.
  • In other words, the embodiments of this specification provide at least two ways of obtaining the heterogeneous execution result corresponding to the first computing task, namely actively pushing the result and waiting for a result query, to adapt to a variety of application scenarios; both modes are sketched below.
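  • The sketch below contrasts the two delivery modes, push-on-completion versus store-and-wait-for-query, using an invented in-memory engine; it is an assumption-laden illustration rather than the patent's implementation.

```python
# Sketch of the two result-delivery modes: push on completion, or store locally and
# deliver only when a result query request arrives. Illustrative only.

class HeterogeneousEngineWithResults:
    def __init__(self, conversion_module, push=True):
        self.conversion_module = conversion_module
        self.push = push
        self._results = {}                        # locally stored heterogeneous results

    def finish_task(self, task_id, het_result):
        self._results[task_id] = het_result
        if self.push:                             # mode 1: push as soon as the task finishes
            self.conversion_module(task_id, het_result)

    def on_result_query(self, task_id):
        if not self.push and task_id in self._results:   # mode 2: wait for a query
            self.conversion_module(task_id, self._results[task_id])

def conversion_module(task_id, het_result):       # converts and hands back to the standard engine
    print(f"converted result for {task_id}: {het_result}")

pusher = HeterogeneousEngineWithResults(conversion_module, push=True)
pusher.finish_task("sub-2", {"ok": True, "value": 6})     # delivered immediately

poller = HeterogeneousEngineWithResults(conversion_module, push=False)
poller.finish_task("sub-3", {"ok": True, "value": 9})     # stored, not delivered yet
poller.on_result_query("sub-3")                           # delivered on query
```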
  • In some cases, multiple node devices jointly participate in executing the first computing task; specifically, data interaction occurs among multiple computing engines that are deployed on, or connected by network connections to, those node devices.
  • For the first heterogeneous computing engine that has established a network connection with the first node device, it may also interact with other computing engines while executing the first computing task, and this interaction between the first heterogeneous computing engine and other computing engines can be achieved in multiple ways.
  • The first heterogeneous computing engine is configured to: during the process of executing the first computing task in response to the heterogeneous task request, obtain first data sent by other computing engines, for use in the process of executing the first computing task; and/or send second data generated in the process of executing the first computing task in response to the heterogeneous task request to the other computing engines, so that the other computing engines use the second data in the process of executing the first computing task.
  • The first heterogeneous computing engine can learn from the heterogeneous task request which computing engines participate in the first computing task. While executing the first computing task, the first heterogeneous computing engine can receive the first data from other computing engines that are also executing the first computing task, and the first data is then used by the first heterogeneous computing engine in its subsequent execution of the first computing task. It may also send the second data it generates to other computing engines that, as defined in the first computing task, need to obtain the second data and are executing the first computing task, so that those computing engines can use the second data in their subsequent execution of the first computing task.
  • Figure 3 is a schematic diagram of a computing engine interaction scenario provided by an exemplary embodiment. Assume that the participant nodes corresponding to the first computing task include a first blockchain node, a second blockchain node, and a third blockchain node, and that the computing engines involved in executing the first computing task include standard computing engine A deployed on the first node device, off-chain computing engine B deployed on the second node device, standard computing engine B deployed on the third node device, heterogeneous computing engine A that has a network connection with standard computing engine A, and heterogeneous computing engine B that has a network connection with standard computing engine B.
  • Assume further that the definition of the first computing task includes a data interaction process in which heterogeneous computing engine A obtains the first data from off-chain computing engine B and sends the second data it generates to heterogeneous computing engine B. For heterogeneous computing engine A, it can obtain the first data provided by off-chain computing engine B through the network connection established with off-chain computing engine B, and send the second data it generates to heterogeneous computing engine B through the network connection established between them, thereby completing the data interaction process between computing engines defined by the first computing task and finally completing the first computing task through the collaborative interaction of multiple computing engines; a sketch of this direct exchange follows.
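  • The direct-exchange variant can be pictured with the toy sketch below, loosely following the Figure 3 roles; the connections are simulated in memory and all payloads are invented for illustration.

```python
# Sketch of direct data exchange between engines cooperating on the same first computing task:
# an engine receives "first data" from a peer, uses it, and sends the "second data" it produces
# to another peer. Connections are simulated in memory.

class CooperatingEngine:
    """Engine that can receive data from peers and send data to peers over its own connections."""
    def __init__(self, name):
        self.name, self.inbox = name, []

    def receive(self, payload):
        self.inbox.append(payload)

    def send(self, peer, payload):
        print(f"{self.name} -> {peer.name}: {payload}")
        peer.receive(payload)

# Engines named after the Figure 3 scenario (names are just labels in this sketch).
off_chain_b = CooperatingEngine("off-chain engine B")
het_a = CooperatingEngine("heterogeneous engine A")
het_b = CooperatingEngine("heterogeneous engine B")

# Off-chain engine B produces the first data and sends it to heterogeneous engine A.
off_chain_b.send(het_a, {"first_data": [1, 2, 3]})

# Heterogeneous engine A uses the first data while executing the first computing task,
# produces second data, and sends it on to heterogeneous engine B.
second_data = {"second_data": sum(het_a.inbox[0]["first_data"])}
het_a.send(het_b, second_data)
```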
  • Alternatively, the first heterogeneous computing engine is configured to: in the process of executing the first computing task in response to the heterogeneous task request, obtain, through the first standard computing engine, the first data received from other computing engines, for use in the process of executing the first computing task; and/or send the second data generated in the process of executing the first computing task in response to the heterogeneous task request to the other computing engines through the first standard computing engine, so that the other computing engines use the second data during execution of the first computing task.
  • In this embodiment, the heterogeneous computing engine does not need to establish network connections with the other computing engines participating in the execution of the first computing task; it only needs the network connection with its corresponding standard computing engine.
  • Network connections between a heterogeneous computing engine and other off-chain computing engines that conform to the development paradigm of the on-chain/off-chain computing system may not be established properly (for example, because of inconsistent network protocols), and even if a connection can be established, the two sides may not recognize each other's interaction data (for example, because structure definitions differ across programming languages). To ensure effective interaction, data interaction over direct network connections between heterogeneous computing engines and other computing engines should therefore be avoided as far as possible.
  • Instead, by establishing network connections among all off-chain computing engines that participate in executing the first computing task and conform to the development paradigm of the on-chain/off-chain computing system, a heterogeneous computing engine gains the ability to interact with other computing engines: its corresponding standard computing engine serves as an intermediary that converts and forwards the interaction data, thereby ensuring the effectiveness of the interaction between the heterogeneous computing engine and other computing engines.
  • Continuing the Figure 3 scenario, assume that heterogeneous computing engine A has no direct network connection with heterogeneous computing engine B, off-chain computing engine A, or off-chain computing engine B, and assume again that the definition of the first computing task includes a data interaction process in which heterogeneous computing engine A obtains the first data from off-chain computing engine B and sends the second data it generates to heterogeneous computing engine B.
  • Off-chain computing engine B sends the first data it generates while executing the first computing task to standard computing engine A. Heterogeneous computing engine A, on the one hand, receives from standard computing engine A the first data that was obtained from off-chain computing engine B and converted by the conversion module into a form recognizable by heterogeneous computing engine A; on the other hand, the second data it generates while executing the first computing task is converted by the conversion module into second data recognizable by the on-chain/off-chain computing system and sent to standard computing engine A.
  • Standard computing engine A then forwards the second data recognizable by the on-chain/off-chain computing system to standard computing engine B, and standard computing engine B converts it through its conversion module into second data recognizable by heterogeneous computing engine B and finally sends it to heterogeneous computing engine B. In this way, the data interaction process between computing engines defined by the first computing task is completed, and the first computing task is finally completed through the collaborative interaction of multiple computing engines; this relayed exchange is sketched below.
  • The other computing engines involved in the embodiments of this specification include: off-chain computing engines other than the first standard computing engine deployed on any node device, or heterogeneous computing engines other than the first heterogeneous computing engine that have established network connections with the standard computing engines deployed on any node device, where the blockchain node deployed on any such node device belongs to the participant nodes.
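  • The relayed variant can be pictured with the sketch below: a heterogeneous engine's peer data travels through its standard computing engine and conversion module in both directions. The conversion functions and data formats are assumptions made for the example.

```python
# Sketch of relayed data exchange when a heterogeneous engine has no direct connection to
# its peers: its corresponding standard computing engine converts and forwards the data.
# A minimal illustration of the Figure 3 relay path; formats and names are assumptions.

def conversion_a_to_het(data):    # standard format -> format recognizable by heterogeneous engine A
    return {"payload": data["value"], "fmt": "het-a"}

def conversion_a_to_std(data):    # heterogeneous engine A's format -> standard format
    return {"value": data["payload"], "fmt": "std"}

def relay_first_data(off_chain_engine_b_output):
    """off-chain engine B -> standard engine A -> (conversion) -> heterogeneous engine A."""
    received_by_standard_a = off_chain_engine_b_output        # arrives in the standard format
    return conversion_a_to_het(received_by_standard_a)        # handed to heterogeneous engine A

def relay_second_data(het_a_output, forward_to_standard_b):
    """heterogeneous engine A -> (conversion) -> standard engine A -> standard engine B -> het B."""
    std_data = conversion_a_to_std(het_a_output)              # now recognizable by the system
    return forward_to_standard_b(std_data)                    # standard engine B converts for het B

first_data_for_het_a = relay_first_data({"value": 7, "fmt": "std"})
print(first_data_for_het_a)

# Standard engine B's side: convert into heterogeneous engine B's format before delivery.
delivered = relay_second_data({"payload": 14, "fmt": "het-a"},
                              forward_to_standard_b=lambda d: {"payload": d["value"], "fmt": "het-b"})
print(delivered)
```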
  • Figure 4 is a schematic structural diagram of a device provided by an exemplary embodiment.
  • the device includes a processor 402, an internal bus 404, a network interface 406, a memory 408 and a non-volatile memory 410.
  • the processor 402 reads the corresponding computer program from the non-volatile memory 410 into the memory 408 and then runs it.
  • In addition to a software implementation, this specification does not exclude other implementations; the execution subject of the following processing flow is not limited to logic units and can also be hardware or a logic device.
  • Figure 5 is a block diagram of a data processing device provided in this specification according to an exemplary embodiment.
  • The device can be applied in the equipment shown in Figure 4 to implement the technical solution of this specification. The device is applied to a first node device on which a first blockchain node is deployed, where an off-chain computing contract is deployed in the blockchain network to which the first blockchain node belongs.
  • The device includes: an event listening unit 501, configured to listen for the task event generated by the off-chain computing contract for the first computing task; and a task execution unit 502, configured to call the first standard computing engine deployed on the first node device to execute the first computing task when the first blockchain node belongs to the participant node corresponding to the first computing task.
  • The first standard computing engine is configured to: send the standard task request corresponding to the first computing task to the conversion module, which converts the standard task request into a heterogeneous task request recognizable by the first heterogeneous computing engine; and receive the standard execution result returned by the conversion module.
  • the standard execution result is obtained by the conversion module converting the heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
  • the conversion module is an interface program of the first standard computing engine or the first heterogeneous computing engine.
  • the first standard computing engine is also configured to: obtain input data dependent on the first computing task from the first node device, and carry the input data in the standard task request.
  • The first heterogeneous computing engine is triggered to send the heterogeneous execution result to the conversion module when the first computing task is completed, and/or the first heterogeneous computing engine is triggered to send the heterogeneous execution result to the conversion module when it receives a result query request for the first computing task.
  • The first heterogeneous computing engine is deployed on the first node device, or the first heterogeneous computing engine is not deployed on the first node device and a network connection is established between the first heterogeneous computing engine and the first standard computing engine.
  • The network connection is established between the first standard computing engine and the first heterogeneous computing engine through a network protocol supported by the first heterogeneous computing engine.
  • The first heterogeneous computing engine is configured to: during the process of executing the first computing task in response to the heterogeneous task request, obtain first data sent by other computing engines, for use in the process of executing the first computing task; and/or send second data generated in the process of executing the first computing task in response to the heterogeneous task request to the other computing engines, so that the other computing engines use the second data during execution of the first computing task.
  • Alternatively, the first heterogeneous computing engine is configured to: during the process of executing the first computing task in response to the heterogeneous task request, obtain, through the first standard computing engine, first data received from other computing engines, for use in the process of executing the first computing task; and/or send second data generated in the process of executing the first computing task in response to the heterogeneous task request to the other computing engines through the first standard computing engine, so that the other computing engines use the second data in the process of executing the first computing task.
  • The other computing engines include: off-chain computing engines other than the first standard computing engine deployed on any node device, or heterogeneous computing engines other than the first heterogeneous computing engine that have established network connections with the standard computing engines deployed on any node device, where the blockchain node deployed on any such node device belongs to the participant nodes.
  • The off-chain computing contract maintains a task completion status corresponding to the off-chain computing task, and the task completion status describes the completion status of each subtask included in the off-chain computing task.
  • In the case where the first computing task belongs to a subtask of the off-chain computing task, the event listening unit 501 is specifically configured to: listen for the task event for the first computing task generated by the off-chain computing contract when the task completion status satisfies the execution condition corresponding to the first computing task.
  • The task completion status is updated by the off-chain computing contract in response to a transaction corresponding to the off-chain computing task, where the transaction corresponding to the off-chain computing task includes the task creation transaction corresponding to the off-chain computing task, or a result return transaction initiated by any node device upon finishing execution of any one of the subtasks.
  • The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
  • Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic.
  • In addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for implementing various functions can also be regarded as structures within the hardware component, or even as both software modules for implementing the methods and structures within the hardware component.
  • the systems, devices, modules or units described in the above embodiments may be implemented by computer chips or entities, or by products with certain functions.
  • a typical implementation device is a server system.
  • Specifically, the computer that implements the functions of the above embodiments may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet, a wearable device, or a combination of any of these devices.
  • For convenience of description, the above devices are described with their functions divided into various modules. Of course, when one or more embodiments of this specification are implemented, the functions of the modules may be implemented in one or more pieces of software and/or hardware, modules that implement the same function may be implemented by a combination of multiple sub-modules or sub-units, and so on.
  • The device embodiments described above are merely illustrative. The division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-permanent storage in computer-readable media, random access memory (RAM) and/or non-volatile memory in the form of read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and may store information by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD), magnetic tape storage, graphene storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
  • One or more embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so on) containing computer-usable program code.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • program modules may also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communications network.
  • program modules may be located in both local and remote computer storage media including storage devices.

Abstract

This specification provides a data processing method and apparatus, an electronic device, and a storage medium, applied to a first node device on which a first blockchain node is deployed, where an off-chain computing contract is deployed in the blockchain network to which the first blockchain node belongs. The method includes: listening for a task event generated by the off-chain computing contract for a first computing task; and, when the first blockchain node belongs to the participant nodes corresponding to the first computing task, calling a first standard computing engine deployed on the first node device to execute the first computing task. The first standard computing engine is configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, where the standard execution result is obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.

Description

Data processing method and apparatus, electronic device, and storage medium
Technical Field
The embodiments of this specification belong to the field of blockchain technology, and in particular relate to a data processing method and apparatus, an electronic device, and a storage medium.
Background
Blockchain is a new application model of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and cryptographic algorithms. In a blockchain system, data blocks are linked in chronological order into a chained data structure, producing a distributed ledger that is cryptographically guaranteed to be tamper-proof and unforgeable. Owing to characteristics such as decentralization, immutability of information, and autonomy, blockchain has attracted increasing attention and application.
A blockchain network can undertake off-chain computing tasks defined by smart contracts. In this case, the node device of each blockchain node in the blockchain network, guided by events generated by the smart contract, calls a locally deployed off-chain computing engine to implement the off-chain computing task. However, the number of off-chain computing engines on a node device is limited and such engines must follow a specific development paradigm, so existing computing engines that do not conform to the relevant development paradigm require substantial modification before they can be used to implement off-chain computing tasks, and the cost of algorithm migration is high.
Summary
An object of the present invention is to provide a data processing method and apparatus, an electronic device, and a storage medium.
According to a first aspect of one or more embodiments of this specification, a data processing method is proposed, applied to a first node device on which a first blockchain node is deployed, where an off-chain computing contract is deployed in the blockchain network to which the first blockchain node belongs. The method includes: listening for a task event generated by the off-chain computing contract for a first computing task; and, when the first blockchain node belongs to the participant nodes corresponding to the first computing task, calling a first standard computing engine deployed on the first node device to execute the first computing task. The first standard computing engine is configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, where the standard execution result is obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
According to a second aspect of one or more embodiments of this specification, a data processing apparatus is proposed, applied to a first node device on which a first blockchain node is deployed, where an off-chain computing contract is deployed in the blockchain network to which the first blockchain node belongs. The apparatus includes: an event listening unit, configured to listen for the task event generated by the off-chain computing contract for a first computing task; and a task execution unit, configured to call a first standard computing engine deployed on the first node device to execute the first computing task when the first blockchain node belongs to the participant nodes corresponding to the first computing task. The first standard computing engine is configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, where the standard execution result is obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
According to a third aspect of one or more embodiments of this specification, an electronic device is proposed, including a processor and a memory for storing instructions executable by the processor, where the processor implements the method described in the first aspect by running the executable instructions.
According to a fourth aspect of one or more embodiments of this specification, a computer-readable storage medium is proposed, on which computer instructions are stored, where the instructions, when executed by a processor, implement the steps of the method described in the first aspect.
In the embodiments of this specification, a standard computing engine conforming to the development paradigm that supports the execution of off-chain computing tasks is deployed on the node device, and a conversion module is introduced as a conversion medium between the standard computing engine and a heterogeneous computing engine. In this way, a heterogeneous computing engine that does not conform to the relevant development paradigm can support the execution of off-chain computing tasks with no modification or only minor modification, which reduces algorithm migration costs and at the same time extends the node device's limited computing engine resources to support more types of off-chain computing tasks.
Brief Description of the Drawings
In order to describe the technical solutions of the embodiments of this specification more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification, and those of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Figure 1 is a flowchart of a data processing method provided by an exemplary embodiment.
Figure 2a is an architectural schematic diagram of a data processing system provided by an exemplary embodiment.
Figure 2b is an architectural schematic diagram of another data processing system provided by an exemplary embodiment.
Figure 3 is a schematic diagram of a computing engine interaction scenario provided by an exemplary embodiment.
Figure 4 is a schematic structural diagram of a device provided by an exemplary embodiment.
Figure 5 is a block diagram of a data processing device provided by an exemplary embodiment.
具体实施方式
为了使本技术领域的人员更好地理解本说明书中的技术方案,下面将结合本说明书实施例中的附图,对本说明书实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本说明书一部分实施例,而不是全部的实施例。基于本说明书中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都应当属于本说明书保护的范围。
图1是一示例性实施例提供的一种数据处理方法的流程图。该方法应用于部署有第一区块链节点的第一节点设备,第一区块链节点所属的区块链网络部署有链下计算合约;所述方法包括:
S102:监听所述链下计算合约生成的针对第一计算任务的任务事件;
在本说明书实施例中,链下计算合约是一个用于承载链下计算任务的链上载体,链下计算合约中定义有链下计算任务包含的若干子任务,用于描述一个链下计算任务中的数据流向和各节点设备的计算协作过程。由于链下计算合约部署在区块链网络上,因此限定了链下计算合约所定义的链下计算任务的参与方节点不超过区块链网络中的各区块链节点的范围。显然,同一个区块链网络中可以部署多个链下计算合约,而不同的链下计算合约其所涉及的参与方节点的数量和性能均可以灵活配置,这使得依托于同一个区块链网络可以实现不同任务类型、任务需求和任务规模的链下计算任务的部署。
为了说明链下计算合约如何指导以实现其定义的链下计算任务,下面将通过一个典型的链下计算合约的运作过程来简单介绍链下计算任务的实现逻辑。用户可以通过可视化合约编排系统生成链下计算合约的代码并在区块链网络中部署链下计算合约,从而使得链下计算合约定义了一种类型的链下计算任务的工作流程,它体现为若干个具有执行依赖顺序的子任务。在链下计算合约部署成功后,有权限调用该链下计算合约的用户就可以通过向链下计算合约发起任务创建交易的方式来创建并启动一个链下计算任务,链下计算合约在接收到任务创建交易后会相应地创建一个归属于发起方用户的链下计算任务的任务实例,该任务实例中维护有链下计算任务的任务完成状态,具体体现为链下计算任务下各子任务的任务完成状态。链下计算合约响应于任务创建交易并生成对应的任务实例后,会进一步触发执行该实例对应的第一个子任务,在链下计算合约上体现为生成包含第一个子任务的参与方节点的事件,区块链网络中的各区块链节点都可以监听 该事件,并且那些判定自身属于第一个子任务的参与方节点的区块链节点所处的节点设备会进一步调用匹配于该第一个子任务的链下计算资源和/或链下存储资源来在链下执行该第一个子任务,最后,参与方节点所处的节点设备在执行完毕后,会进一步向链下计算合约发起携带有第一个子任务的执行结果的结果返回交易,从而使得链下计算合约更新对应任务实例的任务完成状态,例如当第一个子任务的执行结果为执行成功时,链下计算合约就会将对应任务实例中第一个子任务的任务完成状态标记为已完成,从而按照预定义的链下计算任务包含的各子任务的依赖顺序触发执行下一批子任务,进而生成包含下一批子任务的参与方节点的事件以供区块链网络中的各区块链节点监听,其后续过程与前述处理第一个子任务的过程类似。由此一来,就形成了一个“链下计算合约更新任务完成状态→链下计算合约生成子任务事件→区块链节点监听子任务事件并由被指定的节点设备执行子任务→节点设备向链下计算合约发起子任务的结果返回交易→链下计算合约更新任务完成状态”的循环,直至链下计算合约中任务实例中所有子任务的任务完成状态均为已完成的情况下,确定该任务实例对应的链下计算任务已经执行完成。
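为便于理解上述“创建任务实例→生成子任务事件→节点设备执行子任务并回传结果→更新任务完成状态”的循环,下面给出一个极简的示意性草图(使用Python模拟,仅作说明:类名、方法名以及用字典描述子任务依赖关系的方式均为本文假设,并非本说明书限定的合约实现,任务事件以打印模拟):

```python
# 示意性草图:链下计算合约维护任务实例的任务完成状态,
# 并在子任务的执行条件满足时生成对应的任务事件(此处以打印模拟)。
class OffChainComputeContract:
    def __init__(self, subtask_deps):
        # subtask_deps: {子任务ID: [前置子任务ID列表]},由合约代码预先定义
        self.subtask_deps = subtask_deps
        self.instances = {}   # 任务实例ID -> {"done": {...}, "emitted": set()}

    def create_task_instance(self, instance_id):
        # 响应任务创建交易:创建任务实例,并触发无前置依赖的第一批子任务
        self.instances[instance_id] = {
            "done": {t: False for t in self.subtask_deps},
            "emitted": set(),
        }
        self._emit_ready_events(instance_id)

    def on_result_returned(self, instance_id, subtask_id, success):
        # 响应结果返回交易:更新任务完成状态,再按依赖顺序触发下一批子任务
        if success:
            self.instances[instance_id]["done"][subtask_id] = True
            self._emit_ready_events(instance_id)

    def _emit_ready_events(self, instance_id):
        inst = self.instances[instance_id]
        for task, deps in self.subtask_deps.items():
            ready = not inst["done"][task] and all(inst["done"][d] for d in deps)
            if ready and task not in inst["emitted"]:
                inst["emitted"].add(task)
                print(f"EVENT instance={instance_id} subtask={task}")  # 模拟任务事件

# 用法示例:t2 依赖 t1,t3 依赖 t1 与 t2
contract = OffChainComputeContract({"t1": [], "t2": ["t1"], "t3": ["t1", "t2"]})
contract.create_task_instance("inst-1")            # 生成 t1 的任务事件
contract.on_result_returned("inst-1", "t1", True)  # 生成 t2 的任务事件
contract.on_result_returned("inst-1", "t2", True)  # 生成 t3 的任务事件
```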
不难发现,链下计算合约在链下计算任务的执行过程中所执行的任务仅包括创建任务实例、接收子任务结果、子任务调度与子任务下发这类调度性任务,实际上并没有真正执行链下计算任务所定义和要求执行的如数据计算、数据转移和数据存储等实际任务,而这些大量消耗资源的任务被调度至各节点设备所对应的链下进行执行,从而通过事件监听机制以及交易回传机制实现了一种基于区块链的分布式计算,使链下计算任务被区块链上的链下计算合约所锚定,在确保任务执行全流程可追踪的前提下充分利用链下资源,同时使得不同节点设备之间依托于区块链实现可信的信息交互与协作计算,另外由于链下计算任务是以合约形式定义且链下计算任务的设计并不受到链上资源的掣肘,这意味着可以通过设计不同的链下计算合约以满足不同的实际需求,通过链下资源扩展了链上协作方式。
在本说明书实施例中,所述链下计算合约维护有链下计算任务对应的任务完成状态,所述任务完成状态用于描述所述链下计算任务包含的各子任务的完成状态;在第一计算任务属于所述链下计算任务的子任务的情况下,所述监听所述链下计算合约生成的针对第一计算任务的任务事件,包括:监听所述链下计算合约在所述任务完成状态满足第一计算任务对应的执行条件的情况下生成的针对第一计算任务的所述任务事件。在本说明书实施例中,链下计算任务在链下计算合约上表现为对应的任务实例,其任务完成状态维护在链下计算合约的相应任务实例中,具体表现为该任务实例中维护有各子任务的完 成状态。由于链下计算任务包含的各子任务的执行依赖顺序已经预先定义,这意味着每个子任务的执行条件也已确定,因此链下计算合约可以依据链下计算任务中包含的各子任务的完成状态来进一步确定接下来所需执行的第一计算任务,从而发起针对第一计算任务的任务事件。进一步的,还包括:在第一计算任务执行完毕的情况下,通过第一区块链节点向所述链下计算合约发起包含第一计算任务对应的执行结果的结果返回交易,以更新所述链下计算合约维护的链下计算任务对应的任务完成状态。如前所述,在节点设备通过调用资源执行第一计算任务并执行完毕的情况下,会通过发起结果返回交易来更新链下计算合约维护的链下计算任务的任务完成状态,从而使得链下计算合约可以进一步根据链下计算任务中各子任务的执行依赖顺序确定接下来所应该执行的下一子任务,并生成针对下一子任务的任务事件。在本说明书实施例中,监听链下计算合约生成的任务事件、向链下计算合约发起结果返回交易的实体具体为第一节点设备上部署的调度引擎。
在本说明书实施例中,节点设备监听到的所述链下计算合约生成的针对第一计算任务的任务事件中记录有第一计算任务的参与方节点的描述信息。任务事件包括第一计算任务的参与方节点的描述信息,是指第一计算任务规定了其所需涉及参与的区块链节点的身份信息。第一节点设备可以在确定所述参与方节点的描述信息中包含自身部署的第一区块链节点的标识信息的情况下,判断自身所部署的第一区块链节点属于第一计算任务的参与方,于是第一节点设备就需要响应并执行第一计算任务;而如果所述参与方节点的描述信息中不包含第一区块链节点的标识信息,则第一节点设备可以判断自身所部署的第一区块链节点不属于第一计算任务的参与方,于是第一节点设备将不响应执行第一计算任务。另外,任务事件中还会记录链下计算任务与第一计算任务的任务标识,从而对不同的任务和子任务进行区分,这主要是方便后续任一节点设备在对第一计算任务执行完毕并回传结果返回交易时能够正确标识是针对链下计算任务中第一计算任务的结果,使得链下计算合约能够通过结果返回交易正确更新对应链下计算任务的任务实例中第一计算任务的完成状态,以应对同一个任务包含多个子任务以及同一个链下计算合约同时创建多个链下计算任务的任务实例的情况。当然第一计算任务还记录有自身所需执行的计算和数据转移等操作,且指定了所需数据的来源,这些信息是用于告知各节点设备第一计算任务的任务类型及其实现方式,从而指导节点设备在确定第一计算任务的任务类型及其实现方式对应可调用资源后,按照第一计算任务的预期执行第一计算任务。
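作为一个带有假设的示意,下面的草图展示调度引擎在监听到任务事件后,如何依据事件中记录的参与方节点描述信息判断是否响应,并按任务类型调用本地的标准计算引擎(事件字段名 participants、task_type 等均为假设,并非本说明书限定的数据结构):

```python
# 示意性草图:根据任务事件中的参与方节点描述信息决定是否执行第一计算任务。
def handle_task_event(event, my_node_id, engines):
    if my_node_id not in event["participants"]:
        return None                        # 第一区块链节点不属于参与方节点,不响应
    execute = engines[event["task_type"]]  # 按任务类型匹配可调用的第一标准计算引擎
    result = execute(event["task_id"], event["subtask_id"], event["params"])
    # 返回值随后由调度引擎携带在结果返回交易中发回链下计算合约
    return {"task_id": event["task_id"],
            "subtask_id": event["subtask_id"],
            "result": result}

# 用法示例:以一个简单函数模拟标准计算引擎
engines = {"sum": lambda task_id, subtask_id, params: sum(params["values"])}
event = {"participants": ["node-1"], "task_type": "sum", "task_id": "T1",
         "subtask_id": "t1", "params": {"values": [1, 2, 3]}}
print(handle_task_event(event, "node-1", engines))
```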
如前所述,所述任务完成状态由所述链下计算合约响应于所述链下计算任务对应的交易触发更新,其中,所述链下计算任务对应的交易包括所述链下计算任务对应的任务创建交易,或者任一节点设备在对所述各子任务中任一子任务执行完毕的情况下发起的结果返回交易。
在本说明书实施例中,所述链下计算合约维护有一个或多个链下计算任务分别对应的任务完成状态。通常情况下,一个链下计算合约只会定义一种类型的链下计算任务,但可以创建该链下计算任务对应的多个任务实例,每个任务实例都会记录有该任务实例对应的任务完成状态。因此,链下计算合约上维护的多个任务实例可以是不同用户通过向链下计算合约分别发起任务创建交易而触发创建的,也可以是由同一个用户通过多次发起任务创建交易而触发创建的,但这些任务实例都具有相同的执行逻辑,即链下计算合约维护的各任务的任务类型相同。
S104:在第一区块链节点属于第一计算任务对应的参与方节点的情况下,调用第一节点设备上部署的第一标准计算引擎执行第一计算任务。其中,第一标准计算引擎用于:将第一计算任务对应的标准任务请求发送至转换模块,由所述转换模块将所述标准任务请求转换为第一异构计算引擎可识别的异构任务请求,并接收所述转换模块返回的标准执行结果,所述标准执行结果由所述转换模块对第一异构计算引擎基于所述异构任务请求所生成的异构执行结果进行转换得到。
在本说明书实施例中,第一节点设备在判断出自身部署的第一区块链节点属于第一计算任务对应的参与方节点的情况下,会触发执行第一计算任务,此时第一节点设备会在本地搜索能够支持执行第一计算任务的可调用资源,由于第一节点设备部署的第一标准计算引擎可以支持执行任务事件中记录的第一计算任务的任务类型,因此,第一节点设备可以调用第一标准计算引擎执行第一计算任务。在本说明书实施例中,在本地搜索可调用资源以及调用链下计算引擎执行计算任务的实体具体为第一节点设备上部署的调度引擎。然而需要说明的是,标准计算引擎虽然对第一节点设备宣称自身拥有处理第一计算任务的能力,但其自身并不直接参与第一计算任务的如数据计算等实际执行过程,而仅仅是作为第一节点设备上部署的调度引擎与实际执行第一计算任务的第一异构计算引擎之间的中介,用于传达和转换任务请求与任务的执行结果。
在本说明书实施例中,作为第一节点设备所能够直接调用的链下计算引擎,需要遵从一套规范化的开发范式才能支持针对链下计算合约定义的链下计算任务的执行,即只有按照特定开发范式得到的链下计算引擎才能够支持执行第一计算任务,上述开发范式主要体现为对编写链下计算引擎的编程语言的限定和/或对所支持的网络传输层协议的限定,而这种开发范式的限制是由这套用于执行链下计算合约所定义的链下计算任务的、包含有部署了区块链节点的各节点设备的调度引擎、计算引擎、数据引擎等可调用资源的链上-链下计算系统仅安装有支持特定编程语言的SDK(Software Development Kit,软件开发工具包)以及特定的网络传输层协议所导致的。因此,如果需要做到兼容其他编程语言的链下计算引擎能够支持执行链下计算任务,常规的做法有两种:一种是将使用其他开发范式编写的计算引擎按照链上-链下计算系统所要求的编程语言重新编写,另一种是为链上-链下计算系统安装支持更多编程语言的SDK,前者相当于对已有的计算引擎进行二次开发,对程序改造的程度大,程序移植成本高,而后者则是需要对链上-链下计算系统进行底层架构的升级,其同样面临着开发成本较大的问题。
本说明书实施例所涉及的异构计算引擎是指不符合上述开发范式、不能被第一节点设备直接调用的一种链下计算引擎,而本说明书实施例所涉及的标准计算引擎则是指符合上述开发范式、能够被第一节点设备直接调用的一种链下计算引擎,在执行层面体现为:调度引擎在调用第一标准计算引擎时发送至第一标准计算引擎的为第一计算任务的标准任务请求,而该标准任务请求可以被第一标准计算引擎所识别和响应,然而却无法被第一异构计算引擎所识别和响应;对应的,第一异构计算引擎其自身具有一套调用规范,其可以识别第一异构计算引擎所定义的异构任务请求,同时响应于异构任务请求执行对应的计算任务,并在执行完计算任务后生成第一异构计算引擎所定义的异构执行结果,但该异构执行结果无法直接被调度引擎所识别。
由于第一异构计算引擎与调度引擎之间的交互信息无法被彼此识别,因此无法直接将第一异构计算引擎下挂到调度引擎从而由调度引擎直接调用,即无法在不对第一异构计算引擎或对链上-链下计算系统进行大量改造的情况下,将其引入链上-链下计算系统从而参与执行链下计算任务。于是,本说明书实施例通过在节点设备额外设置第一标准计算引擎作为连接现有的第一异构计算引擎的中介,一方面使调度引擎可以像调用符合相关开发范式的链下计算引擎一样无障碍地进行调用,另一方面也可以使不符合相关开发范式的第一异构计算引擎在不做任何改造或仅做少量改造的情况下接入链上-链下计算系统,减小了开发成本。
如前所述,第一节点设备调用第一标准计算引擎执行第一计算任务,具体是指第一节点设备中部署的调度引擎将自身生成的第一计算任务对应的标准任务请求发送至第一标准计算引擎。每一个第一标准计算引擎都唯一对应于一个第一异构计算引擎,在第一标准计算引擎接收到该标准任务请求后,会进一步将其通过转换模块进行转换,将该标准任务请求转换为其所对应的第一异构计算引擎所能识别的异构任务请求,并使得该异构任务请求被其对应的第一异构计算引擎所接收。在第一异构计算引擎接收到异构任务请求后会执行第一计算任务,即按照异构任务请求中所携带的计算任务类型、输入数据等参数进行对应的计算操作。最终,在第一异构计算引擎对第一计算任务执行完毕的情况下,可以将生成的第一计算任务对应的异构执行结果通过转换模块转换为第一标准计算引擎以及调度引擎能够识别的标准执行结果,以使第一标准计算引擎最终将从转换模块接收到的标准执行结果回传至调度引擎,并由调度引擎将其携带在第一计算任务对应的结果返回交易后向链下计算合约发起该结果返回交易。
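下面给出转换模块在标准任务请求与异构任务请求之间进行双向转换的一个极简草图(仅为示意:字段映射方式以及 hetero_engine.submit 接口均为假设,实际依赖具体的第一异构计算引擎的调用规范):

```python
# 示意性草图:转换模块把标准任务请求转换为异构任务请求,
# 并把异构执行结果转换回调度引擎可识别的标准执行结果。
class ConversionModule:
    def __init__(self, hetero_engine):
        self.hetero_engine = hetero_engine   # 第一异构计算引擎的客户端对象(假设)

    def handle(self, std_request):
        # 标准任务请求 -> 第一异构计算引擎可识别的异构任务请求(字段映射为假设)
        hetero_request = {
            "job": std_request["task_type"],
            "args": std_request["params"],
            "inputs": std_request.get("input_data", {}),
        }
        hetero_result = self.hetero_engine.submit(hetero_request)
        # 异构执行结果 -> 调度引擎可识别的标准执行结果
        return {
            "task_id": std_request["task_id"],
            "status": "SUCCESS" if hetero_result.get("ok") else "FAILED",
            "output": hetero_result.get("data"),
        }
```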
在本说明书实施例中,第一异构计算引擎部署于第一节点设备,或者,第一异构计算引擎未部署于第一节点设备且第一异构计算引擎与第一标准计算引擎之间建立有网络连接。第一异构计算引擎可以部署在第一节点设备,此时第一标准计算引擎与第一异构计算引擎之间建立有本地连接,通过本地调用进行交互;或者,第一异构计算引擎也可以部署于第一节点设备之外,从而作为外部设备中的异构计算引擎与第一节点设备上部署的标准计算引擎建立有网络连接。所述网络连接可以采用对等体架构或者客户端-服务器的架构进行建立,其中,在采用客户端-服务器架构建立网络连接的情况下,第一异构计算引擎可以作为客户端而第一标准计算引擎作为服务器,或者第一异构计算引擎也可以作为服务器而第一标准计算引擎作为客户端。
进一步的,所述网络连接由第一标准计算引擎与第一异构计算引擎通过第一异构计算引擎支持的网络协议所建立。在本说明书实施例中,异构计算引擎作为现有的计算引擎,其所支持的网络传输层协议已经确定,且第一节点设备中的调度引擎所支持的特定网络传输层协议也已经确定,这意味着即使异构计算引擎其编程语言符合链上-链下计算系统的开发范式,但其对应的网络传输层协议与调度引擎所支持的特定网络传输层协议不一致,也将导致异构计算引擎无法直接接入调度引擎从而无法参与执行链下计算任务。因此,本说明书实施例通过引入标准计算引擎作为中介来做网络传输层协议的转化适配,即通过在第一标准计算引擎与调度引擎之间建立以调度引擎所支持的第一传输层协议的网络连接,而在第一标准计算引擎与第一异构计算引擎之间建立第一异构计算引擎所支持的第二传输层协议,从而突破调度引擎与第一异构计算引擎之间因网络协议不一致所导致的网络传输障碍,通过一个同时支持多个网络协议的标准计算引擎作为连接调度引擎与第一异构计算引擎桥梁,实现在不做任何网络传输方面的改造的情况下,将网络协议异构的异构计算引擎接入链上-链下计算系统。
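作为示意,下面的草图展示第一标准计算引擎如何同时支持两种网络传输层协议并在其间桥接:对调度引擎一侧以HTTP接收标准任务请求,对第一异构计算引擎一侧以原始TCP加JSON转发(协议选择、地址与端口均为假设,且此处省略了请求内容本身的字段转换,仅演示协议桥接):

```python
import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

HETERO_ADDR = ("127.0.0.1", 9000)   # 假设:第一异构计算引擎监听的地址与端口

class StandardEngineBridge(BaseHTTPRequestHandler):
    """示意:对调度引擎一侧使用HTTP(假设为第一传输层协议),
    对第一异构计算引擎一侧使用TCP+JSON(假设为第二传输层协议)。"""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        std_request = json.loads(self.rfile.read(length))
        # 通过第二传输层协议把任务请求转发给第一异构计算引擎,并读取其结果
        with socket.create_connection(HETERO_ADDR) as conn:
            conn.sendall(json.dumps(std_request).encode() + b"\n")
            hetero_result = json.loads(conn.makefile().readline())
        body = json.dumps(hetero_result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # 假设调度引擎通过HTTP调用本地8080端口上的第一标准计算引擎
    HTTPServer(("127.0.0.1", 8080), StandardEngineBridge).serve_forever()
```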
在本说明书实施例中,所述转换模块为第一标准计算引擎或第一异构计算引擎的接口程序。如图2a或图2b所示,图2a和图2b均是一示例性实施例提供的一种数据处理系统的架构示意图,其描述了在第一异构计算引擎未部署于第一节点设备的情况下的一部分链上-链下计算系统(局限于第一节点设备)的基本架构,可以发现,在图2a中转换模块被作为第一标准计算引擎的接口程序而被部署在第一节点设备上,而在图2b中转换模块则被作为第一异构计算引擎的接口程序而被部署在第一节点设备外,并与第一异构计算引擎一同部署在外部设备上。
在本说明书实施例中,第一标准计算引擎还用于:从第一节点设备获取第一计算任务依赖的输入数据,并将该输入数据携带在所述标准任务请求中。由于第一异构计算引擎无法直接接入链上-链下计算系统,而是需要通过标准计算引擎作为中介,因此第一异构计算引擎缺少直接调用链上-链下计算系统的能力,例如第一异构计算引擎可能在执行第一计算任务时需要调用第一节点设备上的调度引擎去访问区块链网络的一些链上信息(区块信息、合约状态等),但由于其不知晓调度引擎的网络地址以及调用规则(包括请求结构体的定义和响应结构体的定义,这些未知会导致异构计算引擎与调度引擎即使可以实现交互其彼此传输的交互信息也无法被彼此识别),因此无法实现该需求。而为了避免上述现象,可以使第一标准计算引擎在接收到调度引擎的调用后,首先将第一异构计算引擎在未来执行第一计算任务时所依赖的一些链上-链下计算系统中的数据进行事先采集,并在将这些数据作为第一计算任务依赖的输入数据携带在标准任务请求中后,再把携带有输入数据的标准任务请求发送给转换模块,从而使第一异构计算引擎直接获取这些输入数据,而不需要在执行第一计算任务的过程中临时从链上-链下计算系统中获取。
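下面是一个示意性草图,展示第一标准计算引擎在收到调度引擎的调用后,先行采集第一计算任务依赖的输入数据并携带在标准任务请求中(chain_client 对象及其方法名、事件字段名均为假设):

```python
# 示意性草图:预先采集输入数据,使异构计算引擎无需再临时访问链上-链下计算系统。
def build_standard_request(task_event, chain_client):
    input_data = {
        "block_height": chain_client.get_block_height(),
        "contract_state": chain_client.get_contract_state(task_event["contract_addr"]),
    }
    return {
        "task_id": task_event["task_id"],
        "task_type": task_event["task_type"],
        "params": task_event["params"],
        "input_data": input_data,   # 携带在标准任务请求中,随请求一并交给转换模块
    }
```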
在本说明书实施例中,第一异构计算引擎在第一计算任务执行完毕的情况下触发将所述异构执行结果发送至所述转换模块,和/或,第一异构计算引擎在接收到针对第一计算任务的结果查询请求的情况下触发将所述异构执行结果发送至所述转换模块。在本说明书实施例中,第一异构计算引擎可以在对第一计算任务执行完毕的时刻将生成的异构执行结果发送至转换模块,并由转换模块将其进行转换得到的标准执行结果返回给第一标准计算引擎;第一异构计算引擎也可以在对第一计算任务执行完毕后在第一异构计算引擎本地存储相应的异构执行结果,等待第一标准计算引擎给它发送针对第一计算任务的结果查询请求后,触发将所述异构执行结果发送至所述转换模块,并由转换模块将其进行转换得到的标准执行结果返回给第一标准计算引擎。本说明书实施例提供了包含主动推送结果和等待查询结果在内的至少两种获取第一计算任务对应的异构执行结果的方式,以适配于多种不同的应用场景。
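下面用一个极简草图示意上述两种获取异构执行结果的方式:执行完毕即主动推送,或先在本地暂存、等待结果查询请求后再返回(接口与字段均为假设):

```python
import queue

class ResultChannel:
    """示意:异构执行结果既可由异构计算引擎执行完毕后主动推送,
    也可暂存在本地,待收到结果查询请求后再返回。"""
    def __init__(self):
        self._results = {}
        self._pushed = queue.Queue()

    def push(self, task_id, hetero_result):   # 方式一:执行完毕即推送给转换模块
        self._pushed.put((task_id, hetero_result))

    def store(self, task_id, hetero_result):  # 方式二:先在异构计算引擎本地暂存
        self._results[task_id] = hetero_result

    def query(self, task_id):                 # 收到结果查询请求后再取回
        return self._results.get(task_id)

# 用法示例
ch = ResultChannel()
ch.store("T1", {"ok": True, "data": 6})
print(ch.query("T1"))
```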
在本说明书实施例中,通过在节点设备上部署符合支持执行链下计算任务执行的开发范式的标准计算引擎,同时引入转换模块作为标准计算引擎与异构计算引擎之间的转换媒介,从而可以在不进行改造或仅进行少量改造的情况下,使不符合相关开发范式的异构计算引擎能够支持执行链下计算任务,从而减少了算法移植成本,同时扩展了节点设备有限的计算引擎资源,以支持实现更多类型的链下计算任务。
在第一计算任务对应的参与方节点包含多个的情况下,会涉及多个节点设备共同参与执行该第一计算任务,具体而言是指分别与多个节点设备通过网络连接相连或直接部署在多个节点设备的多个计算引擎之间的数据交互。在本说明书实施例中,作为与第一节点设备建立网络连接的第一异构计算引擎同样有可能在执行第一计算任务的过程中与其他计算引擎进行数据交互,且第一异构计算引擎可以通过多种方式来实现这种计算引擎之间的交互。
在一个实施例中,第一异构计算引擎用于:在响应于所述异构任务请求而执行第一计算任务的过程中,获取其他计算引擎发送的第一数据,以用于在执行第一计算任务的过程中使用;和/或,将响应于所述异构任务请求而执行第一计算任务的过程中生成的第二数据发送至所述其他计算引擎,以使所述其他计算引擎在执行第一计算任务的过程中使用第二数据。在本实施例中,第一异构计算引擎通过异构任务请求可以知晓有哪些计算引擎参与第一计算任务,因此第一异构计算引擎一方面可以在执行第一计算任务的过程中,接收同样处于执行第一计算任务过程中的其他计算引擎的第一数据,而第一数据将由第一异构计算引擎在后续执行第一计算任务的过程中被使用,另一方面也可以将生成的第二数据发送给第一计算任务中定义好的那些需要获取该第二数据的正在执行第一计算任务的其他计算引擎,以使第二数据在所述其他计算引擎后续执行第一计算任务的过程中被使用。
如图3所示,图3是一示例性实施例提供的一种计算引擎交互的场景示意图,假设第一计算任务对应的参与方节点同时包含第一区块链节点、第二区块链节点和第三区块链节点,而具体涉及参与执行第一计算任务的计算引擎包括第一节点设备上部署的标准计算引擎A和链下计算引擎A,部署于第二节点设备的链下计算引擎B,部署于第三节点设备的标准计算引擎B,以及与标准计算引擎A建立有网络连接的异构计算引擎A,与标准计算引擎B建立有网络连接的异构计算引擎B。假设在第一计算任务的定义中,包含异构计算引擎A从链下计算引擎B处获取第一数据以及将自身生成的第二数据发送至异构计算引擎B的数据交互过程,于是,对于异构计算引擎A而言,其可以通过与链下计算引擎B之间建立的网络连接获取链下计算引擎B提供的第一数据,以及将自身生成的第二数据通过与异构计算引擎B之间建立的网络连接发送至异构计算引擎B,从而完成第一计算任务所定义的计算引擎之间的数据交互过程,以在多个计算引擎的协作交互的帮助下最终完成第一计算任务。
在另一个实施例中,第一异构计算引擎用于:在响应于所述异构任务请求而执行第一计算任务的过程中,获取第一标准计算引擎从其他计算引擎接收到的第一数据,以用于在执行第一计算任务的过程中使用;和/或,将响应于所述异构任务请求而执行第一计算任务的过程中生成的第二数据通过第一标准计算引擎发送至所述其他计算引擎,以使所述其他计算引擎在执行第一计算任务的过程中使用第二数据。与前述实施例所不同的在于,本实施例要求异构计算引擎除了能与其对应的标准计算引擎之间可以建立网络连接以外,不能与参与执行第一计算任务的其他计算引擎建立网络连接,这是由于异构计算引擎与其他符合链上-链下计算系统开发范式的其他链下计算引擎之间有可能无法正常建立网络连接(例如由于网络协议不一致),即使能够建立网络连接也可能无法识别彼此的交互数据(例如由于编程语言不一致所带来的结构体定义的不一致),因此为了确保交互的有效性需要尽可能避免通过异构计算引擎与其他计算引擎之间直接建立的网络连接进行数据交互。在本实施例中,通过使所有参与执行第一计算任务且符合链上-链下计算系统开发范式的链下计算引擎之间建立网络连接,在异构计算引擎具有与其他计算引擎进行数据交互的需求时,通过对应的标准计算引擎作为中介来实现对交互数据的转化和转发,从而确保异构计算引擎与其他计算引擎之间交互的有效性。
如图3所示,假设异构计算引擎A并未与异构计算引擎B、链下计算引擎A和链下计算引擎B之间建立有直接的网络连接,且同样假设在第一计算任务的定义中,包含异构计算引擎A从链下计算引擎B处获取第一数据以及将自身生成的第二数据发送至异构计算引擎B的数据交互过程。于是,对于链下计算引擎B而言,其可以将执行第一计算任务时生成的第一数据发送至标准计算引擎A;对于异构计算引擎A而言,其一方面接收标准计算引擎A从链下计算引擎B处获取并通过转换模块进行转化后得到的异构计算引擎A可识别的第一数据,另一方面将自身执行第一计算任务过程中生成的第二数据通过转换模块转换为链上-链下计算系统所能识别的第二数据并发送至标准计算引擎A,然后由标准计算引擎A将链上-链下计算系统所能识别的第二数据转发至标准计算引擎B,以由标准计算引擎B将链上-链下计算系统所能识别的第二数据通过转换模块,得到异构计算引擎B能识别的第二数据并最终发送至异构计算引擎B,从而完成第一计算任务所定义的计算引擎之间的数据交互过程,以在多个计算引擎的协作交互的帮助下最终完成第一计算任务。
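下面给出一个示意性草图,展示当异构计算引擎A仅与标准计算引擎A建立连接时,第一数据与第二数据如何经由标准计算引擎A及其转换模块中转(各回调函数与转换函数均为假设,仅用于说明数据流向):

```python
# 示意性草图:异构计算引擎A不与其他计算引擎直连时,由标准计算引擎A负责中转。
def relay_first_data(first_data_from_b, to_hetero, hetero_feed):
    # 第一数据:链下计算引擎B -> 标准计算引擎A -> 转换模块 -> 异构计算引擎A
    hetero_feed(to_hetero(first_data_from_b))

def relay_second_data(hetero_output, to_standard, send_to_standard_engine_b):
    # 第二数据:异构计算引擎A -> 转换模块 -> 标准计算引擎A -> 标准计算引擎B
    send_to_standard_engine_b(to_standard(hetero_output))

# 用法示例:用简单函数模拟各个环节
relay_first_data({"v": 1}, to_hetero=lambda d: {"payload": d},
                 hetero_feed=lambda d: print("feed A:", d))
relay_second_data({"r": 2}, to_standard=lambda d: {"result": d},
                  send_to_standard_engine_b=lambda d: print("to B:", d))
```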
本说明书实施例所涉及的所述其他计算引擎包括:任一节点设备上部署的除第一标准计算引擎外的链下计算引擎,或者与所述任一节点设备上部署的标准计算引擎建立有网络连接的除第一异构计算引擎外的异构计算引擎,其中,所述任一节点设备上部署的所述区块链网络中的区块链节点属于所述参与方节点。
图4是一示例性实施例提供的一种设备的示意结构图。请参考图4,在硬件层面,该设备包括处理器402、内部总线404、网络接口406、内存408以及非易失性存储器410,当然还可能包括其他功能所需要的硬件。本说明书一个或多个实施例可以基于软件方式来实现,比如由处理器402从非易失性存储器410中读取对应的计算机程序到内存408中然后运行。当然,除了软件实现方式之外,本说明书一个或多个实施例并不排除其他实现方式,比如逻辑器件抑或软硬件结合的方式等等,也就是说以下处理流程的执行主体并不限定于各个逻辑单元,也可以是硬件或逻辑器件。
如图5所示,图5是本说明书根据一示例性实施例提供的一种数据处理装置的框图,该装置可以应用于如图4所示的设备中,以实现本说明书的技术方案;所述装置应用于部署有第一区块链节点的第一节点设备,第一区块链节点所属的区块链网络部署有链下计算合约。所述装置包括:事件监听单元501,用于监听所述链下计算合约生成的针对第一计算任务的任务事件;任务执行单元502,用于在第一区块链节点属于第一计算任务对应的参与方节点的情况下,调用第一节点设备上部署的第一标准计算引擎执行第一计算任务。其中,第一标准计算引擎用于:将第一计算任务对应的标准任务请求发送至转换模块,由所述转换模块将所述标准任务请求转换为第一异构计算引擎可识别的异构任务请求,并接收所述转换模块返回的标准执行结果,所述标准执行结果由所述转换模块对第一异构计算引擎基于所述异构任务请求所生成的异构执行结果进行转换得到。
可选的,所述转换模块为第一标准计算引擎或第一异构计算引擎的接口程序。
可选的,第一标准计算引擎还用于:从第一节点设备获取第一计算任务依赖的输入数据,并将该输入数据携带在所述标准任务请求中。
可选的,第一异构计算引擎在第一计算任务执行完毕的情况下触发将所述异构执行结果发送至所述转换模块,和/或,第一异构计算引擎在接收到针对第一计算任务的结果查询请求的情况下触发将所述异构执行结果发送至所述转换模块。
可选的,第一异构计算引擎部署于第一节点设备,或者,第一异构计算引擎未部署于第一节点设备且第一异构计算引擎与第一标准计算引擎之间建立有网络连接。
可选的,所述网络连接由第一标准计算引擎与第一异构计算引擎通过第一异构计算引擎支持的网络协议所建立。
可选的,第一异构计算引擎用于:在响应于所述异构任务请求而执行第一计算任务的过程中,获取其他计算引擎发送的第一数据,以用于在执行第一计算任务的过程中使用;和/或,将响应于所述异构任务请求而执行第一计算任务的过程中生成的第二数据发送至所述其他计算引擎,以使所述其他计算引擎在执行第一计算任务的过程中使用第二数据。
可选的,第一异构计算引擎用于:在响应于所述异构任务请求而执行第一计算任务的过程中,获取第一标准计算引擎从其他计算引擎接收到的第一数据,以用于在执行第一计算任务的过程中使用;和/或,将响应于所述异构任务请求而执行第一计算任务的过程中生成的第二数据通过第一标准计算引擎发送至所述其他计算引擎,以使所述其他计算引擎在执行第一计算任务的过程中使用第二数据。
可选的,所述其他计算引擎包括:任一节点设备上部署的除第一标准计算引擎外的链下计算引擎,或者与所述任一节点设备上部署的标准计算引擎建立有网络连接的除第一异构计算引擎外的异构计算引擎,其中,所述任一节点设备上部署的所述区块链网络中的区块链节点属于所述参与方节点。
可选的,所述链下计算合约维护有链下计算任务对应的任务完成状态,所述任务完成状态用于描述所述链下计算任务包含的各子任务的完成状态;在第一计算任务属于所述链下计算任务的子任务的情况下,所述事件监听单元501具体用于:监听所述链下计算合约在所述任务完成状态满足第一计算任务对应的执行条件的情况下生成的针对第一计算任务的所述任务事件。
可选的,所述任务完成状态由所述链下计算合约响应于所述链下计算任务对应的交易触发更新,其中,所述链下计算任务对应的交易包括所述链下计算任务对应的任务创建交易,或者任一节点设备在对所述各子任务中任一子任务执行完毕的情况下发起的结果返回交易。
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。 然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为服务器系统。当然,本发明不排除随着未来计算机技术的发展,实现上述实施例功能的计算机例如可以为个人计算机、膝上型计算机、车载人机交互设备、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
虽然本说明书一个或多个实施例提供了如实施例或流程图所述的方法操作步骤,但基于常规或者无创造性的手段可以包括更多或者更少的操作步骤。实施例中列举的步骤顺序仅仅为众多步骤执行顺序中的一种方式,不代表唯一的执行顺序。在实际中的装置或终端产品执行时,可以按照实施例或者附图所示的方法顺序执行或者并行执行(例如并行处理器或者多线程处理的环境,甚至为分布式数据处理环境)。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、产品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、产品或者设备所固有的要素。在没有更多限制的情况下,并不排除在包括所述要素的过程、方法、产品或者设备中还存在另外的相同或等同要素。例如若使用到第一,第二等词语用来表示名称,而并不表示任何特定的顺序。
为了描述的方便,描述以上装置时以功能分为各种模块分别描述。当然,在实施本说明书一个或多个实施例时可以把各模块的功能在同一个或多个软件和/或硬件中实现,也可以将实现同一功能的模块由多个子模块或子单元的组合实现等。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
本发明是参照根据本发明实施例的方法、装置(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储、石墨烯存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
本领域技术人员应明白,本说明书一个或多个实施例可提供为方法、系统或计算机程序产品。因此,本说明书一个或多个实施例可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本说明书一个或多个实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本说明书一个或多个实施例可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本说明书一个或多个实施例,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本说明书的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
以上所述仅为本说明书一个或多个实施例的实施例而已,并不用于限制本说明书一个或多个实施例。对于本领域技术人员来说,本说明书一个或多个实施例可以有各种更改和变化。凡在本说明书的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在权利要求范围之内。

Claims (14)

  1. 一种数据处理方法,应用于部署有第一区块链节点的第一节点设备,第一区块链节点所属的区块链网络部署有链下计算合约;所述方法包括:
    监听所述链下计算合约生成的针对第一计算任务的任务事件;
    在第一区块链节点属于第一计算任务对应的参与方节点的情况下,调用第一节点设备上部署的第一标准计算引擎执行第一计算任务,第一标准计算引擎用于:将第一计算任务对应的标准任务请求发送至转换模块,由所述转换模块将所述标准任务请求转换为第一异构计算引擎可识别的异构任务请求,并接收所述转换模块返回的标准执行结果,所述标准执行结果由所述转换模块对第一异构计算引擎基于所述异构任务请求所生成的异构执行结果进行转换得到。
  2. 根据权利要求1所述的方法,所述转换模块为第一标准计算引擎或第一异构计算引擎的接口程序。
  3. 根据权利要求1所述的方法,第一标准计算引擎还用于:从第一节点设备获取第一计算任务依赖的输入数据,并将该输入数据携带在所述标准任务请求中。
  4. 根据权利要求1所述的方法,第一异构计算引擎在第一计算任务执行完毕的情况下触发将所述异构执行结果发送至所述转换模块,和/或,
    第一异构计算引擎在接收到针对第一计算任务的结果查询请求的情况下触发将所述异构执行结果发送至所述转换模块。
  5. 根据权利要求1所述的方法,第一异构计算引擎部署于第一节点设备,或者,第一异构计算引擎未部署于第一节点设备且第一异构计算引擎与第一标准计算引擎之间建立有网络连接。
  6. 根据权利要求5所述的方法,所述网络连接由第一标准计算引擎与第一异构计算引擎通过第一异构计算引擎支持的网络协议所建立。
  7. 根据权利要求1所述的方法,第一异构计算引擎用于:在响应于所述异构任务请求而执行第一计算任务的过程中,获取其他计算引擎发送的第一数据,以用于在执行第一计算任务的过程中使用;和/或,
    将响应于所述异构任务请求而执行第一计算任务的过程中生成的第二数据发送至所述其他计算引擎,以使所述其他计算引擎在执行第一计算任务的过程中使用第二数据。
  8. 根据权利要求1所述的方法,第一异构计算引擎用于:
    在响应于所述异构任务请求而执行第一计算任务的过程中,获取第一标准计算引擎从其他计算引擎接收到的第一数据,以用于在执行第一计算任务的过程中使用;和/或,
    将响应于所述异构任务请求而执行第一计算任务的过程中生成的第二数据通过第一标准计算引擎发送至所述其他计算引擎,以使所述其他计算引擎在执行第一计算任务的过程中使用第二数据。
  9. 根据权利要求7或8所述的方法,所述其他计算引擎包括:
    任一节点设备上部署的除第一标准计算引擎外的链下计算引擎,或者与所述任一节点设备上部署的标准计算引擎建立有网络连接的除第一异构计算引擎外的异构计算引擎,其中,所述任一节点设备上部署的所述区块链网络中的区块链节点属于所述参与方节点。
  10. 根据权利要求1所述的方法,所述链下计算合约维护有链下计算任务对应的任务完成状态,所述任务完成状态用于描述所述链下计算任务包含的各子任务的完成状态;在第一计算任务属于所述链下计算任务的子任务的情况下,所述监听所述链下计算合约生成的针对第一计算任务的任务事件,包括:
    监听所述链下计算合约在所述任务完成状态满足第一计算任务对应的执行条件的情况下生成的针对第一计算任务的所述任务事件。
  11. 根据权利要求10所述的方法,所述任务完成状态由所述链下计算合约响应于所述链下计算任务对应的交易触发更新,其中,所述链下计算任务对应的交易包括所述链下计算任务对应的任务创建交易,或者任一节点设备在对所述各子任务中任一子任务执行完毕的情况下发起的结果返回交易。
  12. 一种数据处理装置,应用于部署有第一区块链节点的第一节点设备,第一区块链节点所属的区块链网络部署有链下计算合约;所述装置包括:
    事件监听单元,用于监听所述链下计算合约生成的针对第一计算任务的任务事件;
    任务执行单元,用于在第一区块链节点属于第一计算任务对应的参与方节点的情况下,调用第一节点设备上部署的第一标准计算引擎执行第一计算任务,第一标准计算引擎用于:将第一计算任务对应的标准任务请求发送至转换模块,由所述转换模块将所述标准任务请求转换为第一异构计算引擎可识别的异构任务请求,并接收所述转换模块返回的标准执行结果,所述标准执行结果由所述转换模块对第一异构计算引擎基于所述异构任务请求所生成的异构执行结果进行转换得到。
  13. 一种电子设备,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器通过运行所述可执行指令以实现如权利要求1至11中任一项所述的方法。
  14. 一种计算机可读存储介质,其上存储有计算机指令,该指令被处理器执行时实现如权利要求1至11中任一项所述方法的步骤。
PCT/CN2022/135207 2022-03-31 2022-11-30 一种数据处理方法、装置、电子设备和存储介质 WO2023185044A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210343405.X 2022-03-31
CN202210343405.XA CN114820187A (zh) 2022-03-31 2022-03-31 一种数据处理方法、装置、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2023185044A1 true WO2023185044A1 (zh) 2023-10-05

Family

ID=82531821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/135207 WO2023185044A1 (zh) 2022-03-31 2022-11-30 一种数据处理方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN114820187A (zh)
WO (1) WO2023185044A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117348999A (zh) * 2023-12-06 2024-01-05 之江实验室 一种业务执行系统及业务执行方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820187A (zh) * 2022-03-31 2022-07-29 蚂蚁区块链科技(上海)有限公司 一种数据处理方法、装置、电子设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112540969A (zh) * 2020-11-26 2021-03-23 南京纯白矩阵科技有限公司 一种异构区块链间智能合约的数据迁移方法
WO2021184975A1 (zh) * 2020-03-18 2021-09-23 支付宝(杭州)信息技术有限公司 链上数据的链下隐私计算方法及装置
CN113496398A (zh) * 2020-03-19 2021-10-12 中移(上海)信息通信科技有限公司 基于智能合约的数据处理方法、装置、设备及介质
CN114820187A (zh) * 2022-03-31 2022-07-29 蚂蚁区块链科技(上海)有限公司 一种数据处理方法、装置、电子设备和存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117348999A (zh) * 2023-12-06 2024-01-05 之江实验室 一种业务执行系统及业务执行方法
CN117348999B (zh) * 2023-12-06 2024-02-23 之江实验室 一种业务执行系统及业务执行方法

Also Published As

Publication number Publication date
CN114820187A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
WO2023185044A1 (zh) 一种数据处理方法、装置、电子设备和存储介质
TWI696083B (zh) 一種基於區塊鏈的共識方法及裝置
CN107729139B (zh) 一种并发获取资源的方法和装置
US11522951B2 (en) Configuring service mesh networking resources for dynamically discovered peers or network functions
WO2019001074A1 (zh) 一种远程过程调用的方法、装置及计算机设备
TWI679581B (zh) 任務執行的方法及裝置
CN110764752B (zh) 实现Restful服务图形化服务编排的系统及其方法
JP2024512209A (ja) IoT機器に基づく情報処理方法、関連機器及び記憶媒体
CN111405130B (zh) 一种语音交互的系统及方法
WO2023231337A1 (zh) 在区块链中执行交易的方法、区块链的主节点和从节点
WO2023185054A1 (zh) 联盟链中部署链码的方法和系统
WO2022257247A1 (zh) 数据处理方法、装置及计算机可读存储介质
CN111200651A (zh) 定时调用微服务的方法、系统、设备和介质
WO2023185041A1 (zh) 一种数据处理方法、装置、电子设备和存储介质
CN106911784B (zh) 一种执行异步事件的方法和装置
WO2023240933A1 (zh) 一种基于区块链的分布式应用部署方法及装置
TWI698137B (zh) 無線設備的掃描啟停方法及無線設備
WO2023185042A1 (zh) 直连通道的建立方法及装置
WO2024001032A1 (zh) 在区块链系统中执行交易的方法、区块链系统和节点
US11755297B2 (en) Compiling monoglot function compositions into a single entity
CN114896637A (zh) 一种数据处理方法、装置、电子设备和存储介质
US20140280965A1 (en) Software product instance placement
CN114692185A (zh) 数据处理方法及装置
CN114546648A (zh) 任务处理方法及任务处理平台
CN114896636A (zh) 一种数据处理方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934862

Country of ref document: EP

Kind code of ref document: A1