CN114820187A - Data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114820187A
CN114820187A (Application No. CN202210343405.XA)
Authority
CN
China
Prior art keywords
task
computing
heterogeneous
engine
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210343405.XA
Other languages
Chinese (zh)
Inventor
谢桂鲁
邓福喜
石柯
王毅飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Ant Blockchain Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ant Blockchain Technology Shanghai Co Ltd filed Critical Ant Blockchain Technology Shanghai Co Ltd
Priority to CN202210343405.XA
Publication of CN114820187A
PCT application PCT/CN2022/135207 published as WO2023185044A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/76Adapting program code to run in a different environment; Porting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present specification provides a data processing method, apparatus, electronic device, and storage medium, applied to a first node device on which a first blockchain node is deployed, where an off-chain computation contract is deployed on the blockchain network to which the first blockchain node belongs. The method includes: monitoring a task event generated by the off-chain computation contract for a first computing task; and, when the first blockchain node belongs to a participant node corresponding to the first computing task, invoking a first standard computing engine deployed on the first node device to execute the first computing task. The first standard computing engine is configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, where the standard execution result is obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The embodiments of this specification relate to the field of blockchain technology, and in particular to a data processing method and apparatus, an electronic device, and a storage medium.
Background
A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. In a blockchain system, data blocks are linked into a chain structure in chronological order, and cryptography guarantees a distributed ledger that cannot be tampered with or forged. Because blockchains are decentralized, tamper-proof, and autonomous, they are receiving increasing attention and adoption.
A blockchain network can carry off-chain computing tasks defined by smart contracts: each node device hosting a blockchain node can, guided by events generated by a smart contract, invoke a locally deployed off-chain computing engine to carry out the task. However, the off-chain computing engines available on node devices are limited in number and must follow a specific development paradigm, so existing computing engines that do not conform to that paradigm would require substantial modification before they could be used for off-chain computing tasks, making algorithm migration costly.
Disclosure of Invention
An object of the present invention is to provide a data processing method, a data processing apparatus, an electronic device, and a storage medium.
According to a first aspect of one or more embodiments of the present specification, a data processing method is provided, applied to a first node device on which a first blockchain node is deployed, where an off-chain computation contract is deployed on the blockchain network to which the first blockchain node belongs; the method includes:
monitoring a task event generated by the off-chain computation contract for a first computing task; and
when the first blockchain node belongs to a participant node corresponding to the first computing task, invoking a first standard computing engine deployed on the first node device to execute the first computing task, the first standard computing engine being configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, the standard execution result being obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
According to a second aspect of one or more embodiments of the present specification, there is provided a data processing apparatus, applied to a first node device on which a first blockchain node is deployed, where an off-chain computation contract is deployed on the blockchain network to which the first blockchain node belongs; the apparatus includes:
an event monitoring unit, configured to monitor a task event generated by the off-chain computation contract for a first computing task; and
a task execution unit, configured to, when the first blockchain node belongs to a participant node corresponding to the first computing task, invoke a first standard computing engine deployed on the first node device to execute the first computing task, the first standard computing engine being configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, the standard execution result being obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
According to a third aspect of one or more embodiments of the present specification, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the first aspect by executing the executable instructions.
According to a fourth aspect of one or more embodiments of the present specification, a computer-readable storage medium is provided, storing computer instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
In the embodiments of the present specification, a standard computing engine that conforms to the development paradigm for executing off-chain computing tasks is deployed on the node device, and a conversion module is introduced as an intermediary between the standard computing engine and a heterogeneous computing engine. As a result, a heterogeneous computing engine that does not conform to the paradigm can support off-chain computing tasks with little or no modification, which reduces algorithm migration cost while extending the node device's limited computing-engine resources to support more types of off-chain computing tasks.
Drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present disclosure; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a data processing method according to an exemplary embodiment.
FIG. 2a is an architectural diagram of a data processing system, according to an exemplary embodiment.
FIG. 2b is an architectural diagram of another data processing system, according to an exemplary embodiment.
FIG. 3 is a scenario diagram of a compute engine interaction provided by an exemplary embodiment.
Fig. 4 is a schematic structural diagram of an apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram of a data processing apparatus according to an example embodiment.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only a subset of the embodiments of the present specification, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without inventive effort shall fall within the scope of protection of the present specification.
Fig. 1 is a flowchart of a data processing method according to an exemplary embodiment. The method is applied to a first node device on which a first blockchain node is deployed, where an off-chain computation contract is deployed on the blockchain network to which the first blockchain node belongs; the method includes the following steps:
S102: monitoring a task event generated by the off-chain computation contract for a first computing task.
in this embodiment of the present specification, a down-link computation contract is an on-link bearer for carrying down-link computation tasks, and a number of subtasks included in the down-link computation contract are defined in the down-link computation contract, and are used to describe a data flow direction in a down-link computation task and a computation cooperation process of each node device. Since the calculation contract under the chain is deployed on the blockchain network, the participant nodes of the calculation task under the chain defined by the calculation contract under the chain are limited not to exceed the range of each blockchain node in the blockchain network. Obviously, a plurality of calculation contracts under the chain can be deployed in the same block chain network, and the number and the performance of the participating party nodes involved in different calculation contracts under the chain can be flexibly configured, so that the deployment of the calculation tasks under the chain with different task types, task requirements and task scales can be realized depending on the same block chain network.
To illustrate how an off-chain computation contract directs the execution of the off-chain computing tasks it defines, the execution logic is briefly described below using a typical contract. A user may generate the code of an off-chain computation contract through a visual contract orchestration system and deploy it on the blockchain network, so that the contract defines the workflow of a type of off-chain computing task, embodied as a number of subtasks with an execution dependency order. After deployment, a user authorized to call the contract can create and start an off-chain computing task by submitting a task-creation transaction to it. On receiving the transaction, the contract creates a task instance belonging to the initiating user; the instance maintains the task completion status of the off-chain computing task, specifically the completion status of each of its subtasks.
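The contract-side bookkeeping described above can be sketched as follows. This is a minimal illustration only: the patent does not specify contract code, so every class, method, and field name here is a hypothetical stand-in.

```python
# Hypothetical sketch of an off-chain computation contract's task instances;
# names and data layout are invented for illustration, not from the patent.
from dataclasses import dataclass, field


@dataclass
class TaskInstance:
    task_id: str
    # completion status of every subtask, keyed by subtask name
    subtask_done: dict = field(default_factory=dict)


class OffChainComputeContract:
    def __init__(self, subtask_order):
        # the predefined execution dependency order of the subtasks
        self.subtask_order = list(subtask_order)
        self.instances = {}

    def create_task(self, task_id):
        # triggered by a task-creation transaction from an authorized user
        inst = TaskInstance(task_id, {s: False for s in self.subtask_order})
        self.instances[task_id] = inst
        return self.subtask_order[0]  # first subtask to emit an event for

    def next_subtask(self, task_id):
        # the next subtask whose execution condition is satisfied
        for s in self.subtask_order:
            if not self.instances[task_id].subtask_done[s]:
                return s
        return None  # all subtasks completed: the task is finished

    def complete_subtask(self, task_id, subtask):
        # triggered by a result-return transaction from a participant node
        self.instances[task_id].subtask_done[subtask] = True
        return self.next_subtask(task_id)
```

The contract thus only tracks state and decides which subtask event to emit next; no actual computation happens on-chain.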
After the off-chain computation contract responds to the task-creation transaction and generates the corresponding task instance, it triggers execution of the first subtask of that instance by generating an event that names the participant nodes of the first subtask. Every blockchain node in the network can listen for this event; a node device whose blockchain node determines that it belongs to the participant nodes of the first subtask then invokes the off-chain computing and/or storage resources matching the subtask to execute it off-chain. When execution finishes, the node device of the participant node submits a result-return transaction carrying the execution result of the first subtask to the off-chain computation contract, so that the contract updates the task completion status of the corresponding task instance. For example, when the first subtask executes successfully, the contract marks its completion status in the task instance as completed, which triggers the next subtask according to the predefined dependency order of the subtasks of the off-chain computing task; an event naming the participant nodes of that subtask is then generated for every blockchain node in the network to listen for, and the process continues as for the first subtask.
This forms a cycle: the off-chain computation contract updates the task completion status → the contract generates a subtask event → blockchain nodes listen for the event and the designated node devices execute the subtask → the node devices submit a result-return transaction to the contract → the contract updates the task completion status again. When the completion status of every subtask of a task instance in the contract is completed, the off-chain computing task corresponding to that instance is deemed finished.
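The node-side half of the cycle described above can be sketched as a simple event loop. The event and transaction interfaces below are hypothetical stand-ins, not a real blockchain SDK.

```python
# Illustrative node-side loop for the event-listening / result-return cycle.
# `listen_events` yields task events emitted by the off-chain computation
# contract; `send_result_tx` submits a result-return transaction. Both are
# hypothetical interfaces invented for this sketch.
def run_node(node_id, listen_events, send_result_tx, engines):
    for event in listen_events():
        if node_id not in event["participants"]:
            continue  # this node is not a designated participant: ignore
        engine = engines[event["task_type"]]  # locally deployed compute engine
        result = engine(event["payload"])     # execute the subtask off-chain
        # the result-return transaction updates the contract's completion
        # status, which in turn triggers the event for the next subtask
        send_result_tx(event["task_id"], event["subtask_id"], result)
```

In a real deployment this loop would live in the scheduling engine on the node device; here the engines are plain callables for brevity.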
It is easy to see that, during execution of an off-chain computing task, the contract itself performs only scheduling work: creating task instances, receiving subtask results, and scheduling and issuing subtasks. The actual work the task defines, such as data computation, data transfer, and data storage, is not executed on the chain; these resource-intensive operations are scheduled to run off-chain on the respective node devices. Through the event-listening and transaction-return mechanisms, blockchain-based distributed computing is realized: the off-chain computing task is anchored on the chain by the off-chain computation contract, off-chain resources are fully utilized while the whole execution flow remains traceable, and the blockchain provides reliable information exchange and cooperative computing among different node devices. Moreover, because off-chain computing tasks are defined in contract form and their design is not constrained by on-chain resources, different off-chain computation contracts can be designed to extend the on-chain cooperation model with off-chain resources to meet different practical requirements.
In an embodiment of the present specification, the off-chain computation contract maintains a task completion status corresponding to the off-chain computing task, which describes the completion status of each subtask of the task. When the first computing task is a subtask of the off-chain computing task, monitoring the task event generated by the contract for the first computing task includes: monitoring the task event when the task completion status satisfies the execution condition corresponding to the first computing task. In these embodiments, an off-chain computing task is represented on the contract as a task instance, and its task completion status is maintained in that instance, specifically as the completion status of each subtask. Because the execution dependency order of the subtasks is predefined, the execution condition of each subtask is also determined, so the contract can determine the next first computing task to execute from the completion status of each subtask and initiate a task event for it.
Further, the method also includes: when execution of the first computing task finishes, submitting, through the first blockchain node, a result-return transaction containing the execution result of the first computing task to the off-chain computation contract, so as to update the task completion status of the off-chain computing task maintained by the contract. As described above, when the node device finishes executing the first computing task with the invoked resources, the result-return transaction updates the task completion status maintained by the contract, so that the contract can determine the next subtask to execute according to the dependency order of the subtasks and generate a task event for it. In an embodiment of the present specification, the entity that listens for the task events generated by the contract and submits result-return transactions to it is a scheduling engine deployed on the first node device.
In the embodiments of this specification, the task event for the first computing task that the node device monitors carries description information of the participant nodes of the first computing task; that is, the first computing task specifies the identities of the blockchain nodes that need to participate in it. When the description information includes the identification of the first blockchain node deployed on the first node device, the first node device determines that its blockchain node is a participant of the first computing task and must respond by executing it; otherwise, the first node device determines that its blockchain node is not a participant and does not respond to the first computing task.
In addition, the task event records the task identifiers of both the off-chain computing task and the first computing task, distinguishing different tasks and subtasks. This mainly allows any node device that finishes executing the first computing task and submits a result-return transaction to correctly identify which off-chain computing task the result belongs to, so that the contract can correctly update the completion status of the first computing task in the corresponding task instance. This covers both the case where one task includes multiple subtasks and the case where the same off-chain computation contract has created task instances for multiple off-chain computing tasks at once. The first computing task also records the operations it requires, such as computation and data transfer, and specifies the source of the required data; this information tells each node device the task type and implementation of the first computing task, guiding the node device, once it has matched them to its callable resources, to execute the task as intended.
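A plausible shape for such a task event, and the node's response decision, is sketched below. Every field name is invented for illustration; the patent does not fix an event schema.

```python
# Hypothetical task event carrying the fields described above: the task and
# subtask identifiers, the participant description, the operation to perform,
# and the data source. All field names are illustrative.
example_event = {
    "task_id": "task-001",        # identifies the off-chain computing task
    "subtask_id": "subtask-2",    # identifies the first computing task in it
    "participants": ["node-A", "node-C"],
    "operation": "aggregate",     # computation the subtask requires
    "data_source": "local-db",    # where the required input data comes from
}


def should_execute(event: dict, my_node_id: str) -> bool:
    # a node responds only when its own identifier appears in the
    # participant description carried by the event
    return my_node_id in event["participants"]
```

The (`task_id`, `subtask_id`) pair is what lets a later result-return transaction be matched back to the right task instance.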
As described above, the task completion status is updated by the off-chain computation contract in response to transactions corresponding to the off-chain computing task, which include the task-creation transaction for the task and the result-return transactions submitted by node devices when any subtask finishes executing.
In an embodiment of the present specification, the computation contract maintains a task completion status for each of one or more off-chain computing tasks. Generally, an off-chain computation contract defines only one type of off-chain computing task, but multiple task instances of that task can be created, each recording its own task completion status. The task instances maintained on the contract may be created by different users each submitting a task-creation transaction, or by the same user submitting task-creation transactions multiple times, but they all share the same execution logic; that is, all tasks maintained by the contract are of the same type.
S104: when the first blockchain node belongs to a participant node corresponding to the first computing task, invoking a first standard computing engine deployed on the first node device to execute the first computing task, the first standard computing engine being configured to: send a standard task request corresponding to the first computing task to a conversion module, which converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, the standard execution result being obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
In this embodiment, when the first node device determines that its deployed first blockchain node belongs to the participant nodes of the first computing task, it triggers execution of the task and searches locally for callable resources that can support it. In an embodiment of the present specification, the entity that searches for callable resources and invokes the computing engine to execute the off-chain computing task is the scheduling engine deployed on the first node device. Note, however, that although the standard computing engine declares to the first node device that it can process the first computing task, it does not itself perform the actual execution, such as data computation; it merely acts as an intermediary between the scheduling engine on the first node device and the first heterogeneous computing engine that actually executes the task, relaying and converting the task request and the execution result.
In the embodiments of this specification, an off-chain computing engine that the first node device can invoke directly must comply with a standardized development paradigm in order to support execution of the off-chain computing tasks defined by the off-chain computation contract; that is, only off-chain computing engines built to that paradigm can support execution of the first computing task. The paradigm mainly prescribes the programming language in which the off-chain computing engine is written and/or the network transport-layer protocols it supports. This limitation exists because the on-chain/off-chain computing system, comprising the callable resources such as the scheduling engine, computing engines, and data engines of each node device hosting a blockchain node, is installed with only specific SDKs (Software Development Kits) and specific network transport-layer protocols. If off-chain computing engines written in other programming languages are to support off-chain computing tasks, there are two conventional approaches: rewrite engines built under other paradigms in the programming language required by the on-chain/off-chain computing system, or install SDKs supporting more programming languages into the system. The former amounts to redeveloping the existing computing engine, with extensive program modification and high porting cost; the latter requires upgrading the underlying architecture of the on-chain/off-chain computing system and likewise faces high development cost.
A heterogeneous computing engine in the embodiments of this specification is an off-chain computing engine that does not conform to the development paradigm and cannot be invoked directly by the first node device, whereas a standard computing engine conforms to the paradigm and can be invoked directly. At the execution level this means: when invoking the first standard computing engine, the scheduling engine sends it a standard task request for the first computing task, which the first standard computing engine can recognize and respond to but the first heterogeneous computing engine cannot. Correspondingly, the first heterogeneous computing engine has its own calling conventions: it can recognize heterogeneous task requests in its own format, execute the corresponding computing task in response, and generate a heterogeneous execution result in its own format, which the scheduling engine cannot directly recognize.
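The two request formats and the conversion between them can be sketched as follows. The field names and the JSON wire format are invented for illustration, since the patent does not fix either format.

```python
# Sketch of converting between the standard task request format and one
# (hypothetical) heterogeneous engine's native format. All field names and
# the choice of JSON are assumptions made for this illustration.
import json


def standard_to_heterogeneous(standard_request: dict) -> str:
    # suppose the heterogeneous engine accepts only a JSON string
    # laid out with its own keys
    return json.dumps({"op": standard_request["task_type"],
                       "args": standard_request["inputs"]})


def heterogeneous_to_standard(task_id: str, heterogeneous_result: str) -> dict:
    # wrap the engine's native result back into the standard result format
    raw = json.loads(heterogeneous_result)
    return {"task_id": task_id, "status": "success", "output": raw["value"]}
```

Each pairing of a standard computing engine with a heterogeneous engine would carry its own pair of such conversion functions.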
Since the interaction messages exchanged between the first heterogeneous computing engine and the scheduling engine cannot be identified by each other, the first heterogeneous computing engine cannot be directly attached to, and thus directly invoked by, the scheduling engine; in other words, without substantial modification to either the first heterogeneous computing engine or the on-chain/off-chain computing system, the first heterogeneous computing engine cannot be introduced into that system to participate in executing off-chain computing tasks. Therefore, in the embodiments of the present specification, the first standard computing engine is additionally deployed in the node device as an intermediary connected to the existing first heterogeneous computing engine. On one hand, the scheduling engine can invoke it without obstacle, since doing so is equivalent to invoking an off-chain computing engine conforming to the relevant development paradigm; on the other hand, the first heterogeneous computing engine, which does not conform to that paradigm, can be connected to the on-chain/off-chain computing system with no or only minor modification, thereby reducing development cost.
As described above, the first node device invokes the first standard computing engine to execute the first computing task. Specifically, the scheduling engine deployed in the first node device generates a standard task request corresponding to the first computing task and sends it to the first standard computing engine. Each first standard computing engine uniquely corresponds to one first heterogeneous computing engine; after receiving the standard task request, the first standard computing engine converts it into a heterogeneous task request that the corresponding first heterogeneous computing engine can identify, and causes that request to be received by the first heterogeneous computing engine. After receiving the heterogeneous task request, the first heterogeneous computing engine executes the first computing task, that is, performs the corresponding computing operations according to the task type, input data, and other parameters carried in the request. Finally, when the first heterogeneous computing engine finishes executing the first computing task, the generated heterogeneous execution result may be converted by the conversion module into a standard execution result that the first standard computing engine and the scheduling engine can recognize; the first standard computing engine then passes the standard execution result received from the conversion module back to the scheduling engine, and the scheduling engine initiates a result-return transaction carrying the result of the first computing task to the off-chain computing contract.
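The request/response round trip described above can be sketched in a few lines of Python. This is a minimal illustration only; every class, field, and payload name here is an assumption chosen for the sketch, not part of the specification:

```python
from dataclasses import dataclass

@dataclass
class StandardTaskRequest:
    # Format the scheduling engine emits (fields are illustrative).
    task_id: str
    task_type: str
    input_data: dict

class ConversionModule:
    """Translates between the scheduler's standard format and the
    heterogeneous engine's native format."""
    def to_heterogeneous(self, req):
        # Heterogeneous task request in the engine's own (assumed) schema.
        return {"job": req.task_id, "kind": req.task_type, "payload": req.input_data}

    def to_standard(self, hetero_result):
        # Standard execution result the scheduling engine can recognize.
        return {"task_id": hetero_result["job"], "output": hetero_result["value"]}

class FirstStandardEngine:
    def __init__(self, converter, hetero_engine):
        self.converter = converter
        self.hetero = hetero_engine  # the uniquely corresponding heterogeneous engine

    def handle(self, req):
        hetero_req = self.converter.to_heterogeneous(req)
        hetero_res = self.hetero(hetero_req)            # engine executes the task
        return self.converter.to_standard(hetero_res)   # passed back to the scheduler
```

In this sketch a stand-in heterogeneous engine can be any callable that consumes the native request and returns a native result, which keeps the conversion logic on the standard-engine side, as the text describes.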
In an embodiment of the present specification, the first heterogeneous computing engine is either deployed in the first node device, or is not deployed in the first node device and instead establishes a network connection with the first standard computing engine. When the first heterogeneous computing engine is deployed on the first node device, a local connection exists between it and the first standard computing engine, and the two interact through local calls. Alternatively, the first heterogeneous computing engine may be deployed outside the first node device, in which case a network connection is established between the heterogeneous computing engine, acting as an external device, and the standard computing engine deployed on the first node device. The network connection may be established using a peer-to-peer architecture or a client-server architecture; in the client-server case, the first heterogeneous computing engine may act as the client and the first standard computing engine as the server, or vice versa.
Further, the network connection is established between the first standard computing engine and the first heterogeneous computing engine through a network protocol supported by the first heterogeneous computing engine. In the embodiments of the present specification, the heterogeneous computing engine, being an existing computing engine, has a fixed set of supported network transport layer protocols, and the scheduling engine in the first node device likewise supports only specific network transport layer protocols. This means that even if the programming language of a heterogeneous computing engine conforms to the development paradigm of the on-chain/off-chain computing system, an inconsistency between its network transport layer protocol and that supported by the scheduling engine would still prevent it from directly connecting to the scheduling engine and thus from participating in the execution of off-chain computing tasks.
Therefore, in the embodiments of the present disclosure, the standard computing engine is introduced as an intermediary that performs conversion and adaptation of network transport layer protocols: a network connection over a first transport layer protocol supported by the scheduling engine is established between the first standard computing engine and the scheduling engine, and a network connection over a second transport layer protocol supported by the first heterogeneous computing engine is established between the first standard computing engine and the first heterogeneous computing engine. This breaks through the network transmission barrier between the scheduling engine and the first heterogeneous computing engine caused by inconsistent network protocols: the standard computing engine, which supports multiple network protocols simultaneously, serves as a bridge connecting the two, so that heterogeneous computing engines with heterogeneous network protocols can be connected to the on-chain/off-chain computing system without any modification to their network transmission.
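As a toy illustration of such protocol adaptation, suppose the scheduler side uses newline-delimited JSON while the heterogeneous engine expects length-prefixed frames. Both framings are assumptions chosen for the sketch — the specification does not fix any concrete wire protocol:

```python
import json

def encode_scheduler_frame(msg: dict) -> bytes:
    # "First transport layer protocol": newline-delimited JSON (assumed).
    return (json.dumps(msg) + "\n").encode()

def encode_engine_frame(msg: dict) -> bytes:
    # "Second transport layer protocol": 4-byte big-endian length prefix (assumed).
    body = json.dumps(msg).encode()
    return len(body).to_bytes(4, "big") + body

def bridge(scheduler_frame: bytes) -> bytes:
    """One direction of the standard engine's bridging role: decode a frame
    received from the scheduling engine and re-frame it for the heterogeneous
    engine, leaving the message content untouched."""
    msg = json.loads(scheduler_frame.rstrip(b"\n").decode())
    return encode_engine_frame(msg)
```

The reverse direction (engine result back to scheduler) would mirror this, which is why the standard engine must speak both protocols at once.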
In an embodiment of the present specification, the conversion module is an interface program of the first standard computing engine or of the first heterogeneous computing engine. Fig. 2a and fig. 2b are schematic architecture diagrams of a data processing system according to an exemplary embodiment; they depict the basic architecture of the part of the on-chain/off-chain computing system limited to the first node device, in the case where the first heterogeneous computing engine is not deployed on the first node device. In fig. 2a the conversion module is deployed on the first node device as an interface program of the first standard computing engine, while in fig. 2b the conversion module is deployed outside the first node device as an interface program of the first heterogeneous computing engine, residing on an external device together with the first heterogeneous computing engine.
In an embodiment of the present specification, the first standard computing engine is further configured to acquire, from the first node device, the input data on which the first computing task depends, and carry that input data in the standard task request. Since the first heterogeneous computing engine cannot directly access the on-chain/off-chain computing system and must instead be mediated by the standard computing engine, it lacks the ability to invoke the system directly. For example, while executing the first computing task the first heterogeneous computing engine may need to invoke the scheduling engine on the first node device to access certain on-chain information of the blockchain network (block information, contract state, etc.), but it cannot fulfill this need because it knows neither the network address of the scheduling engine nor its invocation rules (including the definitions of the request and response structures; without these, the heterogeneous computing engine and the scheduling engine cannot identify each other's messages even if they can transmit them to each other). To avoid this, after receiving the invocation from the scheduling engine, the first standard computing engine may first collect, in advance, the data in the on-chain/off-chain computing system on which the first heterogeneous computing engine will depend when executing the first computing task, carry that data as input data in the standard task request, and then send the request to the conversion module, so that the first heterogeneous computing engine obtains the input data directly and need not fetch it from the on-chain/off-chain computing system on the fly during execution.
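A minimal sketch of this prefetching step follows; the `chain_reader` callback and the key names are assumptions introduced for illustration:

```python
class PrefetchingStandardEngine:
    """Collects the on-chain data the heterogeneous engine will depend on,
    since that engine cannot query the scheduling engine itself."""
    def __init__(self, chain_reader):
        # chain_reader stands in for whatever reads block info / contract state.
        self.chain_reader = chain_reader

    def build_standard_request(self, task_id, dependent_keys):
        # Prefetch every dependency and embed it in the standard task request,
        # so the heterogeneous engine receives its inputs up front.
        input_data = {key: self.chain_reader(key) for key in dependent_keys}
        return {"task_id": task_id, "input_data": input_data}
```

The request built here would then be handed to the conversion module as described above.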
In an embodiment of the present specification, the first heterogeneous computing engine triggers sending of the heterogeneous execution result to the conversion module when the first computing task is completely executed, and/or when it receives a result query request for the first computing task. That is, the first heterogeneous computing engine may send the generated heterogeneous execution result to the conversion module at the moment the first computing task completes, whereupon the conversion module returns the converted standard execution result to the first standard computing engine; alternatively, the first heterogeneous computing engine may store the heterogeneous execution result locally after the first computing task completes, and only after the first standard computing engine sends a result query request for the first computing task does it send the heterogeneous execution result to the conversion module, which then returns the converted standard execution result to the first standard computing engine. The embodiments of the specification thus provide at least two ways of obtaining the heterogeneous execution result corresponding to the first computing task — actively pushing the result and waiting to be queried for it — so as to suit a variety of application scenarios.
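The two delivery modes — pushing on completion versus waiting for a result query — can be sketched as follows (all names are illustrative assumptions):

```python
class HeteroEngineResults:
    """Result-delivery side of a heterogeneous engine, supporting both modes."""
    def __init__(self, push_to_converter=None):
        self.push_to_converter = push_to_converter  # push mode, if configured
        self.local_results = {}                     # store for query (poll) mode

    def task_finished(self, task_id, hetero_result):
        self.local_results[task_id] = hetero_result
        if self.push_to_converter is not None:
            # Active push: hand the result to the conversion module immediately.
            self.push_to_converter(task_id, hetero_result)

    def on_result_query(self, task_id):
        # Poll mode: only a result query request triggers delivery.
        return self.local_results.get(task_id)
```

Whether push, poll, or both are enabled would depend on the application scenario, matching the "at least two ways" the text describes.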
In the embodiments of the present specification, by deploying on a node device a standard computing engine that conforms to the development paradigm required for executing off-chain computing tasks, and by introducing a conversion module as a conversion medium between the standard computing engine and a heterogeneous computing engine, a heterogeneous computing engine that does not conform to the relevant development paradigm can support execution of off-chain computing tasks with no or only minor modification. This reduces the algorithm migration cost and, at the same time, expands the limited computing engine resources of the node device to support more types of off-chain computing tasks.
When the first computing task has multiple corresponding participant nodes, multiple node devices jointly participate in executing it; specifically, data interaction takes place among the multiple computing engines that are deployed on, or connected via network connections to, these node devices. In the embodiments of the present specification, the first heterogeneous computing engine, which establishes a network connection with the first node device, may likewise need to interact with other computing engines in the course of executing the first computing task, and it may implement such inter-engine interaction in a variety of ways.
In one embodiment, the first heterogeneous computing engine is configured to: in the process of executing the first computing task in response to the heterogeneous task request, acquire first data sent by other computing engines for use during execution; and/or send second data generated during execution to the other computing engines so that they can use it in their own execution of the first computing task. In this embodiment, the first heterogeneous computing engine can learn from the heterogeneous task request which computing engines participate in the first computing task. It may therefore receive, during execution, first data from other computing engines that are also executing the first computing task, for its own subsequent use; and it may send the second data it generates to those other participating computing engines that, per the definition of the first computing task, are required to obtain it, for their subsequent use.
As shown in fig. 3, a schematic diagram of computing engine interaction provided in an exemplary embodiment, assume the participant nodes corresponding to the first computing task include a first blockchain node, a second blockchain node, and a third blockchain node, and that the computing engines involved in executing the first computing task comprise a standard computing engine A and an off-chain computing engine A deployed on the first node device, an off-chain computing engine B deployed on the second node device, a standard computing engine B deployed on the third node device, a heterogeneous computing engine A that has established a network connection with standard computing engine A, and a heterogeneous computing engine B that has established a network connection with standard computing engine B. Assume the definition of the first computing task includes a data interaction process in which heterogeneous computing engine A acquires first data from off-chain computing engine B and sends second data generated by itself to heterogeneous computing engine B. Then heterogeneous computing engine A may acquire the first data provided by off-chain computing engine B through a network connection established with off-chain computing engine B, and send the second data it generates to heterogeneous computing engine B through a network connection established with heterogeneous computing engine B, thereby completing the data interaction process between computing engines defined by the first computing task, so that the first computing task is ultimately completed through the cooperative interaction of the multiple computing engines.
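The direct-connection variant of this example can be sketched with in-memory channels standing in for the network connections. Engine names and the `Channel` abstraction are assumptions for illustration only:

```python
from collections import deque

class Channel:
    """A one-directional in-memory stand-in for a network connection."""
    def __init__(self):
        self.queue = deque()
    def send(self, data):
        self.queue.append(data)
    def recv(self):
        return self.queue.popleft()

class DirectlyConnectedEngine:
    """An engine holding direct connections to the peers named in its task."""
    def __init__(self, name):
        self.name = name
        self.inbound = {}   # peer name -> Channel this engine reads from
        self.outbound = {}  # peer name -> Channel this engine writes to

    def receive_from(self, peer):
        return self.inbound[peer].recv()

    def send_to(self, peer, data):
        self.outbound[peer].send(data)
```

Heterogeneous engine A would thus read its first data from the channel to off-chain engine B and write its second data to the channel toward heterogeneous engine B.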
In another embodiment, the first heterogeneous computing engine is configured to: in the process of executing the first computing task in response to the heterogeneous task request, acquire first data that the first standard computing engine has received from other computing engines, for use during execution; and/or send second data generated during execution to the other computing engines through the first standard computing engine, so that they can use it in their own execution of the first computing task. Unlike the foregoing embodiment, this embodiment addresses the case where the heterogeneous computing engine can establish no network connections other than the one with its corresponding standard computing engine. Network connections may fail to be established between a heterogeneous computing engine and other off-chain computing engines conforming to the on-chain/off-chain computing system's development paradigm (for example, due to inconsistent network protocols), and even where a connection can be established, the engines may be unable to identify each other's interaction data (for example, due to inconsistent structure definitions caused by different programming languages). To ensure effective interaction, direct data exchange over network connections between the heterogeneous computing engine and other computing engines should therefore be avoided as much as possible.
In this embodiment, network connections are established among all the off-chain computing engines that participate in executing the first computing task and conform to the development paradigm of the on-chain/off-chain computing system. When a heterogeneous computing engine needs to exchange data with other computing engines, the conversion and forwarding of the interaction data are carried out with its corresponding standard computing engine as intermediary, thereby ensuring the effectiveness of the interaction between the heterogeneous computing engine and the other computing engines.
As shown in fig. 3, assume that heterogeneous computing engine A has no direct network connection with heterogeneous computing engine B, off-chain computing engine A, or off-chain computing engine B, and again assume that the definition of the first computing task includes a data interaction process in which heterogeneous computing engine A obtains first data from off-chain computing engine B and sends second data generated by itself to heterogeneous computing engine B. Off-chain computing engine B may then send the first data it generates during execution to standard computing engine A. Heterogeneous computing engine A, on one hand, receives that first data after standard computing engine A has obtained it from off-chain computing engine B and the conversion module has converted it into a form heterogeneous computing engine A can recognize; on the other hand, the second data heterogeneous computing engine A generates while executing the first computing task is converted by the conversion module into a form recognizable by the on-chain/off-chain computing system and sent to standard computing engine A, which forwards it to standard computing engine B, where it is converted by the corresponding conversion module and finally delivered to heterogeneous computing engine B. This completes the data interaction process between the computing engines defined by the first computing task, so that the first computing task is ultimately completed through the cooperative interaction of the multiple computing engines.
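The relayed path in this example — off-chain engine B to standard engine A, conversion, delivery to heterogeneous engine A, and the symmetric outbound leg toward heterogeneous engine B — can be sketched as below. The conversion callables and all names are assumptions for illustration:

```python
class StandardEngineRelay:
    """One node's standard engine acting as relay for its heterogeneous engine.
    convert_in / convert_out stand in for the conversion module."""
    def __init__(self, convert_in, convert_out):
        self.convert_in = convert_in      # system form -> hetero-native form
        self.convert_out = convert_out    # hetero-native form -> system form
        self.peer = None                  # a standard engine on another node
        self.hetero_inbox = []            # data delivered to the local hetero engine

    def deliver_system_data(self, system_data):
        # Called by another paradigm-conforming engine over the normal network;
        # converted before being handed to the local heterogeneous engine.
        self.hetero_inbox.append(self.convert_in(system_data))

    def forward_hetero_data(self, hetero_data):
        # Called by the local heterogeneous engine; converted to the
        # system-recognizable form and relayed onward via the peer.
        self.peer.deliver_system_data(self.convert_out(hetero_data))
```

Standard engines A and B would each hold an instance, with `peer` pointing at the other, so heterogeneous engines never exchange data directly.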
The other computing engines referred to in the embodiments of the present specification include: an off-chain computing engine, other than the first standard computing engine, deployed on any node device, or a heterogeneous computing engine, other than the first heterogeneous computing engine, that has established a network connection with a standard computing engine deployed on any node device, where the blockchain node in the blockchain network deployed on any such node device belongs to the participant nodes.
FIG. 4 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 4, at the hardware level the apparatus includes a processor 402, an internal bus 404, a network interface 406, a memory 408, and a non-volatile storage 410, and may also include hardware required for other functions. One or more embodiments of the present description may be implemented in software, for example by processor 402 reading the corresponding computer program from non-volatile storage 410 into memory 408 and then executing it. Of course, apart from software implementations, the one or more embodiments of this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Fig. 5 is a block diagram of a data processing apparatus provided in the present specification according to an exemplary embodiment, which may be applied to the device shown in fig. 4 to implement the technical solution of the present specification. The apparatus is applied to a first node device on which a first blockchain node is deployed, and an off-chain computing contract is deployed in the blockchain network to which the first blockchain node belongs. The apparatus comprises:
an event monitoring unit 501, configured to monitor a task event for a first computing task generated by the off-chain computing contract;
a task execution unit 502, configured to, when the first blockchain node belongs to the participant nodes corresponding to the first computing task, invoke a first standard computing engine deployed on the first node device to execute the first computing task, where the first standard computing engine is configured to: send a standard task request corresponding to the first computing task to a conversion module, the conversion module converting the standard task request into a heterogeneous task request identifiable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, the standard execution result being obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
Optionally, the conversion module is an interface program of a first standard computing engine or a first heterogeneous computing engine.
Optionally, the first standard computing engine is further configured to: acquire, from the first node device, input data on which the first computing task depends, and carry the input data in the standard task request.
Optionally, the first heterogeneous computing engine triggers sending of the heterogeneous execution result to the conversion module when the first computing task is completely executed, and/or,
the first heterogeneous compute engine triggers sending of the heterogeneous execution results to the conversion module upon receiving a result query request for a first compute task.
Optionally, the first heterogeneous computing engine is deployed in the first node device, or the first heterogeneous computing engine is not deployed in the first node device and a network connection is established between the first heterogeneous computing engine and the first standard computing engine.
Optionally, the network connection is established by the first standard compute engine and the first heterogeneous compute engine through a network protocol supported by the first heterogeneous compute engine.
Optionally, the first heterogeneous computing engine is configured to: in the process of executing the first computing task in response to the heterogeneous task request, acquiring first data sent by other computing engines for use in the process of executing the first computing task; and/or,
sending second data generated during execution of the first computing task in response to the heterogeneous task request to the other computing engines to cause the other computing engines to use the second data during execution of the first computing task.
Optionally, the first heterogeneous computing engine is configured to:
in the process of executing the first computing task in response to the heterogeneous task request, acquiring first data received by the first standard computing engine from other computing engines for use in the process of executing the first computing task; and/or,
sending second data generated during execution of the first computing task in response to the heterogeneous task request to the other computing engine through the first standard computing engine to cause the other computing engine to use the second data during execution of the first computing task.
Optionally, the other computing engines include:
an off-chain computing engine, other than the first standard computing engine, deployed on any node device, or a heterogeneous computing engine, other than the first heterogeneous computing engine, that has established a network connection with a standard computing engine deployed on any node device, where the blockchain node in the blockchain network deployed on any such node device belongs to the participant nodes.
Optionally, the off-chain computing contract maintains a task completion state corresponding to an off-chain computing task, the task completion state describing the completion status of each subtask included in the off-chain computing task; when the first computing task is a subtask of the off-chain computing task, the event monitoring unit 501 is specifically configured to:
monitor the task event for the first computing task generated by the off-chain computing contract when the task completion state satisfies the execution condition corresponding to the first computing task.
Optionally, the task completion state is updated by the off-chain computing contract in response to being triggered by a transaction corresponding to the off-chain computing task, where such a transaction includes a task creation transaction corresponding to the off-chain computing task, or a result-return transaction initiated by any node device when any of the subtasks is completely executed.
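The gating described above — acting on a subtask's event only once the completion states of its prerequisite subtasks are satisfied — can be sketched as follows (the state values and field names are assumptions for illustration):

```python
def execution_condition_met(task_completion_state, prerequisite_subtasks):
    """True when every prerequisite subtask, as recorded in the completion
    state maintained by the off-chain computing contract, is marked done."""
    return all(task_completion_state.get(s) == "done" for s in prerequisite_subtasks)

def maybe_handle_event(task_completion_state, prerequisite_subtasks, handler):
    # The event monitoring unit only reacts to the task event when the
    # execution condition for the first computing task holds.
    if execution_condition_met(task_completion_state, prerequisite_subtasks):
        handler()
        return True
    return False
```

Each result-return or task-creation transaction would update `task_completion_state`, after which the monitoring unit re-evaluates the condition.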
In the 1990s, an improvement to a technology could be clearly distinguished as either a hardware improvement (e.g., an improvement in circuit structure such as diodes, transistors, or switches) or a software improvement (an improvement in a method flow). However, as technology has advanced, many of today's method-flow improvements can already be regarded as direct improvements in hardware circuit structure: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized by hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program to "integrate" a digital system onto a PLD themselves, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays this programming is mostly implemented with "logic compiler" software rather than by manually making integrated circuit chips; such a compiler is similar to the software compilers used in program development, and the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It should also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can readily be obtained merely by briefly programming the method flow into an integrated circuit using one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included within it for performing the various functions may also be regarded as structures within the hardware component, or even as both software modules for performing the method and structures within the hardware component.
The systems, devices, modules, or units illustrated in the above embodiments may be implemented by a computer chip or entity, or by a product with certain functions. One typical implementation device is a server system. Of course, this specification does not exclude the possibility that, as computer technology develops in the future, the computer implementing the functionality of the above embodiments may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present description provide method operational steps as described in the embodiments or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive approaches. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or end product executes, it may execute sequentially or in parallel (e.g., parallel processors or multi-threaded environments, or even distributed data processing environments) according to the method shown in the embodiment or the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. For example, if the terms first, second, etc. are used to denote names, they do not denote any particular order.
For convenience of description, the above devices are described as being divided into various modules by function. Of course, when implementing one or more embodiments of the present specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or a module implementing a given function may be implemented by a combination of multiple sub-modules or sub-units. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions may be used in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, graphene storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is brief, and reference may be made to the corresponding parts of the method embodiment. In this specification, reference to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. Such terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided they do not contradict each other.
The above description is merely exemplary of one or more embodiments of the present specification and is not intended to limit their scope. Various modifications and alterations to the embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present specification shall fall within the scope of the claims.

Claims (14)

1. A data processing method, applied to a first node device on which a first blockchain node is deployed, wherein an off-chain computation contract is deployed in the blockchain network to which the first blockchain node belongs; the method comprising:
monitoring a task event, generated by the off-chain computation contract, for a first computing task;
when the first blockchain node belongs to a participant node corresponding to the first computing task, invoking a first standard computing engine deployed on the first node device to execute the first computing task, the first standard computing engine being configured to: send a standard task request corresponding to the first computing task to a conversion module, so that the conversion module converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, the standard execution result being obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
2. The method of claim 1, wherein the conversion module is an interface program of the first standard computing engine or of the first heterogeneous computing engine.
3. The method of claim 1, wherein the first standard computing engine is further configured to: obtain, from the first node device, the input data on which the first computing task depends, and carry the input data in the standard task request.
4. The method of claim 1, wherein the first heterogeneous computing engine triggers sending the heterogeneous execution result to the conversion module upon completing execution of the first computing task; and/or
the first heterogeneous computing engine triggers sending the heterogeneous execution result to the conversion module upon receiving a result query request for the first computing task.
5. The method of claim 1, wherein the first heterogeneous computing engine is deployed on the first node device; or the first heterogeneous computing engine is not deployed on the first node device and has established a network connection with the first standard computing engine.
6. The method of claim 5, wherein the network connection is established between the first standard computing engine and the first heterogeneous computing engine via a network protocol supported by the first heterogeneous computing engine.
7. The method of claim 1, wherein the first heterogeneous computing engine is configured to: while executing the first computing task in response to the heterogeneous task request, obtain first data sent by other computing engines, for use in executing the first computing task; and/or
send second data, generated while executing the first computing task in response to the heterogeneous task request, to the other computing engines, so that the other computing engines use the second data in executing the first computing task.
8. The method of claim 1, wherein the first heterogeneous computing engine is configured to:
while executing the first computing task in response to the heterogeneous task request, obtain, through the first standard computing engine, first data received from other computing engines, for use in executing the first computing task; and/or
send second data, generated while executing the first computing task in response to the heterogeneous task request, to the other computing engines through the first standard computing engine, so that the other computing engines use the second data in executing the first computing task.
9. The method of claim 7 or 8, wherein the other computing engines comprise:
an off-chain computing engine, other than the first standard computing engine, deployed on any node device, or a heterogeneous computing engine, other than the first heterogeneous computing engine, that has established a network connection with a standard computing engine deployed on any node device; wherein the blockchain node, in the blockchain network, deployed on that node device belongs to the participant nodes.
10. The method of claim 1, wherein the off-chain computation contract maintains a task completion status corresponding to an off-chain computation task, the task completion status describing the completion status of each subtask included in the off-chain computation task; when the first computing task is a subtask of the off-chain computation task, monitoring the task event, generated by the off-chain computation contract, for the first computing task comprises:
monitoring the task event for the first computing task, which is generated by the off-chain computation contract when the task completion status satisfies an execution condition corresponding to the first computing task.
11. The method of claim 10, wherein the task completion status is updated by the off-chain computation contract in response to a transaction corresponding to the off-chain computation task, the transaction comprising a task creation transaction corresponding to the off-chain computation task, or a result return transaction initiated by any node device when any of the subtasks finishes executing.
12. A data processing apparatus, applied to a first node device on which a first blockchain node is deployed, wherein an off-chain computation contract is deployed in the blockchain network to which the first blockchain node belongs; the apparatus comprising:
an event monitoring unit, configured to monitor a task event, generated by the off-chain computation contract, for a first computing task;
a task execution unit, configured to, when the first blockchain node belongs to a participant node corresponding to the first computing task, invoke a first standard computing engine deployed on the first node device to execute the first computing task, the first standard computing engine being configured to: send a standard task request corresponding to the first computing task to a conversion module, so that the conversion module converts the standard task request into a heterogeneous task request recognizable by a first heterogeneous computing engine; and receive a standard execution result returned by the conversion module, the standard execution result being obtained by the conversion module converting a heterogeneous execution result generated by the first heterogeneous computing engine based on the heterogeneous task request.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-11 by executing the executable instructions.
14. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-11.
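To make the request/result conversion of claim 1 concrete, the sketch below models the conversion module as an adapter between a standard computing engine and a heterogeneous computing engine. This is an illustrative sketch only, not code from the patent: all class names, field names, and formats (`StandardComputeEngine`, `ConversionModule`, `input_data`, `payload`, etc.) are hypothetical stand-ins for the engine-agnostic and engine-specific message formats the claims describe.

```python
# Hypothetical sketch of the conversion-module pattern in claim 1:
# the standard computing engine speaks only a standard request/result
# format; the conversion module translates to and from the format that
# a particular heterogeneous engine recognizes.

class HeterogeneousEngine:
    """Stands in for an engine with its own, non-standard task format."""
    def run(self, hetero_request: dict) -> dict:
        payload = hetero_request["payload"]           # engine-specific field
        return {"out": sum(payload), "code": 0}       # engine-specific result

class ConversionModule:
    """Adapter between the standard and heterogeneous formats (claim 2
    calls this an interface program of one of the two engines)."""
    def __init__(self, engine: HeterogeneousEngine):
        self.engine = engine

    def submit(self, standard_request: dict) -> dict:
        # Standard task request -> heterogeneous request the engine can read.
        hetero_request = {"payload": standard_request["input_data"]}
        hetero_result = self.engine.run(hetero_request)
        # Heterogeneous execution result -> standard execution result.
        return {
            "task_id": standard_request["task_id"],
            "result": hetero_result["out"],
            "ok": hetero_result["code"] == 0,
        }

class StandardComputeEngine:
    """Invoked by the node device when a task event marks it a participant."""
    def __init__(self, converter: ConversionModule):
        self.converter = converter

    def execute(self, task_id: str, input_data: list) -> dict:
        # Per claim 3: gather the input data the task depends on and
        # carry it in the standard task request.
        standard_request = {"task_id": task_id, "input_data": input_data}
        return self.converter.submit(standard_request)

engine = StandardComputeEngine(ConversionModule(HeterogeneousEngine()))
print(engine.execute("task-1", [1, 2, 3]))
# → {'task_id': 'task-1', 'result': 6, 'ok': True}
```

The design point is that the standard engine never sees the heterogeneous format: supporting a new heterogeneous engine only requires a new conversion module, which is why claim 5 can leave the heterogeneous engine either co-located with the node device or reachable over a network connection.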
CN202210343405.XA 2022-03-31 2022-03-31 Data processing method and device, electronic equipment and storage medium Pending CN114820187A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210343405.XA CN114820187A (en) 2022-03-31 2022-03-31 Data processing method and device, electronic equipment and storage medium
PCT/CN2022/135207 WO2023185044A1 (en) 2022-03-31 2022-11-30 Data processing method and apparatus, and electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN114820187A true CN114820187A (en) 2022-07-29

Family

ID=82531821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210343405.XA Pending CN114820187A (en) 2022-03-31 2022-03-31 Data processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114820187A (en)
WO (1) WO2023185044A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023185044A1 (en) * 2022-03-31 2023-10-05 蚂蚁区块链科技(上海)有限公司 Data processing method and apparatus, and electronic device and storage medium
CN118041928A (en) * 2023-12-28 2024-05-14 上海佰莫瑟企业管理咨询有限公司 Construction method of block chain processing architecture based on multipath chip expansion

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN117348999B (en) * 2023-12-06 2024-02-23 之江实验室 Service execution system and service execution method

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN111047450A (en) * 2020-03-18 2020-04-21 支付宝(杭州)信息技术有限公司 Method and device for calculating down-link privacy of on-link data
CN113496398A (en) * 2020-03-19 2021-10-12 中移(上海)信息通信科技有限公司 Data processing method, device, equipment and medium based on intelligent contract
CN112540969B (en) * 2020-11-26 2023-07-14 南京纯白矩阵科技有限公司 Data migration method of intelligent contracts among heterogeneous block chains
CN114820187A (en) * 2022-03-31 2022-07-29 蚂蚁区块链科技(上海)有限公司 Data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2023185044A1 (en) 2023-10-05

Similar Documents

Publication Publication Date Title
CN114820187A (en) Data processing method and device, electronic equipment and storage medium
EP3975474B1 (en) Methods and apparatuses for chaining service data
CN111768303A (en) Transaction processing method, device, equipment and system
CN108055296B (en) Transaction processing method and device based on micro-service architecture
CN113867600A (en) Development method and device for processing streaming data and computer equipment
Kouicem et al. Dynamic services selection approach for the composition of complex services in the web of objects
CN114936092A (en) Method for executing transaction in block chain and main node of block chain
CN109343970B (en) Application program-based operation method and device, electronic equipment and computer medium
CN113643030B (en) Transaction processing method, device and equipment
CN114896637A (en) Data processing method and device, electronic equipment and storage medium
CN114726858B (en) Data processing method and device, electronic equipment and storage medium
WO2024001032A1 (en) Method for executing transaction in blockchain system, and blockchain system and nodes
CN114785800B (en) Cross-link communication method, device, storage medium and computing equipment
CN115983997A (en) Block chain-based collection management method, block chain node and system
CN116366666A (en) Chain state updating method and block link point in block chain system
CN115098114A (en) Block chain-based distributed application deployment method and device
CN114710350A (en) Allocation method and device for callable resources
CN114780243A (en) Service updating method and device
CN114416311A (en) Method and device for managing message queue based on Go language
CN114860400A (en) Under-link processing method and device for block chain task
EP3916540A1 (en) Compiling monoglot function compositions into a single entity
CN114692185A (en) Data processing method and device
CN114968422A (en) Method and device for automatically executing contracts based on variable state
CN114896636A (en) Data processing method and device, electronic equipment and storage medium
CN114896638A (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination