CN111224822A - Node scheduling method, system, server and storage medium of data flow graph


Info

Publication number
CN111224822A
Authority
CN
China
Prior art keywords
computing
node
data
nodes
unit
Prior art date
Legal status (an assumption, not a legal conclusion; no legal analysis has been performed)
Pending
Application number
CN202010004641.XA
Other languages
Chinese (zh)
Inventor
黄雪辉
熊超
牛昕宇
蔡权雄
Current Assignee (the listed assignees may be inaccurate)
Shenzhen Corerain Technologies Co Ltd
Original Assignee
Shenzhen Corerain Technologies Co Ltd
Application filed by Shenzhen Corerain Technologies Co Ltd
Priority to CN202010004641.XA
Publication of CN111224822A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12: Discovery or management of network topologies


Abstract

The embodiments of the invention disclose a node scheduling method, system, server and storage medium for a data flow graph. The method comprises the following steps: receiving data of a plurality of first computing nodes based on a data flow graph; identifying, among the plurality of first computing nodes, at least two first computing nodes that have a logical relationship, merging those nodes according to a preset rule to obtain a second computing node, and defining the first computing nodes that are not merged as third computing nodes; and sending the data of the second computing node to a first computing unit for computation, and the data of the third computing node to a second computing unit for data processing, the first and second computing units being computing units of different types. By merging nodes that have logical relationships with one another and whose data must be processed on the same device, the embodiments improve node computation efficiency and simplify I/O operations.

Description

Node scheduling method, system, server and storage medium of data flow graph
Technical Field
The embodiments of the present invention relate to node scheduling techniques spanning software and hardware, for example, to a node scheduling method, system, server, and storage medium for a dataflow graph.
Background
A network topology (Network Topology) is the physical layout by which devices are interconnected with a transmission medium; it constitutes a particular physical (real) or logical (virtual) arrangement of the members of a network. Two networks have the same topology if their connection structures are the same, even though their internal physical connections and inter-node distances may differ.
Node computations in a network topology are completed sequentially on the corresponding devices in topological order, and during this process each node must complete a data exchange between the host and the device through Input/Output (I/O) operations. This makes node computation inefficient and I/O operations complex.
Disclosure of Invention
The invention provides a node scheduling method, system, server and storage medium for a data flow graph, aiming to improve node computation efficiency.
In a first aspect, an embodiment of the present invention provides a method for scheduling nodes in a dataflow graph, including:
receiving data based on a plurality of first compute nodes of a dataflow graph;
identifying at least two first computing nodes having a logical relationship among the plurality of first computing nodes, merging the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and defining the first computing nodes that are not merged as third computing nodes;
and sending the data of the second computing node to the first computing unit for computing, and sending the data of the third computing node to the second computing unit for data processing, wherein the first computing unit and the second computing unit are different types of computing units.
Further, the first computing unit comprises a field-programmable gate array (FPGA), a graphics processing unit (GPU), a network interface controller (NIC) or a video graphics adapter (VGA), and the second computing unit comprises a central processing unit (CPU).
Further, the second computing node includes at least one set of a first front node and a first back node, and the computation of the data of the first back node references the computation result of the data of the first front node.
Further, the calculation result of the data of the first front node and the calculation result of the data of the first back node are stored in the on-chip memory through the first bus.
Furthermore, the data of the third computing node is extracted from the off-chip memory through the second bus, and the computing result of the data of the third computing node is stored in the off-chip memory through the second bus.
Further, the method also comprises the following steps:
receiving the calculation result of the first computing unit and the calculation result of the second computing unit to synchronize data.
In a second aspect, an embodiment of the present invention further provides a node scheduling system for a dataflow graph, including:
a receiving module to receive data based on a plurality of first compute nodes of a dataflow graph;
the merging module is configured to identify at least two first computing nodes having a logical relationship among the plurality of first computing nodes, merge the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and define the first computing nodes that are not merged as third computing nodes;
and the sending module is used for sending the data of the second computing node to the first computing unit for computing, and sending the data of the third computing node to the second computing unit for data processing, and the first computing unit and the second computing unit are different types of computing units.
In a third aspect, an embodiment of the present invention further provides a server, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method in any one of the foregoing embodiments when executing the computer program.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method in any one of the foregoing embodiments.
By merging nodes that have logical relationships with one another and whose data must be processed on the same device, the invention solves the prior-art problem that each node must complete a data exchange between the host and the device through I/O, thereby improving node computation efficiency and simplifying I/O operations.
Drawings
Fig. 1 is a flowchart of a node scheduling method of a dataflow graph according to an embodiment of the present invention;
fig. 2 is a flowchart of a node scheduling method of a dataflow graph according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a node scheduling system of a dataflow graph according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
The invention is described below with reference to the accompanying drawings and examples. It is to be understood that the embodiments described herein are illustrative only and are not limiting upon the present invention. For the purpose of illustration, only some structures relevant to the present invention are shown in the drawings, and not all structures are shown.
Some example embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. Further, the order of the steps may be rearranged. The process may be terminated when the various step operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but the orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, a first computing unit may be referred to as a second computing unit, and similarly, a second computing unit may be referred to as a first computing unit, without departing from the scope of the present invention. Both the first computing unit and the second computing unit are computing units, but the first computing unit and the second computing unit are not the same computing unit. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless expressly defined otherwise.
Example one
Fig. 1 is a flowchart of a node scheduling method of a data flow graph according to an embodiment of the present invention. This embodiment is applicable to node computation in a network topology, and the method may be executed by a host. As shown in fig. 1, the node scheduling method of a data flow graph according to this embodiment includes S110 to S130.
And S110, receiving data of a plurality of first computing nodes based on the data flow graph.
In one embodiment, a dataflow graph is a tool used in structured analysis methods that graphically depicts the flow and processing of data through a system; it is a functional model, since it reflects only the logical functions the system must perform. In the structured development approach, the dataflow graph is a product of the requirements analysis phase. A dataflow graph depicts the flow of information and the transformations the data undergoes as it moves from input to output.
The dataflow graph graphically depicts, from the perspective of data transfer and processing, how a data flow moves from input to output. Dataflow graphs have two typical structures. The first is the transform type, in which the work described can be expressed as input, main processing and output, so the data flows along a pipeline. The second is the transaction type, in which the dataflow graph is bundle-shaped: a bundle of data flows enters or leaves in parallel, and several transaction requests may be processed at the same time. In this embodiment, a node is the terminal of a branch in the network or a common interconnection point of two or more branches.
In one embodiment, a compute node is used to perform an operation or function, representing an operation or related operation on data, such as may be understood as a node in a neural network.
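To make this notion concrete, a compute node can be sketched as an operation applied to the outputs of its upstream nodes. This is an illustrative model only; the names and structure below are assumptions of this sketch, not the patent's data format.

```python
import operator

# Hypothetical representation of compute nodes in a dataflow graph:
# each node names an operation and the upstream nodes it consumes.
nodes = {
    "a":   {"op": lambda: 2,    "inputs": []},
    "b":   {"op": lambda: 3,    "inputs": []},
    "sum": {"op": operator.add, "inputs": ["a", "b"]},
}

def evaluate(name, nodes):
    """Evaluate a node by first evaluating its upstream dependencies."""
    node = nodes[name]
    args = [evaluate(dep, nodes) for dep in node["inputs"]]
    return node["op"](*args)

result = evaluate("sum", nodes)  # computes 2 + 3
```

Evaluating "sum" recursively pulls its inputs through the graph, mirroring how data flows from input to output in a dataflow graph.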
S120, identifying at least two first computing nodes having a logical relationship among the plurality of first computing nodes, merging the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and defining the first computing nodes that are not merged as third computing nodes.
In this embodiment, "at least two first computing nodes having a logical relationship" means that a subsequent first computing node, when performing its data computation, receives the data processing result of a previous first computing node. The preset rule provides that when a plurality of first computing nodes all have logical relationships with one another and need to be sent to the same device for data processing, these first computing nodes can first be merged to obtain a second computing node. The second computing node is equivalent to a screened set of first computing nodes satisfying these conditions: within it, the output data of a previous first computing node that has completed its computation serves as the input data of the subsequent first computing node. This reduces the data processing time otherwise spent on data exchange between each first computing node and the host when processing the second computing node, and so improves data processing efficiency. By merging the first computing nodes having logical relationships, the original plurality of first computing nodes is divided into merged second computing nodes and unmerged third computing nodes, which are sent to different processing devices for data processing.
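The merging rule described above can be sketched as follows. This is a hedged illustration of one possible preset rule, assuming nodes arrive in topological order; the field names and chain-detection logic are assumptions of this sketch, not the patent's implementation.

```python
# First computing nodes that form a producer -> consumer chain AND target
# the same device are merged into one "second computing node"; the rest
# become "third computing nodes".  Nodes are assumed to be listed in
# topological order.
def merge_nodes(nodes):
    merged, unmerged, used = [], [], set()
    for n in nodes:
        if n["name"] in used:
            continue  # already absorbed into an earlier chain
        chain, cur = [n], n
        while True:
            # extend the chain while the next node consumes only the
            # current node's output and targets the same device
            nxt = next((m for m in nodes
                        if m["inputs"] == [cur["name"]]
                        and m["device"] == cur["device"]
                        and m["name"] not in used), None)
            if nxt is None:
                break
            chain.append(nxt)
            cur = nxt
        if len(chain) >= 2:                   # at least two related nodes
            used.update(c["name"] for c in chain)
            merged.append(chain)              # one second computing node
        else:
            unmerged.append(n)                # a third computing node
    return merged, unmerged

nodes = [
    {"name": "conv1", "inputs": [],        "device": "fpga"},
    {"name": "relu1", "inputs": ["conv1"], "device": "fpga"},
    {"name": "sum1",  "inputs": ["relu1"], "device": "cpu"},
]
second, third = merge_nodes(nodes)
```

Here conv1 and relu1 are chained on the same device and merge into one second computing node, while sum1, targeting a different device, remains a third computing node.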
S130, sending the data of the second computing node to the first computing unit for computation, and sending the data of the third computing node to the second computing unit for data processing, wherein the first computing unit and the second computing unit are different types of computing units.

In this embodiment, the second computing node and the third computing node are obtained through S120; the first computing unit performs data processing on the data of the second computing node, and the second computing unit performs data processing on the data of the third computing node. The first computing unit may include one of a field-programmable gate array (FPGA), a graphics processing unit (GPU), a network interface controller (NIC) and a video graphics adapter (VGA), and the second computing unit may include a central processing unit (CPU).
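The dispatch in S130 might then look like the following sketch, where "first_unit" and "second_unit" are placeholder labels (for, e.g., an FPGA and a CPU), not the patent's terminology for concrete hardware.

```python
# Each merged chain is submitted to the first computing unit as a single
# task, and each unmerged node to the second computing unit.
def dispatch(second_nodes, third_nodes):
    tasks = []
    for chain in second_nodes:
        # one submission per chain keeps intermediate results on the
        # device instead of round-tripping through host I/O
        tasks.append(("first_unit", [n["name"] for n in chain]))
    for node in third_nodes:
        tasks.append(("second_unit", [node["name"]]))
    return tasks

tasks = dispatch(
    [[{"name": "conv1"}, {"name": "relu1"}]],  # one second computing node
    [{"name": "sum1"}],                        # one third computing node
)
```

The chain travels as one unit, so only its first input and final output cross the host-device boundary.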
By merging nodes that have logical relationships with one another and process data on the same device, the first embodiment solves the problem in the related art that each node must complete a data exchange between the host and the device through I/O, thereby improving node computation efficiency and simplifying I/O operations.
Example two
The second embodiment of the invention is an optional implementation based on the first embodiment. Fig. 2 is a flowchart of a node scheduling method of a data flow diagram according to the second embodiment of the present invention. As shown in fig. 2, the node scheduling method of the dataflow graph includes S210 to S240.
S210, receiving data of a plurality of first computing nodes based on the data flow graph.
In this embodiment, a dataflow graph is a tool used in structured analysis methods that graphically depicts the flow and processing of data in a system; it is a functional model, since it reflects only the logical functions the system must perform. In the structured development approach, the dataflow graph is a product of the requirements analysis phase. A dataflow graph depicts, from the perspective of data transfer and processing, the flow of information and the transformations the data undergoes as it moves from input to output. Dataflow graphs have two typical structures. The first is the transform type, in which the work described can be expressed as input, main processing and output, so the data flows along a pipeline. The second is the transaction type, in which the dataflow graph is bundle-shaped: a bundle of data flows enters or leaves in parallel, and several transaction requests may be processed at the same time. In this embodiment, a node is the terminal of a branch in the network or a common interconnection point of two or more branches.
S220, identifying at least two first computing nodes having a logical relationship among the plurality of first computing nodes, merging the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and defining the first computing nodes that are not merged as third computing nodes.
In this embodiment, "at least two first computing nodes having a logical relationship" means that a subsequent first computing node, when performing its data computation, needs to first receive the data processing result of a previous first computing node. The preset rule provides that when a plurality of first computing nodes all have logical relationships with one another and need to be sent to the same device for data processing, these first computing nodes can first be merged to obtain a second computing node. The second computing node is equivalent to a screened set of first computing nodes satisfying these conditions: within it, the output data of a previous first computing node that has completed its computation serves as the input data of the subsequent first computing node. This reduces the data processing time otherwise spent on data exchange between each first computing node and the host when processing the second computing node, and so improves data processing efficiency. By merging the first computing nodes having logical relationships, the original plurality of first computing nodes is divided into merged second computing nodes and unmerged third computing nodes, which are sent to different processing devices for data processing.
And S230, sending the data of the second computing node to the first computing unit for computing, and sending the data of the third computing node to the second computing unit for data processing, wherein the first computing unit and the second computing unit are different types of computing units.
In this embodiment, the second computing node and the third computing node are obtained through S220; the first computing unit is configured to perform data processing on the data of the second computing node, and the second computing unit is configured to perform data processing on the data of the third computing node.
In this embodiment, the first computing unit includes one of an FPGA, a GPU, a NIC, and a VGA, and the second computing unit includes a CPU.
In this embodiment, FPGA refers to a field-programmable gate array, GPU refers to a graphics processing unit, NIC refers to a network interface controller, VGA refers to a video graphics adapter, and CPU refers to a central processing unit.
In this embodiment, the second computing node includes at least one set of a first front node and a first back node, and the computation of the data of the first back node refers to the computation result of the data of the first front node.
In this embodiment, the calculation result of the data of the first front node and the calculation result of the data of the first back node are stored in the on-chip memory through the first bus.
In this embodiment, the data of the third computing node is extracted from the off-chip memory through the second bus, and the computing result of the data of the third computing node is stored in the off-chip memory through the second bus.
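A toy model of the two memory paths just described, with all names assumed for illustration: results inside a merged chain stay in on-chip memory reached over the first bus, while a third node's data round-trips through off-chip memory over the second bus.

```python
# Minimal stand-in for a memory reachable over a given bus.
class Memory:
    def __init__(self, label):
        self.label, self.store = label, {}
    def write(self, key, value):
        self.store[key] = value
    def read(self, key):
        return self.store[key]

on_chip = Memory("on-chip memory via first bus")
off_chip = Memory("off-chip memory via second bus")

# within a merged chain, the front node's result is kept on-chip
# so the back node can consume it without host I/O
on_chip.write("front_result", 42)
back_input = on_chip.read("front_result")

# an unmerged third node reads from and writes back to off-chip memory
off_chip.write("third_result", back_input + 1)
```

The contrast is the point: the chain-internal value never leaves the device, while the third node's result crosses the second bus in both directions.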
And S240, receiving the calculation result of the first calculation unit and the calculation result of the second calculation unit to synchronize data.
In this embodiment, in S230 the data of the second computing node is sent to the first computing unit for data processing to obtain a second computation result, and the data of the third computing node is sent to the second computing unit for data processing to obtain a third computation result. After the second and third computation results are obtained, the computation results of the first and second computing units can be sent to the host, so that the data on the host and on computing modules or devices such as the first and second computing units are synchronized.
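The synchronization step can be sketched as the host merging both units' results into its own state; the dictionary-based representation below is an assumption of this sketch, not the patent's data layout.

```python
# The host gathers the first unit's result and the second unit's
# result into a single, consistent host-side state (S240).
def synchronize(host_state, first_unit_result, second_unit_result):
    host_state.update(first_unit_result)
    host_state.update(second_unit_result)
    return host_state

state = synchronize(
    {},                        # host state before synchronization
    {"merged_chain": 42},      # second computation result
    {"sum1": 43},              # third computation result
)
```

After this single exchange, host and devices agree on all results, replacing the per-node I/O of the prior art.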
By merging nodes that need to perform data processing on the same device and have logical relationships with one another, and by synchronizing the data of the host and the devices after the calculation is finished, the second embodiment solves the problem in the related art that each node must complete a data exchange between the host and the device through I/O during calculation, improving node calculation efficiency and simplifying I/O operations.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a node scheduling system of a data flow diagram according to a third embodiment of the present invention. As shown in fig. 3, the node scheduling system 300 of the dataflow graph of this embodiment includes: a receiving module 310, a combining module 320, and a transmitting module 330.
A receiving module 310 for receiving data based on a plurality of first compute nodes of a dataflow graph;
a merging module 320, configured to determine at least two first computing nodes having a logical relationship from the plurality of first computing nodes, merge the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and define, as a third computing node, a first computing node that is not merged in the plurality of first computing nodes;
The sending module 330 is configured to send the data of the second computing node to the first computing unit for computation, and to send the data of the third computing node to the second computing unit for data processing, where the first computing unit and the second computing unit are different types of computing units. In this embodiment, the first computing unit is one of an FPGA, a GPU, a NIC and a VGA, and the second computing unit is a CPU.
In this embodiment, the second computing node includes at least one set of a first front node and a first back node, and the computation of the data of the first back node refers to the computation result of the data of the first front node.
In this embodiment, the calculation result of the data of the first front node and the calculation result of the data of the first back node are stored in the on-chip memory through the first bus.
In this embodiment, the data of the third computing node is extracted from the off-chip memory through the second bus, and the computing result of the data of the third computing node is stored in the off-chip memory through the second bus.
In this embodiment, the node scheduling system 300 of the dataflow graph further includes: a synchronization module 340.
A synchronization module 340, configured to receive the calculation result of the first calculation unit and the calculation result of the second calculation unit to synchronize data.
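A minimal sketch of how the four modules above might compose, with all class, parameter, and stub names assumed for illustration rather than taken from the patent:

```python
# Composes the receiving, merging, sending and synchronization modules
# of the node scheduling system into one pipeline.
class NodeSchedulingSystem:
    def __init__(self, receiver, merger, sender, synchronizer):
        self.receiver, self.merger = receiver, merger
        self.sender, self.synchronizer = sender, synchronizer

    def run(self, raw):
        nodes = self.receiver(raw)              # receiving module
        second, third = self.merger(nodes)      # merging module
        results = self.sender(second, third)    # sending module
        return self.synchronizer(results)       # synchronization module

system = NodeSchedulingSystem(
    receiver=lambda raw: raw,                       # pass-through stub
    merger=lambda ns: (ns[:1], ns[1:]),             # trivial split stub
    sender=lambda s, t: {"second": s, "third": t},  # routing stub
    synchronizer=lambda r: r,                       # identity stub
)
out = system.run(["n1", "n2"])
```

Each stub stands in for the corresponding module's real behavior; the composition order mirrors S210 through S240.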
The node scheduling system of the dataflow graph provided by the third embodiment of the invention can execute the method provided by any embodiment of the invention, and has corresponding functional modules and effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention, as shown in fig. 4, the server includes a processor 410, a memory 420, an input device 430, and an output device 440; the number of the processors 410 in the server may be one or more, and one processor 410 is taken as an example in fig. 4; the processor 410, the memory 420, the input device 430 and the output device 440 in the server may be connected by a bus or other means, and the bus connection is exemplified in fig. 4.
The memory 420, which is a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods provided in the embodiments of the present invention (e.g., the receiving module, merging module, sending module, and synchronization module in the node scheduling system of a dataflow graph). The processor 410 executes the software programs, instructions and modules stored in the memory 420, thereby running the various functional applications of the server and performing data processing, that is, implementing the above-described method.
The method comprises the following steps:
receiving data based on a plurality of first compute nodes of a dataflow graph;
identifying at least two first computing nodes having a logical relationship among the plurality of first computing nodes, merging the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and defining the first computing nodes that are not merged as third computing nodes;
and sending the data of the second computing node to the first computing unit for computing, and sending the data of the third computing node to the second computing unit for data processing, wherein the first computing unit and the second computing unit are different types of computing units.
The memory 420 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 420 may include memory located remotely from processor 410, which may be connected to a server over a network. Examples of such networks include the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the server. The output device 440 may include a display device such as a display screen.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the method provided by the foregoing embodiment, where the method includes:
receiving data based on a plurality of first compute nodes of a dataflow graph;
identifying at least two first computing nodes having a logical relationship among the plurality of first computing nodes, merging the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and defining the first computing nodes that are not merged as third computing nodes;
and sending the data of the second computing node to the first computing unit for computing, and sending the data of the third computing node to the second computing unit for data processing, wherein the first computing unit and the second computing unit are different types of computing units.
The storage medium containing computer-executable instructions provided by the embodiments of the present invention is not limited to the above method operations, and may also execute the method provided by any embodiment of the present invention.
From the above description of the embodiments, the present invention can be implemented by software together with a general-purpose hardware platform, or by hardware alone. The technical solution, or the part of it contributing to the related art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk, and includes a plurality of instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In the embodiment of the node scheduling system of the data flow graph, the units and modules are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized. In addition, the names of the functional units are only for convenience of distinguishing them from one another and do not limit the protection scope of the present invention.

Claims (9)

1. A method for scheduling nodes of a data flow graph is characterized by comprising the following steps:
receiving data based on a plurality of first compute nodes of a dataflow graph;
identifying at least two first computing nodes with a logical relationship from the plurality of first computing nodes, combining the at least two first computing nodes with the logical relationship according to a preset rule to obtain a second computing node, and defining the first computing nodes which are not combined in the plurality of first computing nodes as third computing nodes;
and sending the data of the second computing node to a first computing unit for computing, and sending the data of the third computing node to a second computing unit for data processing, wherein the first computing unit and the second computing unit are different types of computing units.
2. The method of claim 1, wherein the first computing unit comprises a field-programmable gate array (FPGA), a graphics processing unit (GPU), a network interface controller (NIC) or a video graphics adapter (VGA), and the second computing unit comprises a central processing unit (CPU).
3. The method of claim 1, wherein the second computing node comprises at least one group of a first front node and a first back node, and wherein the computation of the data of the first back node references the computation result of the data of the first front node.
4. The method of claim 3, wherein the computation results of the data of the first front node and of the data of the first back node are stored in an on-chip memory through a first bus.
5. The method of claim 4, wherein the data of the third computing node is fetched from an off-chip memory through a second bus, and the computation result of the data of the third computing node is stored in the off-chip memory through the second bus.
6. The method of claim 5, further comprising:
receiving the calculation result of the first calculation unit and the calculation result of the second calculation unit to synchronize data.
7. A system for scheduling nodes of a dataflow graph, comprising:
a receiving module to receive data based on a plurality of first compute nodes of a dataflow graph;
a merging module, configured to determine at least two first computing nodes having a logical relationship from the plurality of first computing nodes, merge the at least two first computing nodes having the logical relationship according to a preset rule to obtain a second computing node, and define, as a third computing node, a first computing node that is not merged from the plurality of first computing nodes;
and the sending module is used for sending the data of the second computing node to a first computing unit for computing, and sending the data of the third computing node to a second computing unit for data processing, wherein the first computing unit and the second computing unit are different types of computing units.
8. A server comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1-6 when executing the computer program.
9. A computer-readable storage medium, having stored thereon a computer program, characterized in that the computer program, when being executed by a processor, carries out the method of any one of claims 1-6.
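Claims 1 to 6 describe the scheduling flow in prose. As an illustration only, and not the patented implementation, the partition-and-dispatch logic might be sketched as follows; the node names, the "logical relationship" test, the merge rule, and the unit labels are all hypothetical placeholders:

```python
# Illustrative sketch of the scheduling flow in claims 1-6.
# The merge criterion and dispatch targets are assumptions, not
# the concrete rule defined by the patent.

def schedule(nodes, edges, mergeable):
    """Partition first computing nodes into merged second nodes and
    unmerged third nodes, then dispatch them to two unit types.

    nodes     : list of node names (the "first computing nodes")
    edges     : list of (front, back) pairs forming the data flow graph
    mergeable : set of nodes eligible for merging (the "preset rule")
    """
    second_nodes = []  # merged front/back groups -> first computing unit
    merged = set()
    # Merge each front/back pair whose endpoints satisfy the preset
    # rule (a logical relationship: back consumes the front's result).
    for front, back in edges:
        if front in mergeable and back in mergeable:
            second_nodes.append((front, back))
            merged |= {front, back}
    third_nodes = [n for n in nodes if n not in merged]

    dispatch = {}
    for group in second_nodes:
        # Second nodes go to the first computing unit (e.g. FPGA/GPU);
        # intermediate results stay in on-chip memory (claim 4).
        dispatch[group] = "first_unit"
    for n in third_nodes:
        # Third nodes go to the second computing unit (CPU); their data
        # moves through off-chip memory over a second bus (claim 5).
        dispatch[n] = "second_unit"
    return second_nodes, third_nodes, dispatch

nodes = ["conv", "relu", "reshape"]
edges = [("conv", "relu")]
second, third, dispatch = schedule(nodes, edges, {"conv", "relu"})
print(second)   # [('conv', 'relu')]
print(third)    # ['reshape']
```

In this sketch the merged pair would be executed as one fused unit on the accelerator while the leftover node falls back to the CPU, which is the division of labour the claims describe.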
CN202010004641.XA 2020-01-03 2020-01-03 Node scheduling method, system, server and storage medium of data flow graph Pending CN111224822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010004641.XA CN111224822A (en) 2020-01-03 2020-01-03 Node scheduling method, system, server and storage medium of data flow graph

Publications (1)

Publication Number Publication Date
CN111224822A true CN111224822A (en) 2020-06-02

Family

ID=70829333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010004641.XA Pending CN111224822A (en) 2020-01-03 2020-01-03 Node scheduling method, system, server and storage medium of data flow graph

Country Status (1)

Country Link
CN (1) CN111224822A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298649A (en) * 2020-07-01 2021-08-24 阿里巴巴集团控股有限公司 Transaction data processing method and device, and data processing method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107391541A (en) * 2017-05-16 2017-11-24 阿里巴巴集团控股有限公司 A kind of real time data merging method and device
CN109376137A (en) * 2018-12-17 2019-02-22 中国人民解放军战略支援部队信息工程大学 A kind of document handling method and device
CN110019207A (en) * 2017-11-02 2019-07-16 阿里巴巴集团控股有限公司 Data processing method and device and script display methods and device
US20190230035A1 (en) * 2018-01-25 2019-07-25 Cisco Technology, Inc. Network flow stitching using middle box flow stitching

Similar Documents

Publication Publication Date Title
JP7433373B2 (en) Distributed training method, device, electronic device, storage medium and computer program for deep learning models
Montresor et al. Distributed k-core decomposition
CN101236511B (en) Method and system for optimizing global reduction treatment
CN112149808B (en) Method, system and medium for expanding stand-alone graph neural network training to distributed training
CN107193672B (en) Cross-block asynchronous contract calling system
CN111859832B (en) Chip simulation verification method and device and related equipment
Verbeek et al. Hunting deadlocks efficiently in microarchitectural models of communication fabrics
CN112100450A (en) Graph calculation data segmentation method, terminal device and storage medium
CN114356578A (en) Parallel computing method, device, equipment and medium for natural language processing model
CN111224822A (en) Node scheduling method, system, server and storage medium of data flow graph
CN111984833B (en) High-performance graph mining method and system based on GPU
CN109446146B (en) State transition sequence generation method of application layer communication protocol
CN113703955A (en) Data synchronization method in computing system and computing node
CN113691403B (en) Topology node configuration method, related device and computer program product
CN115408568B (en) Method for fusing operators of neural network and related products
CN111309265B (en) Node storage method, system, server and storage medium based on neural network
CN111030863B (en) Node topology information determination method, device, equipment and storage medium
Braberman et al. Issues in distributed timed model checking: Building Zeus
CN112866041B (en) Adaptive network system training method
CN117744553B (en) Method, device, equipment and storage medium for modeling field programmable gate array
Ganai et al. Efficient distributed SAT and SAT-based distributed bounded model checking
WO2022135599A1 (en) Device, board and method for merging branch structures, and readable storage medium
WO2022222944A1 (en) Method and apparatus for adaptating to quantum computing platform, and quantum computer operating system
CN112685206B (en) Interactive data correctness judging method and device, electronic equipment and medium
CN110489607B (en) Isomorphic subgraph query method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200602