WO2021129645A1 - Data parallelization processing method, system, device, and storage medium - Google Patents

Data parallelization processing method, system, device, and storage medium

Info

Publication number
WO2021129645A1
WO2021129645A1 · PCT/CN2020/138539 · CN2020138539W
Authority
WO
WIPO (PCT)
Prior art keywords
node
nodes
tensor
input
parallel
Prior art date
Application number
PCT/CN2020/138539
Other languages
English (en)
French (fr)
Inventor
马恺
熊超
蔡权雄
牛昕宇
Original Assignee
深圳鲲云信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳鲲云信息科技有限公司
Priority to US17/789,280 (US20230035910A1)
Publication of WO2021129645A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/7821Tightly coupled to memory, e.g. computational memory, smart memory, processor in memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3867Concurrent instruction execution, e.g. pipeline or look ahead using instruction pipelines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the embodiments of the present application relate to the technical field of network topology graph reasoning, for example, to a data parallel processing method, system, device, and storage medium.
  • Deep learning networks are usually trained by algorithms.
  • algorithm developers tend to use existing public deep learning frameworks for model training, and most public deep learning frameworks are designed for computing devices such as the Central Processing Unit/Graphics Processing Unit (CPU/GPU).
  • the CPU/GPU adopts the traditional instruction set architecture, which has relatively low architectural efficiency and relatively high flexibility.
  • the requirements for computing power are getting higher and higher.
  • the architectural efficiency defects of the instruction set in the related art can no longer meet the requirements of application scenarios.
  • by comparison, the data flow architecture is more efficient and, from the perspective of the technical route, is better suited to the development trend of deep learning technology.
  • however, the data flow chip is only suitable for deep learning operators, and normal use still requires the CPU to assist with data transmission and processing.
  • during operation, the processed data is moved from memory to the on-chip memory, the result is fetched back after the computing card finishes running, and post-processing is performed to complete the entire graph inference process.
  • This application provides a data parallelization processing method, system, device, and storage medium, so that graph inference for multiple inputs can run in an overlapped manner, thereby making full use of the resources of the CPU and the computing card.
  • an embodiment of the present application provides a data parallelization processing method, including:
  • At least three first computing nodes having a logical relationship are identified from a plurality of first computing nodes, and the at least three first computing nodes having a logical relationship are defined as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes.
  • the second output tensors of the at least two first rear nodes are calculated respectively to obtain the first calculation result of the first parallel node group.
  • the embodiment of the present application further provides a data parallelization processing system, including:
  • the screening module is configured to identify at least three first computing nodes having a logical relationship from a plurality of first computing nodes and define the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
  • the first obtaining module is configured to obtain the first input data model of the first front node and generate the first input tensor of the first front node;
  • the first calculation module is configured to calculate the first output tensor of the first front node according to the first input data model and the first input tensor;
  • the second acquisition module is configured to acquire the second input data models of at least two first rear nodes and use the first output tensor as the second input tensor;
  • the second calculation module is configured to calculate the second output tensors of at least two first rear nodes according to the second input data model and the second input tensor to obtain the first calculation result of the first parallel node group.
  • an embodiment of the present application further provides a device, and the device includes:
  • one or more processors;
  • a storage apparatus, configured to store one or more programs,
  • wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the data parallelization processing method in any one of the foregoing embodiments.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the data parallelization processing method in any one of the foregoing embodiments is implemented.
  • FIG. 1 is a flowchart of a data parallelization processing method provided by an embodiment of the application
  • FIG. 3 is a schematic structural diagram of a data parallel processing system provided by an embodiment of the application.
  • FIG. 4 is a schematic structural diagram of a device provided by an embodiment of the application.
  • Some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes one or more steps as sequential processing, many of the steps can be implemented in parallel, concurrently, or simultaneously. In addition, the order of one or more steps can be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing can correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
  • the terms “first”, “second”, etc. may be used herein to describe various directions, actions, steps or elements, etc., but these directions, actions, steps or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element.
  • the first acquisition module may be referred to as the second acquisition module, and similarly, the second acquisition module may be referred to as the first acquisition module. Both the first acquisition module and the second acquisition module are acquisition modules, but they are not the same acquisition module.
  • the terms “first”, “second”, etc. cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features.
  • “plurality” means at least two, such as two, three, etc., unless specifically defined otherwise.
  • FIG. 1 is a flowchart of a data parallelization processing method provided in Embodiment 1 of this application. This embodiment is applicable to multiple graph reasoning situations with logical relationships, and the method can be executed by the host. As shown in Fig. 1, a data parallelization processing method includes S110 to S150.
  • At least three first computing nodes having a logical relationship means that the three first computing nodes include at least one first front node located upstream in the logical relationship and at least two first rear nodes that are directly associated with the first front node and located downstream of it in the logical relationship.
  • when performing data calculation, a first rear node first receives the data processing result of the first front node as its input data; that is, after the calculation of the first front node is completed, the obtained calculation result of the first front node is transmitted to the at least two first rear nodes directly associated with the first front node as the input data of those first rear nodes.
  • the neural network generally has multiple layers, that is, multiple computing nodes connected in logical order.
  • the first front node may refer to the first layer to be calculated in the neural network model.
  • the first rear node may refer to the layer of the neural network model whose calculation starts immediately after the calculation of the first front node.
  • the first front node may also refer to the n-th layer calculated by the neural network model, and the first rear node may refer to the (n+1)-th layer calculated by the neural network model.
  • the neural network types in this embodiment include feedforward neural networks, radial basis neural networks, deep feedforward neural networks, recurrent neural networks, etc.; the neural network type is not restricted here.
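As an illustrative sketch only (the Node class and its fields below are assumptions for the example, not part of the original disclosure), a first parallel node group can be located by scanning the computation graph for a node that has at least two direct successors:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    successors: list = field(default_factory=list)  # directly connected downstream nodes

def find_parallel_node_group(nodes):
    """Return (front_node, rear_nodes) for the first node that has at least
    two direct successors, i.e. a minimal parallel node group."""
    for node in nodes:
        if len(node.successors) >= 2:
            return node, list(node.successors)
    return None

# Example: layer n feeds two branches (layers n+1a and n+1b).
conv = Node("layer_n")
branch_a, branch_b = Node("branch_a"), Node("branch_b")
conv.successors = [branch_a, branch_b]

group = find_parallel_node_group([conv, branch_a, branch_b])
if group is not None:
    front, rears = group
    print(front.name, [r.name for r in rears])  # layer_n ['branch_a', 'branch_b']
```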
  • the input data model refers to a data model used for data input, such as a calculation formula or calculation model for each layer of a neural network
  • the input tensor refers to a vector with input data
  • S130 Calculate the first output tensor of the first front node according to the first input data model and the first input tensor.
  • the output tensor refers to a vector with output data. After the first input data model and the first input tensor of the first front node are obtained through S120, the first front node is calculated to obtain the data calculation result of the first front node, and this data calculation result is the first output tensor in this embodiment.
  • after the calculation of the first front node is completed, the second input data model of each first rear node is acquired, and the first output tensor is used as the input data of the first rear nodes, thereby obtaining the second input tensor.
  • S150 Calculate the second output tensors of the at least two first rear nodes according to the second input data model and the second input tensor to obtain the first calculation result of the first parallel node group.
  • based on the second input data model and the second input tensor obtained in S140, each first rear node is calculated separately according to the number of first rear nodes, so as to obtain the data calculation result of each first rear node, and the second output tensor is generated according to the data calculation result of each first rear node, so as to obtain the total calculation result of the first parallel node group, that is, the first calculation result.
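The following sketch illustrates one possible host-side reading of S120 to S150 (illustrative only: the per-node callables stand in for whatever each node's input data model actually computes, and a thread pool is just one way the rear nodes could be evaluated concurrently):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_node_group(front_model, rear_models, first_input_tensor):
    """Compute the front node, then feed its output tensor to every rear node
    and compute the rear nodes concurrently (sketch of S120-S150)."""
    # S120/S130: first input tensor -> first output tensor of the front node
    first_output = front_model(first_input_tensor)

    # S140: the first output tensor becomes the shared second input tensor
    second_input = first_output

    # S150: compute each rear node on the second input tensor
    with ThreadPoolExecutor(max_workers=len(rear_models)) as pool:
        second_outputs = list(pool.map(lambda model: model(second_input), rear_models))
    return second_outputs  # the first calculation result of the group

# Toy models: plain callables standing in for per-layer computations.
result = run_parallel_node_group(
    front_model=lambda x: [v * 2 for v in x],
    rear_models=[lambda x: sum(x), lambda x: max(x)],
    first_input_tensor=[1, 2, 3],
)
print(result)  # [12, 6]
```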
  • by processing multiple nodes having a logical relationship in parallel, the first embodiment of the present application solves the problem in the related art that the computing resources of the CPU and the computing card cannot be fully utilized, and achieves full utilization of those resources through the overlapped running of graph inference over multiple inputs.
  • FIG. 2 is a flowchart of a data parallelization processing method provided in the second embodiment of the application. As shown in FIG. 2, the data parallelization processing method of this embodiment includes S201 to S216.
  • S201 Determine whether the at least three first computing nodes include a first front node and at least two first rear nodes that have a logical relationship.
  • At least three first computing nodes are selected from these first computing nodes, and it is determined whether the three computing nodes have a logical relationship, that is, whether there is at least one first front node and at least two first rear nodes.
  • Take the calculation of neural network computing nodes as an example.
  • the neural network generally has multiple layers, that is, multiple computing nodes connected in logical order.
  • the first front node may refer to the first layer to be calculated in the neural network model.
  • the first rear node may refer to the layer of the neural network model whose calculation starts immediately after the calculation of the first front node.
  • the first front node may also refer to the n-th layer calculated by the neural network model, and the first rear node may refer to the (n+1)-th layer calculated by the neural network model.
  • This embodiment is not limited.
  • the neural network types in this embodiment include feedforward neural networks, radial basis neural networks, deep feedforward neural networks, recurrent neural networks, etc. There is no restriction on the first neural network type here.
  • if the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship, the at least three first computing nodes are defined as a first parallel node group.
  • when the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship, that is, when the three first computing nodes constitute a minimum logical relationship group, the three first computing nodes can be defined as the first parallel node group.
  • S203 Determine the reference count of the first front node according to the number of the at least two first rear nodes.
  • the reference count of the first front node may be determined according to the number of first rear nodes that are located downstream of the first front node and have a direct logical relationship with it. For example, when three first rear nodes are directly associated downstream of a first front node in the logical relationship, the calculation result obtained by the first front node, that is, the first output tensor, is directly transmitted to the three first rear nodes, and the three first rear nodes receive the first output tensor in parallel as their own input data, that is, the second input tensor. In this case the first front node is referenced three times, so its reference count before the calculation is set to three.
  • before the calculation is performed, the first front node and the first rear nodes in the first parallel node group may be locked.
  • the purpose of locking is to ensure that the nodes being calculated are all different nodes, so as to prevent the memory waste caused by repeatedly calculating the same node.
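A minimal sketch of such locking is given below (illustrative only; the LockedNode wrapper and the group-level try-lock are assumptions rather than the disclosed implementation):

```python
import threading

class LockedNode:
    """Node wrapper whose lock is held while the node belongs to a group being computed."""
    def __init__(self, name, model):
        self.name = name
        self.model = model
        self.lock = threading.Lock()

def lock_group(front, rears):
    """Try to lock every node of a parallel node group; release everything and
    report failure if any node is already being computed elsewhere, so the same
    node is never computed twice."""
    acquired = []
    for node in [front, *rears]:
        if node.lock.acquire(blocking=False):
            acquired.append(node)
        else:
            for held in acquired:
                held.lock.release()
            return False
    return True

def unlock_group(front, rears):
    for node in [front, *rears]:
        node.lock.release()
```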
  • S205 Obtain a first input data model of the first front node and generate a first input tensor of the first front node.
  • the input data model refers to a data model used for data input, such as a calculation formula or calculation model for each layer of a neural network
  • the input tensor refers to a vector with input data
  • S206 Calculate the first output tensor of the first front node according to the first input data model and the first input tensor.
  • the output tensor refers to a vector with output data. After the first input data model and the first input tensor of the first front node are obtained through S205, the first front node is calculated to obtain the data calculation result of the first front node, and this data calculation result is the first output tensor in this embodiment.
  • after the calculation of the first front node is completed, the second input data model of each first rear node is acquired, and the first output tensor is received as the input data of the first rear nodes, thereby obtaining the second input tensor.
  • S208 Calculate the second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor to obtain the first calculation result of the first parallel node group.
  • based on the second input data model and the second input tensor obtained in S207, each first rear node is calculated separately according to the number of first rear nodes, so as to obtain the data calculation result of each first rear node, and the second output tensor is generated according to the data calculation result of each first rear node, so as to obtain the total calculation result of the first parallel node group, that is, the first calculation result.
  • after each first rear node completes its calculation, the reference count of the first front node is decremented by one.
  • for example, when the first parallel node group includes a first front node and two first rear nodes, according to S203 the reference count of the first front node is two. After the calculation of the first front node is completed, the first output tensor is obtained and transmitted to the two first rear nodes respectively.
  • when a first rear node completes its calculation, it can generate a calculation-completed feedback instruction and send it to the reference counter of the first front node. After the counter receives the feedback instruction, the reference count in the counter is decremented by one, thereby updating the reference relationship of the first front node in the first parallel node group.
  • the reference count of the first front node is updated through S210, and whether the reference count of the first front node is zero is determined in real time, so that it can be detected whether all the first rear nodes having a direct logical relationship with the first front node have completed their calculations.
  • when the reference count of the first front node is zero, that is, when all the downstream first rear nodes having a direct logical relationship with the first front node have completed their calculations and obtained the second output tensors, the first output tensor stored in the cache can be deleted and stored in the off-chip memory instead.
  • the first input data model of the first front node stored in the cache is also deleted, thereby saving storage resources.
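Illustratively (the FrontNodeState class and the list standing in for the off-chip memory are assumptions made for this example), the reference-count bookkeeping of S210 to S213 could look like this:

```python
import threading

class FrontNodeState:
    """Reference counting for a front node: the count starts at the number of
    directly attached rear nodes and is decremented once per completed rear
    node; at zero the cached on-chip data can be evicted."""
    def __init__(self, num_rear_nodes, output_tensor, input_data_model):
        self.refcount = num_rear_nodes
        self.output_tensor = output_tensor        # stands in for the on-chip copy
        self.input_data_model = input_data_model  # cached computation graph of the node
        self._lock = threading.Lock()

    def rear_node_finished(self, off_chip_memory):
        """Called once from each rear node's calculation-completed feedback."""
        with self._lock:
            self.refcount -= 1
            if self.refcount == 0:
                # keep the result off chip, drop both on-chip copies
                off_chip_memory.append(self.output_tensor)
                self.output_tensor = None
                self.input_data_model = None

off_chip = []
state = FrontNodeState(num_rear_nodes=2, output_tensor=[2, 4, 6], input_data_model=object())
state.rear_node_finished(off_chip)
state.rear_node_finished(off_chip)
print(state.refcount, off_chip)  # 0 [[2, 4, 6]]
```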
  • S214 Determine whether there is, downstream of the at least two first rear nodes of the first parallel node group, a second parallel node group in which any one of the at least two first rear nodes serves as the second front node, the second parallel node group including a second front node and at least two second rear nodes.
  • after the first parallel node group has completed the calculations of all the first front nodes and first rear nodes it includes, it is determined whether there is a second parallel node group located downstream of the first parallel node group in the logical relationship; here both the second parallel node group and the first parallel node group are groups including at least three first computing nodes.
  • when there is a second parallel node group downstream of the first rear nodes, the second parallel node group may be calculated in the same way as the first parallel node group to obtain the second calculation result.
  • when no second parallel node group exists downstream of the first rear nodes, a calculation-completed instruction is generated and sent to the host to notify the host to end the calculation operation.
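As a sketch of the overall control flow of S214 to S216 (the three callbacks are hypothetical placeholders, not disclosed interfaces), the host can simply walk the graph one parallel node group at a time until no downstream group remains:

```python
def run_graph(first_group, compute_group, next_group_after):
    """Compute the current parallel node group, then look for a downstream group
    whose front node is one of the just-computed rear nodes; stop when none exists."""
    group, results = first_group, []
    while group is not None:
        results.append(compute_group(group))   # e.g. run_parallel_node_group(...)
        group = next_group_after(group)        # returns None when no second group exists
    return results  # at this point the host can be notified that computation is done
```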
  • by performing data processing on multiple nodes having a logical relationship in parallel and locking the nodes in each parallel node group, the second embodiment of the present application solves the problem in the related art that the computing resources of the CPU and the computing card cannot be fully utilized, and achieves full utilization of those resources through the overlapped running of graph inference over multiple inputs and the separate locking of the nodes in the computation graph.
  • FIG. 3 is a schematic structural diagram of a data parallel processing system provided in Embodiment 3 of this application.
  • the data parallel processing system 300 of this embodiment includes: a screening module 310, a first acquisition module 320, a first calculation module 330, a second acquisition module 340, and a second calculation module 350.
  • the screening module 310 is configured to identify at least three first computing nodes having a logical relationship from a plurality of first computing nodes and define the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
  • the first obtaining module 320 is configured to obtain the first input data model of the first front node and generate the first input tensor of the first front node;
  • the first calculation module 330 is configured to calculate the first output tensor of the first front node according to the first input data model and the first input tensor;
  • the second obtaining module 340 is configured to obtain the second input data models of at least two first post nodes and use the first output tensor as the second input tensor;
  • the second calculation module 350 is configured to calculate the second output tensors of at least two first rear nodes according to the second input data model and the second input tensor to obtain the first calculation result of the first parallel node group.
  • the screening module 310 includes:
  • the first determining unit is configured to determine whether the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship;
  • the first definition unit is configured to define the at least three first computing nodes as the first parallel node group if the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship.
  • the data parallelization processing system 300 further includes:
  • the reference module is configured to determine the reference count of the first front node according to the number of the at least two first rear nodes.
  • the data parallelization processing system 300 further includes:
  • the locking module is configured to lock the first front node and at least two first rear nodes of the first parallel node group.
  • the data parallelization processing system 300 further includes:
  • the first judgment module is configured to judge whether each first rear node has completed its calculation;
  • the update module is configured to decrement the reference count of the first front node by one after each first rear node completes its calculation;
  • the second judgment module is configured to judge whether the reference count of the first front node is zero;
  • the first deletion module is configured to, when the reference count of the first front node is zero, delete the first output tensor from the on-chip storage and store the first output tensor in the off-chip memory.
  • the data parallelization processing system 300 further includes:
  • the second deletion module is configured to delete the computation graph corresponding to the first front node from the on-chip storage when the reference count of the first front node is zero.
  • the data parallelization processing system 300 further includes:
  • the third judgment module is configured to judge whether there is, downstream of the at least two first rear nodes of the first parallel node group, a second parallel node group in which any one of the at least two first rear nodes serves as the second front node, the second parallel node group including a second front node and at least two second rear nodes;
  • the third calculation module is configured to, when there is such a second parallel node group downstream of the first rear nodes, acquire the third input data model of the second front node and generate the third input tensor of the second front node; calculate the third output tensor of the second front node according to the third input data model and the third input tensor; acquire the fourth input data models of the at least two second rear nodes and use the third output tensor as the fourth input tensor; and calculate the fourth output tensors of the at least two second rear nodes respectively according to the fourth input data model and the fourth input tensor, to obtain the second calculation result of the second parallel node group;
  • the end module is configured to receive a calculation-completed instruction to end the calculation when there is no second parallel node group downstream of the first rear nodes.
  • the data parallelization processing device provided in the embodiment of the present application can execute the method provided in any embodiment of the present application, and has functional modules and effects corresponding to the execution method.
  • FIG. 4 is a schematic structural diagram of a device provided in Embodiment 4 of this application.
  • FIG. 4 shows a block diagram of an exemplary computer device 12 (ie, the computer system/server in FIG. 4) suitable for implementing the embodiments of the present application.
  • the computer device 12 shown in FIG. 4 is only an example, and should not limit the functions and scope of use of the embodiments of the present application.
  • the computer device 12 is represented in the form of a general-purpose computing device.
  • the components of the computer device 12 may include: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
  • the bus 18 represents one or more of several types of bus structures, including a memory bus or a memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any bus structure among multiple bus structures.
  • these architectures include Industry Standard Architecture (ISA) bus, MicroChannel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standard Association (VESA) Local bus and Peripheral Component Interconnect (PCI) bus.
  • the computer device 12 includes a variety of computer system readable media. These media may be usable media that can be accessed by the computer device 12, including volatile and nonvolatile media, removable and non-removable media.
  • the system memory 28 may include a computer system readable medium in the form of a volatile memory, such as a random access memory (RAM) 30 and/or a cache 32.
  • the computer device 12 may include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • the storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 4, usually referred to as a "hard drive").
  • each drive can be connected to the bus 18 through one or more data media interfaces.
  • the memory 28 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of multiple embodiments of the present application.
  • a program/utility 40 having a set of (at least one) program modules 42 may be stored, for example, in the memory 28.
  • such program modules 42 include an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
  • the program module 42 usually executes the functions and/or methods in the embodiments described in this application.
  • the computer device 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. This communication can be performed through an input/output (I/O) interface 22.
  • the computer device 12 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 20. As shown in the figure, the network adapter 20 communicates with other modules of the computer device 12 through the bus 18.
  • the processing unit 16 executes a variety of functional applications and data processing by running programs stored in the system memory 28, for example, to implement the methods provided in the embodiments of the present application:
  • At least three first computing nodes having a logical relationship are identified from a plurality of first computing nodes, and the at least three first computing nodes having a logical relationship are defined as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
  • the fifth embodiment of the present application also provides a computer-readable storage medium with a computer program stored thereon, and when the program is executed by a processor, the method provided in all the embodiments of the present application is implemented:
  • At least three first computing nodes having a logical relationship are identified from a plurality of first computing nodes, and the at least three first computing nodes having a logical relationship are defined as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
  • the computer storage medium of the embodiment of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may include, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the above.
  • Examples of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (Read-Only Memory, ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and computer-readable program code is carried therein. This propagated data signal can take many forms, including electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on a computer-readable medium can be transmitted using any appropriate medium, including wireless, wired, optical cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
  • the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof.
  • the programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Devices For Executing Special Programs (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A data parallelization processing method, system, device, and storage medium. The data parallelization processing method includes: identifying, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and defining the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes (S110); acquiring a first input data model of the first front node and generating a first input tensor of the first front node (S120); calculating a first output tensor of the first front node according to the first input data model and the first input tensor (S130); acquiring second input data models of the at least two first rear nodes and using the first output tensor as a second input tensor (S140); and calculating second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain a first calculation result of the first parallel node group (S150).

Description

Data parallelization processing method, system, device, and storage medium
This application claims priority to Chinese patent application No. 201911373599.2, filed with the Chinese Patent Office on December 27, 2019, the entire content of which is incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the technical field of network topology graph inference, for example, to a data parallelization processing method, system, device, and storage medium.
Background
Deep learning networks are usually obtained through algorithm training. In most cases, algorithm developers tend to use existing public deep learning frameworks for model training, and most public deep learning frameworks are designed for computing devices such as the Central Processing Unit/Graphics Processing Unit (CPU/GPU). The CPU/GPU adopts a traditional instruction set architecture with relatively low architectural efficiency and relatively high flexibility. With the development of deep learning related technologies, the requirements on computing power keep increasing, and the architectural efficiency defects of the instruction set in the related art can no longer meet the needs of application scenarios. By comparison, the data flow architecture is more efficient and, from the perspective of the technical route, is better suited to the development trend of deep learning technology. However, a data flow chip is only suitable for deep learning operators, and normal use still requires the CPU to assist with data transmission and processing. During operation, the processed data is moved from memory to the on-chip memory, the result is fetched back after the computing card finishes running, and post-processing is performed to complete the entire graph inference process.
The graph inference methods adopted in the related art mostly run in a single thread with asynchronous computation, and such methods tend to leave the computing resources of the CPU and the computing card underutilized.
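By way of illustration only (this sketch is not part of the original disclosure, and the stage callables are invented placeholders for whatever the host and the computing card actually do), the single-threaded flow described above can be pictured as follows:

```python
def run_inference_serial(inputs, preprocess, to_chip, run_card, from_chip, postprocess):
    """Baseline flow from the background: preprocess on the CPU, move the data on
    chip, wait for the computing card, fetch the result back and post-process --
    strictly one input at a time, so the CPU and the computing card never work
    simultaneously."""
    results = []
    for x in inputs:
        handle = to_chip(preprocess(x))   # host memory -> on-chip memory
        raw = run_card(handle)            # blocks until the computing card is done
        results.append(postprocess(from_chip(raw)))
    return results
```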
Summary
The present application provides a data parallelization processing method, system, device, and storage medium, so that graph inference for multiple inputs can run in an overlapped manner, thereby achieving the technical effect of fully utilizing the resources of the CPU and the computing card.
In an embodiment, an embodiment of the present application provides a data parallelization processing method, including:
identifying, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and defining the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
acquiring a first input data model of the first front node and generating a first input tensor of the first front node;
calculating a first output tensor of the first front node according to the first input data model and the first input tensor;
acquiring second input data models of the at least two first rear nodes and using the first output tensor as a second input tensor; and
calculating second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain a first calculation result of the first parallel node group.
In an embodiment, an embodiment of the present application further provides a data parallelization processing system, including:
a screening module, configured to identify, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and define the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
a first acquisition module, configured to acquire a first input data model of the first front node and generate a first input tensor of the first front node;
a first calculation module, configured to calculate a first output tensor of the first front node according to the first input data model and the first input tensor;
a second acquisition module, configured to acquire second input data models of the at least two first rear nodes and use the first output tensor as a second input tensor; and
a second calculation module, configured to calculate second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain a first calculation result of the first parallel node group.
In an embodiment, an embodiment of the present application further provides a device, the device including:
one or more processors; and
a storage apparatus, configured to store one or more programs,
wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the data parallelization processing method of any one of the foregoing embodiments.
In an embodiment, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the data parallelization processing method of any one of the foregoing embodiments.
Brief Description of the Drawings
FIG. 1 is a flowchart of a data parallelization processing method provided by an embodiment of the present application;
FIG. 2 is a flowchart of another data parallelization processing method provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a data parallelization processing system provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a device provided by an embodiment of the present application.
Detailed Description
The present application is described below with reference to the drawings and the embodiments. The embodiments described here are only intended to explain the present application and not to limit it. For ease of description, the drawings show only the parts related to the present application rather than the entire structure.
Some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes one or more steps as sequential processing, many of the steps may be implemented in parallel, concurrently, or simultaneously. In addition, the order of one or more steps may be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
In addition, the terms "first", "second", etc. may be used herein to describe various directions, actions, steps, elements, etc., but these directions, actions, steps or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, without departing from the scope of the present application, the first acquisition module may be referred to as the second acquisition module, and similarly, the second acquisition module may be referred to as the first acquisition module; both are acquisition modules, but they are not the same acquisition module. The terms "first", "second", etc. are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise specifically defined.
Embodiment 1
FIG. 1 is a flowchart of a data parallelization processing method provided in Embodiment 1 of the present application. This embodiment is applicable to graph inference scenarios with multiple inputs having logical relationships, and the method may be executed by a host. As shown in FIG. 1, the data parallelization processing method includes S110 to S150.
S110: identify, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and define the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes.
In this embodiment, at least three first computing nodes having a logical relationship means that the three first computing nodes include at least one first front node located upstream in the logical relationship and at least two first rear nodes that are located downstream of the first front node and have a direct logical relationship with it. When performing data calculation, a first rear node first receives the data processing result of the first front node as its input data; that is, after the calculation of the first front node is completed, the obtained calculation result of the first front node is transmitted to the at least two first rear nodes directly associated with the first front node as the input data of those first rear nodes.
Taking the calculation of neural network computing nodes as an example, a neural network generally has multiple layers, that is, multiple computing nodes connected in logical order. In an embodiment, the first front node may refer to the first layer to be calculated in the neural network model, and the first rear node may refer to the layer whose calculation starts immediately after the calculation of the first front node. The first front node may also refer to the n-th layer calculated by the neural network model, and the first rear node may refer to the (n+1)-th layer calculated by the neural network model. This embodiment is not limited in this respect. The neural network types in this embodiment include feedforward neural networks, radial basis neural networks, deep feedforward neural networks, recurrent neural networks, etc.; the neural network type is not restricted here.
S120: acquire the first input data model of the first front node and generate the first input tensor of the first front node.
In this embodiment, an input data model refers to a data model used for data input, such as the calculation formula or calculation model of each layer of nodes of a neural network, and an input tensor refers to a vector carrying input data.
S130: calculate the first output tensor of the first front node according to the first input data model and the first input tensor.
In this embodiment, an output tensor refers to a vector carrying output data. After the first input data model and the first input tensor of the first front node are obtained through S120, the first front node is calculated to obtain the data calculation result of the first front node, and this data calculation result is the first output tensor in this embodiment.
S140: acquire the second input data models of the at least two first rear nodes and use the first output tensor as the second input tensor.
In an embodiment, after the calculation of the first front node is completed, the second input data model of each first rear node is acquired, and the first output tensor is used as the input data of the first rear nodes, thereby obtaining the second input tensor.
S150: calculate the second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain the first calculation result of the first parallel node group.
In an embodiment, based on the second input data model and the second input tensor obtained in S140, each first rear node is calculated separately according to the number of first rear nodes, so as to obtain the data calculation result of each first rear node, and a second output tensor is generated according to the data calculation result of each first rear node, so as to obtain the total calculation result of the first parallel node group, that is, the first calculation result.
By processing data of multiple nodes having a logical relationship in parallel, Embodiment 1 of the present application solves the problem in the related art that the computing resources of the CPU and the computing card cannot be fully utilized, and achieves full utilization of the resources of the CPU and the computing card through the overlapped running of graph inference over multiple inputs.
Embodiment 2
Embodiment 2 of the present application is an optional embodiment based on Embodiment 1. FIG. 2 is a flowchart of a data parallelization processing method provided in Embodiment 2 of the present application. As shown in FIG. 2, the data parallelization processing method of this embodiment includes S201 to S216.
S201: determine whether the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship.
In an embodiment, after a plurality of first computing nodes are received, at least three first computing nodes are selected from these first computing nodes, and it is determined whether the three computing nodes have a logical relationship, that is, whether there is at least one first front node and at least two first rear nodes. Taking the calculation of neural network computing nodes as an example, a neural network generally has multiple layers, that is, multiple computing nodes connected in logical order. In an embodiment, the first front node may refer to the first layer to be calculated in the neural network model, and the first rear node may refer to the layer whose calculation starts immediately after the calculation of the first front node. The first front node may also refer to the n-th layer calculated by the neural network model, and the first rear node may refer to the (n+1)-th layer calculated by the neural network model. This embodiment is not limited in this respect. The neural network types in this embodiment include feedforward neural networks, radial basis neural networks, deep feedforward neural networks, recurrent neural networks, etc.; the neural network type is not restricted here.
S202: if the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship, define the at least three first computing nodes as the first parallel node group.
In an embodiment, when the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship, that is, when the three first computing nodes constitute a minimum logical relationship group, the three first computing nodes can be defined as the first parallel node group.
S203: determine the reference count of the first front node according to the number of the at least two first rear nodes.
In an embodiment, the reference count of a first front node may be determined according to the number of first rear nodes that are located downstream of the first front node and have a direct logical relationship with it. For example, when three first rear nodes are directly associated downstream of a first front node in the logical relationship, that is, the calculation result obtained by the first front node, namely the first output tensor, is directly transmitted to the three first rear nodes, and the three first rear nodes receive the first output tensor in parallel as their own input data, namely the second input tensor, the first front node is referenced three times, and its reference count before the calculation is set to three.
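As an illustrative aside (not part of the original disclosure; the dictionary of successor lists is an assumed representation of the logical relationships), setting the reference count from the number of directly attached rear nodes can be sketched as follows:

```python
def init_reference_counts(successors_by_node):
    """Sketch of S203: a front node's reference count equals the number of rear
    nodes that directly consume its output tensor."""
    return {name: len(successors) for name, successors in successors_by_node.items()}

# The example from the text: one front node whose output feeds three rear nodes.
print(init_reference_counts({"front": ["rear_1", "rear_2", "rear_3"]}))  # {'front': 3}
```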
S204: lock the first front node and the at least two first rear nodes of the first parallel node group.
In an embodiment, before the calculation is performed, the first front node and the first rear nodes in the first parallel node group may first be locked. The purpose of locking is to ensure that the nodes being calculated are all different nodes, so as to prevent the memory waste caused by repeatedly calculating the same node.
S205: acquire the first input data model of the first front node and generate the first input tensor of the first front node.
In this embodiment, an input data model refers to a data model used for data input, such as the calculation formula or calculation model of each layer of nodes of a neural network, and an input tensor refers to a vector carrying input data.
S206: calculate the first output tensor of the first front node according to the first input data model and the first input tensor.
In this embodiment, an output tensor refers to a vector carrying output data. After the first input data model and the first input tensor of the first front node are obtained through S205, the first front node is calculated to obtain the data calculation result of the first front node, and this data calculation result is the first output tensor in this embodiment.
S207: acquire the second input data models of the at least two first rear nodes and use the first output tensor as the second input tensor.
In an embodiment, after the calculation of the first front node is completed, the second input data model of each first rear node is acquired, and the first output tensor is received as the input data of the first rear nodes, thereby obtaining the second input tensor.
S208: calculate the second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain the first calculation result of the first parallel node group.
In an embodiment, based on the second input data model obtained in S207, such as the calculation formula or calculation model of each layer of nodes of a neural network, and the second input tensor, each first rear node is calculated separately according to the number of first rear nodes, so as to obtain the data calculation result of each first rear node, and a second output tensor is generated according to the data calculation result of each first rear node, so as to obtain the total calculation result of the first parallel node group, that is, the first calculation result.
S209: determine whether each first rear node has completed its calculation.
In an embodiment, after S208 is completed, that is, after the first calculation result is obtained, it is further determined whether every first rear node in the first parallel node group has completed its calculation, which ensures the accuracy of the first calculation result.
S210: after each first rear node completes its calculation, decrement the reference count of the first front node by one.
In an embodiment, when data processing is performed on the first rear nodes in parallel, whenever any first rear node in the first parallel node group completes its calculation, the reference count of the first front node is decremented by one. For example, when the first parallel node group includes one first front node and two first rear nodes, the reference count of the first front node obtained according to S203 is two. After the calculation of the first front node is completed, the first output tensor is obtained and transmitted to the two first rear nodes respectively, and parallel data processing is then performed on the two first rear nodes. When a first rear node completes its calculation, it can generate a calculation-completed feedback instruction and send it to the reference counter of the first front node. After the counter receives the feedback instruction, the reference count in the counter is decremented by one, thereby updating the reference relationship in the logical relationship between the first front node and the first rear nodes in the first parallel node group.
S211: determine whether the reference count of the first front node is zero.
In an embodiment, the reference count of the first front node is updated through S210, and whether the reference count of the first front node is zero is determined in real time, so that it can be detected whether all the first rear nodes having a direct logical relationship with the first front node have completed their calculations.
S212: when the reference count of the first front node is zero, delete the first output tensor from the on-chip storage and store the first output tensor in the off-chip memory.
In an embodiment, when the reference count of the first front node is zero, that is, when all the downstream first rear nodes having a direct logical relationship with the first front node have completed their calculations and obtained the second output tensors, the first output tensor stored in the cache can be deleted and the first output tensor can be stored in the memory.
S213: when the reference count of the first front node is zero, delete the computation graph corresponding to the first front node from the on-chip storage.
In an embodiment, when the reference count of the first front node is zero, the first input data model of the first front node stored in the cache is deleted, thereby saving storage resources.
S214: determine whether there is, downstream of the at least two first rear nodes of the first parallel node group, a second parallel node group in which any one of the at least two first rear nodes serves as the second front node, the second parallel node group including a second front node and at least two second rear nodes.
In an embodiment, after the first parallel node group has completed the calculations of all the first front nodes and first rear nodes it includes, it is determined whether there is a second parallel node group located downstream of the first parallel node group in the logical relationship; here both the second parallel node group and the first parallel node group are groups including at least three first computing nodes.
S215: when there is, downstream of the first rear nodes, a second parallel node group in which any one of the at least two first rear nodes serves as the second front node, acquire the third input data model of the second front node and generate the third input tensor of the second front node; calculate the third output tensor of the second front node according to the third input data model and the third input tensor; acquire the fourth input data models of the at least two second rear nodes and use the third output tensor as the fourth input tensor; and calculate the fourth output tensors of the at least two second rear nodes respectively according to the fourth input data model and the fourth input tensor, to obtain the second calculation result of the second parallel node group.
In an embodiment, when a second parallel node group exists downstream of the first rear nodes, the second parallel node group may be calculated in the same way as the first parallel node group to obtain the second calculation result.
S216: when no second parallel node group exists downstream of the first rear nodes, receive a calculation-completed instruction to end the calculation.
In an embodiment, when no second parallel node group exists downstream of the first rear nodes, a calculation-completed instruction is generated and sent to the host to notify the host to end the calculation operation.
By performing data processing on multiple nodes having a logical relationship in parallel and locking the nodes in each parallel node group, Embodiment 2 of the present application solves the problem in the related art that the computing resources of the CPU and the computing card cannot be fully utilized, and achieves full utilization of the resources of the CPU and the computing card through the overlapped running of graph inference over multiple inputs and the separate locking of the nodes in the computation graph.
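For orientation only (the thread pool and the stage callables below are illustrative assumptions, not the disclosed implementation), the overlapped running of graph inference over multiple inputs mentioned above can be sketched as follows: while the computing card works on one input's graph, the CPU can already pre- or post-process other inputs.

```python
from concurrent.futures import ThreadPoolExecutor

def run_inference_overlapped(inputs, preprocess, run_on_card, postprocess, max_in_flight=4):
    """Submit several inputs at once so that CPU-side pre/post-processing of some
    inputs overlaps with the computing card working on others."""
    def one_input(x):
        return postprocess(run_on_card(preprocess(x)))

    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        return list(pool.map(one_input, inputs))

# Toy stand-ins for the real stages.
outs = run_inference_overlapped(
    inputs=range(8),
    preprocess=lambda x: [x, x + 1],
    run_on_card=lambda t: sum(t),
    postprocess=lambda y: y * 10,
)
print(outs)  # [10, 30, 50, 70, 90, 110, 130, 150]
```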
Embodiment 3
FIG. 3 is a schematic structural diagram of a data parallelization processing system provided in Embodiment 3 of the present application. As shown in FIG. 3, the data parallelization processing system 300 of this embodiment includes: a screening module 310, a first acquisition module 320, a first calculation module 330, a second acquisition module 340, and a second calculation module 350.
The screening module 310 is configured to identify, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and define the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
the first acquisition module 320 is configured to acquire the first input data model of the first front node and generate the first input tensor of the first front node;
the first calculation module 330 is configured to calculate the first output tensor of the first front node according to the first input data model and the first input tensor;
the second acquisition module 340 is configured to acquire the second input data models of the at least two first rear nodes and use the first output tensor as the second input tensor;
the second calculation module 350 is configured to calculate the second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain the first calculation result of the first parallel node group.
In this embodiment, the screening module 310 includes:
a first judgment unit, configured to judge whether the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship;
a first definition unit, configured to define the at least three first computing nodes as the first parallel node group if the at least three first computing nodes include a first front node and at least two first rear nodes having a logical relationship.
In this embodiment, the data parallelization processing system 300 further includes:
a reference module, configured to determine the reference count of the first front node according to the number of the at least two first rear nodes.
In this embodiment, the data parallelization processing system 300 further includes:
a locking module, configured to lock the first front node and the at least two first rear nodes of the first parallel node group.
In this embodiment, the data parallelization processing system 300 further includes:
a first judgment module, configured to judge whether each first rear node has completed its calculation;
an update module, configured to decrement the reference count of the first front node by one after each first rear node completes its calculation;
a second judgment module, configured to judge whether the reference count of the first front node is zero;
a first deletion module, configured to, when the reference count of the first front node is zero, delete the first output tensor from the on-chip storage and store the first output tensor in the off-chip memory.
In this embodiment, the data parallelization processing system 300 further includes:
a second deletion module, configured to delete the computation graph corresponding to the first front node from the on-chip storage when the reference count of the first front node is zero.
In this embodiment, the data parallelization processing system 300 further includes:
a third judgment module, configured to judge whether there is, downstream of the at least two first rear nodes of the first parallel node group, a second parallel node group in which any one of the at least two first rear nodes serves as the second front node, the second parallel node group including a second front node and at least two second rear nodes;
a third calculation module, configured to, when there is such a second parallel node group downstream of the first rear nodes, acquire the third input data model of the second front node and generate the third input tensor of the second front node; calculate the third output tensor of the second front node according to the third input data model and the third input tensor; acquire the fourth input data models of the at least two second rear nodes and use the third output tensor as the fourth input tensor; and calculate the fourth output tensors of the at least two second rear nodes respectively according to the fourth input data model and the fourth input tensor, to obtain the second calculation result of the second parallel node group;
an end module, configured to receive a calculation-completed instruction to end the calculation when there is no second parallel node group downstream of the first rear nodes.
The data parallelization processing apparatus provided by the embodiments of the present application can execute the method provided by any embodiment of the present application, and has the functional modules and effects corresponding to the executed method.
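A minimal sketch of how the modules of Embodiment 3 could be arranged in code is shown below; the class and parameter names mirror the description but are otherwise assumptions, and each module is represented simply as a callable:

```python
class DataParallelProcessingSystem:
    """Skeleton mirroring system 300: each module is a small callable step."""

    def __init__(self, screening, first_acq, first_calc, second_acq, second_calc):
        self.screening = screening      # module 310: pick out the parallel node group
        self.first_acq = first_acq      # module 320: front node model + first input tensor
        self.first_calc = first_calc    # module 330: first output tensor of the front node
        self.second_acq = second_acq    # module 340: rear node models + second input tensor
        self.second_calc = second_calc  # module 350: second output tensors / first result

    def process(self, nodes):
        front, rears = self.screening(nodes)
        model, first_input = self.first_acq(front)
        first_output = self.first_calc(model, first_input)
        rear_models, second_input = self.second_acq(rears, first_output)
        return self.second_calc(rear_models, second_input)
```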
Embodiment 4
FIG. 4 is a schematic structural diagram of a device provided in Embodiment 4 of the present application. FIG. 4 shows a block diagram of an exemplary computer device 12 (i.e., the computer system/server in FIG. 4) suitable for implementing the embodiments of the present application. The computer device 12 shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 4, the computer device 12 is represented in the form of a general-purpose computing device. The components of the computer device 12 may include: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures. By way of example, these architectures include the Industry Standard Architecture (ISA) bus, the MicroChannel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 12 includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the computer device 12, including volatile and non-volatile media, and removable and non-removable media.
The system memory 28 (i.e., the memory in FIG. 4) may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache 32. The computer device 12 may include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 34 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard disk drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a Compact Disc Read Only Memory (CD-ROM), a Digital Video Disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of the embodiments of the present application.
A program/utility 40 having a set of (at least one) program modules 42 may be stored, for example, in the memory 28; such program modules 42 include an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present application.
The computer device 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication may be performed through an input/output (I/O) interface 22. In addition, the computer device 12 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 20. As shown in the figure, the network adapter 20 communicates with the other modules of the computer device 12 through the bus 18. Although not shown in the figure, other hardware and/or software modules may be used in conjunction with the computer device 12, including: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Arrays of Independent Drives (RAID) systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes a variety of functional applications and data processing by running the programs stored in the system memory 28, for example, implementing the method provided by the embodiments of the present application:
identifying, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and defining the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
acquiring a first input data model of the first front node and generating a first input tensor of the first front node;
calculating a first output tensor of the first front node according to the first input data model and the first input tensor;
acquiring second input data models of the at least two first rear nodes and using the first output tensor as a second input tensor; and
calculating second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain a first calculation result of the first parallel node group.
Embodiment 5
Embodiment 5 of the present application further provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method provided by all the embodiments of the present application is implemented:
identifying, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and defining the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group including a first front node and at least two first rear nodes;
acquiring a first input data model of the first front node and generating a first input tensor of the first front node;
calculating a first output tensor of the first front node according to the first input data model and the first input tensor;
acquiring second input data models of the at least two first rear nodes and using the first output tensor as a second input tensor; and
calculating second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain a first calculation result of the first parallel node group.
The computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in combination with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including wireless, wired, optical cable, radio frequency (RF), etc., or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof; the programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

Claims (10)

  1. A data parallelization processing method, comprising:
    identifying, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and defining the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group comprising a first front node and at least two first rear nodes;
    acquiring a first input data model of the first front node and generating a first input tensor of the first front node;
    calculating a first output tensor of the first front node according to the first input data model and the first input tensor;
    acquiring second input data models of the at least two first rear nodes and using the first output tensor as a second input tensor; and
    calculating second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor, to obtain a first calculation result of the first parallel node group.
  2. The method according to claim 1, wherein identifying, from the plurality of first computing nodes, the at least three first computing nodes having a logical relationship and defining the at least three first computing nodes having a logical relationship as the first parallel node group comprises:
    judging whether the at least three first computing nodes comprise the first front node and the at least two first rear nodes having a logical relationship; and
    in response to the at least three first computing nodes comprising the first front node and the at least two first rear nodes having a logical relationship, defining the at least three first computing nodes as the first parallel node group.
  3. The method according to claim 1, before acquiring the first input data model of the first front node and generating the first input tensor of the first front node, further comprising:
    determining a reference count of the first front node according to the number of the at least two first rear nodes.
  4. The method according to claim 1, before acquiring the first input data model of the first front node and generating the first input tensor of the first front node, further comprising:
    locking the first front node and the at least two first rear nodes of the first parallel node group.
  5. The method according to claim 3, after calculating the second output tensors of the at least two first rear nodes respectively according to the second input data model and the second input tensor to obtain the first calculation result of the first parallel node group, further comprising:
    judging whether each first rear node has completed its calculation;
    after each first rear node completes its calculation, decrementing the reference count of the first front node by one;
    judging whether the reference count of the first front node is zero; and
    in a case where the reference count of the first front node is zero, deleting the first output tensor from the on-chip storage and storing the first output tensor in an off-chip memory.
  6. The method according to claim 5, after deleting the first output tensor from the on-chip storage and storing the first output tensor in the off-chip memory in the case where the reference count of the first front node is zero, further comprising:
    deleting a computation graph corresponding to the first front node from the on-chip storage.
  7. The method according to claim 5, after deleting the first output tensor from the on-chip storage and storing the first output tensor in the off-chip memory in the case where the reference count of the first front node is zero, further comprising:
    judging whether there is, downstream of the at least two first rear nodes of the first parallel node group, a second parallel node group in which any one of the at least two first rear nodes serves as a second front node, the second parallel node group comprising the second front node and at least two second rear nodes;
    in a case where there is, downstream of the at least two first rear nodes, a second parallel node group in which any one of the at least two first rear nodes serves as the second front node, acquiring a third input data model of the second front node and generating a third input tensor of the second front node; calculating a third output tensor of the second front node according to the third input data model and the third input tensor; acquiring fourth input data models of the at least two second rear nodes and using the third output tensor as a fourth input tensor; and calculating fourth output tensors of the at least two second rear nodes respectively according to the fourth input data model and the fourth input tensor, to obtain a second calculation result of the second parallel node group; and
    in a case where no second parallel node group exists downstream of the first rear nodes, receiving a calculation-completed instruction to end the calculation.
  8. A data parallelization processing system, comprising:
    a screening module, configured to identify, from a plurality of first computing nodes, at least three first computing nodes having a logical relationship, and define the at least three first computing nodes having a logical relationship as a first parallel node group, the first parallel node group comprising a first front node and at least two first rear nodes;
    a first acquisition module, configured to acquire a first input data model of the first front node and generate a first input tensor of the first front node;
    a first calculation module, configured to calculate a first output tensor of the first front node according to the first input data model and the first input tensor;
    a second acquisition module, configured to acquire second input data models of the at least two first rear nodes and use the first output tensor as a second input tensor; and
    a second calculation module, configured to calculate the at least two first rear nodes respectively according to the second input data model and the second input tensor to generate second output tensors, to obtain a first calculation result of the first parallel node group.
  9. A device, comprising:
    one or more processors; and
    a storage apparatus, configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data parallelization processing method according to any one of claims 1-7.
  10. A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the data parallelization processing method according to any one of claims 1-7.
PCT/CN2020/138539 2019-12-27 2020-12-23 数据并行化处理方法、系统、设备和存储介质 WO2021129645A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/789,280 US20230035910A1 (en) 2019-12-27 2020-12-23 Method, system and device for parallel processing of data, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911373599.2 2019-12-27
CN201911373599.2A CN111145076B (zh) 2019-12-27 2019-12-27 数据并行化处理方法、系统、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021129645A1 true WO2021129645A1 (zh) 2021-07-01

Family

ID=70520909

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138539 WO2021129645A1 (zh) 2019-12-27 2020-12-23 数据并行化处理方法、系统、设备和存储介质

Country Status (3)

Country Link
US (1) US20230035910A1 (zh)
CN (1) CN111145076B (zh)
WO (1) WO2021129645A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114218152A (zh) * 2021-12-06 2022-03-22 海飞科(南京)信息技术有限公司 流处理方法、处理电路和电子设备

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145076B (zh) * 2019-12-27 2023-04-07 深圳鲲云信息科技有限公司 数据并行化处理方法、系统、设备及存储介质
CN112231330A (zh) * 2020-10-15 2021-01-15 中体彩科技发展有限公司 一种防止彩票游戏并发重复计奖的控制方法及系统
CN112799845A (zh) * 2021-02-02 2021-05-14 深圳计算科学研究院 一种基于grape框架的图算法并行加速方法和装置
CN113836386B (zh) * 2021-11-25 2022-03-25 之江实验室 一种并行模式搜索空间构造系统和方法
CN114035968B (zh) * 2022-01-10 2022-03-18 北京一流科技有限公司 用于多流并行的冲突处理系统及其方法
CN114429051B (zh) * 2022-04-01 2022-07-01 深圳鲲云信息科技有限公司 数据流芯片的建模方法、装置、设备及介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10140252B2 (en) * 2017-02-28 2018-11-27 Microsoft Technology Licensing, Llc Hardware node with matrix-vector multiply tiles for neural network processing
CN109740916A (zh) * 2018-12-28 2019-05-10 中科驭数(北京)科技有限公司 基于计算流图的时间序列处理方法、装置和存储介质
CN110321210A (zh) * 2019-06-28 2019-10-11 京东数字科技控股有限公司 数据处理方法、装置、计算机可读介质及电子设备
CN110321999A (zh) * 2018-03-30 2019-10-11 北京深鉴智能科技有限公司 神经网络计算图优化方法
CN110383206A (zh) * 2017-04-07 2019-10-25 英特尔公司 用于利用硬件加速来生成高斯随机数的系统和方法
CN111145076A (zh) * 2019-12-27 2020-05-12 深圳鲲云信息科技有限公司 数据并行化处理方法、系统、设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018217863A1 (en) * 2017-05-23 2018-11-29 Intel Corporation Methods and apparatus for enhancing a binary weight neural network using a dependency tree
CN107563512B (zh) * 2017-08-24 2023-10-17 腾讯科技(上海)有限公司 一种数据处理方法、装置以及存储介质
CN110377429A (zh) * 2019-07-24 2019-10-25 深圳乐信软件技术有限公司 一种实时任务计算的控制方法、装置、服务器及存储介质
CN110413675A (zh) * 2019-07-24 2019-11-05 深圳乐信软件技术有限公司 一种实时任务计算的控制方法、装置、服务器及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10140252B2 (en) * 2017-02-28 2018-11-27 Microsoft Technology Licensing, Llc Hardware node with matrix-vector multiply tiles for neural network processing
CN110383206A (zh) * 2017-04-07 2019-10-25 英特尔公司 用于利用硬件加速来生成高斯随机数的系统和方法
CN110321999A (zh) * 2018-03-30 2019-10-11 北京深鉴智能科技有限公司 神经网络计算图优化方法
CN109740916A (zh) * 2018-12-28 2019-05-10 中科驭数(北京)科技有限公司 基于计算流图的时间序列处理方法、装置和存储介质
CN110321210A (zh) * 2019-06-28 2019-10-11 京东数字科技控股有限公司 数据处理方法、装置、计算机可读介质及电子设备
CN111145076A (zh) * 2019-12-27 2020-05-12 深圳鲲云信息科技有限公司 数据并行化处理方法、系统、设备及存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114218152A (zh) * 2021-12-06 2022-03-22 海飞科(南京)信息技术有限公司 流处理方法、处理电路和电子设备
CN114218152B (zh) * 2021-12-06 2023-08-15 海飞科(南京)信息技术有限公司 流处理方法、处理电路和电子设备

Also Published As

Publication number Publication date
US20230035910A1 (en) 2023-02-02
CN111145076B (zh) 2023-04-07
CN111145076A (zh) 2020-05-12

Similar Documents

Publication Publication Date Title
WO2021129645A1 (zh) 数据并行化处理方法、系统、设备和存储介质
CN112560496B (zh) 语义分析模型的训练方法、装置、电子设备及存储介质
US9613185B2 (en) Influence filtering in graphical models
US20190324888A1 (en) Data flow graph computation using exceptions
US10642606B2 (en) Re-use of code
CN108628605A (zh) 流式数据处理方法、装置、服务器和介质
WO2021136512A1 (zh) 基于深度学习节点计算的调度方法、设备及存储介质
KR102452159B1 (ko) 정황상 인식되는 다이나믹 그룹 형성
US20190065284A1 (en) Hybrid acceleration in a processing environment
WO2021259041A1 (zh) Ai计算图的排序方法、装置、设备及存储介质
US11386507B2 (en) Tensor-based predictions from analysis of time-varying graphs
WO2019232980A1 (zh) 节点配置方法及装置、计算机可读存储介质和电子设备
WO2022083093A1 (zh) 图谱中的概率计算方法、装置、计算机设备及存储介质
WO2021139633A1 (zh) 深度学习模型的转化方法、装置、服务器及存储介质
CN111985831A (zh) 云计算资源的调度方法、装置、计算机设备及存储介质
WO2021259098A1 (zh) 一种基于卷积神经网络的加速系统、方法及存储介质
CN116360735A (zh) 一种表单生成方法、装置、设备和介质
CN115034379A (zh) 一种因果关系确定方法及相关设备
WO2020107264A1 (zh) 神经网络架构搜索的方法与装置
WO2024051655A1 (zh) 全视野组织学图像的处理方法、装置、介质和电子设备
US11514318B2 (en) Multi-source transfer learning from pre-trained networks
WO2023197857A1 (zh) 一种模型切分方法及其相关设备
WO2021068249A1 (zh) 运行时硬件模拟仿真方法、装置、设备及存储介质
CN115828269A (zh) 源代码漏洞检测模型的构建方法、装置、设备及存储介质
CN112740200A (zh) 用于基于共指消解的端到端深度强化学习的系统和方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20905045

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.11.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20905045

Country of ref document: EP

Kind code of ref document: A1