WO2021147876A1 - Memory resource in-situ sharing decision-making system and method thereof - Google Patents

Memory resource in-situ sharing decision-making system and method thereof

Info

Publication number
WO2021147876A1
Authority
WO
WIPO (PCT)
Prior art keywords
shared
sharing
conflict
output data
task
Prior art date
Application number
PCT/CN2021/072785
Other languages
English (en)
French (fr)
Inventor
李新奇
柳俊丞
袁进辉
Original Assignee
北京一流科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京一流科技有限公司
Publication of WO2021147876A1 publication Critical patent/WO2021147876A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Definitions

  • the present disclosure relates to a memory resource allocation technology. More specifically, the present disclosure relates to a decision-making system and method for in-situ sharing of memory resources for saving memory resources.
  • the static distributed deep learning system proposed by the applicant of the present disclosure has attracted more and more attention in the field of deep learning.
  • The static distributed learning system starts from the overall business processing: it combines the overall computing resources of the static distributed system with the topological relationships among those resources, statically arranges the data processing tasks over the whole system, and statically composes a business processing network out of many executive bodies, forming interrelated data processing paths.
  • Executive bodies form data production and consumption relationships with one another according to their upstream and downstream positions in the network, so that input data is processed in a pipelined fashion.
  • The executive bodies rely on messages to coordinate progress. An executive body receives messages from its upstream producers and downstream consumers, and when its trigger condition (controlled by a finite state automaton) is met, it issues an instruction to the coprocessor for execution.
  • After execution finishes, the executive body sends a message to its downstream consumers to inform them that new data is available for consumption.
  • Through this message communication between adjacent executive bodies, the data production and consumption relationships are realized, which eliminates the need for real-time central scheduling of data output and decentralizes the data processing.
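To make the message-driven coordination above concrete, here is a minimal, hypothetical Python sketch (not taken from the disclosure): an executor-like object fires only when its finite-state-automaton condition is met, i.e. all upstream producers have reported fresh data and all downstream consumers have returned the output buffer. Class and method names are illustrative assumptions.

```python
# Hypothetical illustration only: a message-driven executor whose trigger
# condition (the "finite state automaton") is that every upstream producer has
# announced new data and every downstream consumer has returned the output buffer.
class Executor:
    def __init__(self, name, upstream, downstream):
        self.name = name
        self.upstream = set(upstream)
        self.downstream = set(downstream)
        self.ready_inputs = set()              # producers that reported new data
        self.free_consumers = set(downstream)  # consumers that returned our buffer

    def on_producer_msg(self, producer):
        self.ready_inputs.add(producer)
        self._maybe_fire()

    def on_consumer_msg(self, consumer):
        self.free_consumers.add(consumer)
        self._maybe_fire()

    def _maybe_fire(self):
        if self.ready_inputs == self.upstream and self.free_consumers == self.downstream:
            self.launch_kernel()               # hand one instruction to the coprocessor
            self.ready_inputs.clear()
            self.free_consumers.clear()
            for consumer in self.downstream:
                self.notify(consumer)          # "new data is ready to consume"

    def launch_kernel(self):
        pass                                   # device-specific work goes here

    def notify(self, consumer):
        pass                                   # message passing goes here
```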
  • the purpose of the present disclosure is to solve at least the above-mentioned problems, specifically, to reduce the use of memory resources as much as possible.
  • One solution is to allow more executive bodies to share memory resources with each other without affecting their respective functions.
  • To this end, the present disclosure provides a memory resource in-situ sharing decision-making system for a data processing system, including: an original shared relationship tree generation component that, during generation of the task topology graph of the data processing system, generates one or more shared relationship trees based on the shared labels of all task nodes.
  • Each shared relationship pair of a shared relationship tree includes a logical output data cache of the current task node, a logical output data cache of its upstream task node, and a shared connection edge that connects the two to represent the sharing relationship between them, so that each shared relationship tree, through all of its shared connection edges, allows all of its logical output data caches to share one and the same designated output data cache; the shared connection edge, through the shared label of its corresponding current task node, indicates whether it is a modifiable shared connection edge or an unmodifiable shared connection edge.
  • A sharing conflict identification component identifies a shared relationship tree in which a first sharing conflict state or a second sharing conflict state exists as a conflict sharing relationship tree containing conflicting shared connection edges. The first sharing conflict state is that the execution timing of the task node corresponding to at least one modifiable shared connection edge is earlier than, or unrelated to, the execution timing of downstream task nodes outside the shared relationship tree that consume the upstream logical output data caches in its own shared branch or any logical output data cache in the other shared branches. The second sharing conflict state is that the execution timing of the task node corresponding to at least one modifiable shared connection edge is earlier than, or unrelated to, the execution timing of the task node corresponding to any shared connection edge in the other shared branches.
  • A sharing conflict elimination component eliminates the sharing conflicts in a tree identified as a conflict sharing relationship tree by disconnecting the at least one modifiable shared connection edge in the first sharing conflict state, or by disconnecting one or more mutually conflicting shared connection edges in the second sharing conflict state.
  • The sharing conflict elimination component further includes a disconnection mode selection unit, which calculates, for each sharing disconnection scheme capable of eliminating the sharing conflicts, the number of conflicting shared connection edges it disconnects, so that the scheme that eliminates the sharing conflicts while disconnecting the smallest number of conflicting shared connection edges is selected.
  • The sharing conflict identification component directly identifies a shared relationship tree in which, under the second sharing conflict state, both of the two shared branches contain modifiable shared connection edges, as a conflict sharing relationship tree containing conflicting shared connection edges.
  • The memory resource in-situ sharing decision-making system for a data processing system further includes a shared label rewriting component which, based on the shared relationship tree obtained after the sharing conflicts have been eliminated, sets to invalid the shared label of the task node in the original task topology graph that corresponds to each disconnected shared connection edge.
  • According to another aspect of the present disclosure, a memory resource in-situ sharing decision-making method for a data processing system is provided, including: generating, by an original shared relationship tree generation component during generation of the task topology graph of the data processing system, one or more shared relationship trees based on the shared labels of all task nodes, where each shared relationship pair of a shared relationship tree includes a logical output data cache of the current task node, a logical output data cache of its upstream task node, and a shared connection edge connecting the two to represent the sharing relationship between them, so that each shared relationship tree, through all of its shared connection edges, allows all logical output data caches to share the same designated output data cache, and the shared connection edge indicates, through the shared label of its corresponding current task node, whether it is a modifiable or an unmodifiable shared connection edge.
  • The method further includes identifying, by a sharing conflict identification component, a shared relationship tree in which a first or a second sharing conflict state exists as a conflict sharing relationship tree containing conflicting shared connection edges, the first sharing conflict state being that the execution timing of the task node corresponding to at least one modifiable shared connection edge is earlier than, or unrelated to, the execution timing of downstream task nodes outside the shared relationship tree that consume the upstream logical output data caches in its shared branch or any logical output data cache in other shared branches, and the second sharing conflict state being that the execution timing of the task node corresponding to at least one modifiable shared connection edge is earlier than, or unrelated to, the execution timing of the task node corresponding to any shared connection edge in other shared branches.
  • The method further includes eliminating, by a sharing conflict elimination component, the sharing conflicts in the tree identified as a conflict sharing relationship tree, by disconnecting the at least one modifiable shared connection edge in the first sharing conflict state or by disconnecting one or more mutually conflicting shared connection edges in the second sharing conflict state.
  • The memory resource in-situ sharing decision-making method for a data processing system further includes: calculating, by the disconnection mode selection unit of the sharing conflict elimination component, for each sharing disconnection scheme capable of eliminating the sharing conflicts, the number of conflicting shared connection edges it disconnects, and selecting the scheme that eliminates the sharing conflicts while disconnecting the smallest number of conflicting shared connection edges.
  • In the method, the sharing conflict identification component directly identifies a shared relationship tree in which, under the second sharing conflict state, both of the two shared branches contain modifiable shared connection edges, as a conflict sharing relationship tree containing conflicting shared connection edges.
  • The method further includes: setting to invalid, by a shared label rewriting component based on the new shared relationship tree obtained after the sharing conflicts have been eliminated, the shared label of the task node in the original task topology graph that corresponds to each disconnected shared connection edge.
  • With the system of the present disclosure, the data processing system can eliminate in advance, based on the attributes of its nodes, the various sharing conflicts that may exist between task nodes. This removes both the errors that would otherwise arise during the actual execution of data processing and the need to add mechanisms in which task nodes wait for one another, so that memory resource sharing is used more reasonably and the running fluency of the streaming data processing system is improved.
  • Fig. 1 shows a schematic diagram of the principle structure of the in-situ shared memory resource decision-making system of the data processing system according to the present disclosure.
  • Fig. 2 shows a schematic diagram of a first operation example of the in-situ shared memory resource decision-making system of the data processing system according to the present disclosure.
  • FIG. 3 shows a schematic diagram of an embodiment in which a sharing relationship tree with a sharing conflict exists and the sharing conflict is eliminated.
  • FIG. 4 shows a schematic diagram of a second operation example of the in-situ shared memory resource decision-making system of the data processing system according to the present disclosure.
  • FIG. 5 shows a schematic diagram of another embodiment in which a sharing relationship tree with a sharing conflict exists and the sharing conflict is eliminated.
  • FIG. 6 shows a schematic diagram of a third operation example of the in-situ shared decision-making system for memory resources of the data processing system according to the present disclosure.
  • Although the terms first, second, third, etc. may be used in this disclosure to describe various information, that information should not be limited by these terms; the terms are only used to distinguish information of the same type from one another.
  • For example, without departing from the scope of the present disclosure, one of two possible devices may be referred to as either the first task node or the second task node, and similarly, the other of the two may likewise be referred to as either the second task node or the first task node.
  • Depending on the context, the word "if" as used herein can be interpreted as "when", "while", or "in response to determining".
  • Fig. 1 shows a schematic diagram of the principle structure of the in-situ shared memory resource decision-making system of the data processing system according to the present disclosure.
  • As shown in Fig. 1, when the data processing system performs a specific job, it decomposes the job to be completed into a series of tasks to be executed and, while decomposing the job, generates a task relationship topology graph (shown on the left side of Fig. 1) based on the inherent relationships between the decomposed tasks.
  • Specifically, in order to process data continuously, the job needs to be decomposed into simple tasks suitable for the arithmetic units of a CPU or GPU to perform computation or other operations; that is, the job is decomposed into tasks that are associated with one another.
  • According to the description of the job requirements, the job is decomposed layer by layer, following the process to be performed, into a multi-layer neural network structure.
  • A job is thus decomposed into a series of interdependent tasks. This dependency is usually described by a directed acyclic graph (DAG), in which each node represents a task and the connecting line between nodes represents a data dependency (a producer-consumer relationship).
  • While each task node of the task topology graph is generated, it must be given all the node attributes required to perform its corresponding task.
  • These node attributes include, for example, resource attributes that specify the resources required by the task corresponding to the node and condition attributes that specify the conditions triggering execution of the task. Precisely because every node of the task topology graph of the present disclosure contains all of its node attributes, the executive body subsequently created from it automatically possesses, from the moment of creation, all the resources and attributes needed to execute its task; it is in a fully configured state, and there is no need to perform operations such as dynamically allocating environment resources or dynamically configuring trigger conditions when specific tasks are performed on specific data.
  • the resource attribute includes the computing device where the logical output data cache of each task node is arranged, and whether there is a situation of output data cache sharing between adjacent task nodes.
  • To enable memory sharing, the logical output data cache of each task node in the resulting task topology graph is labeled to indicate whether it is shared and, if so, what kind of sharing applies.
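The sketches below use Python only for illustration. Assuming a node record like the following (the field names are hypothetical, not the disclosure's), a task node can carry its device placement together with a shared label stating whether its logical output cache shares an upstream buffer and whether that sharing is modifiable.

```python
# Hypothetical node record (field names are assumptions, not the disclosure's API).
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class ShareKind(Enum):
    NONE = 0        # shared label "0": no sharing
    MUTABLE = 1     # modifiable sharing (black triangle in the figures)
    IMMUTABLE = 2   # unmodifiable sharing (black square in the figures)

@dataclass
class TaskNode:
    name: str                    # also names the node's logical output cache
    device: str                  # computing device holding the output cache
    upstream: list = field(default_factory=list)   # producer node names
    share_with: Optional[str] = None   # upstream cache this output cache shares
    share_kind: ShareKind = ShareKind.NONE
```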
  • As shown in Fig. 1, in the task topology graph, task nodes 03, 04, and 05 carry shared labels 04 and 05 between them, while task nodes 01, 00, and 02 carry shared labels 00 and 02 between them.
  • There are two kinds of shared labels, namely modifiable shared labels and unmodifiable shared labels; in the figures, modifiable shared labels are represented by black triangles and unmodifiable shared labels by black squares.
  • In the original task topology graph of Fig. 1, the logical output data cache 05 of task node 05 can be shared unmodifiably with the logical output data cache 04 of task node 04. This means that, after the executive body corresponding to task node 05 has executed its task on the data in the output data cache 04 of the upstream executive body corresponding to task node 04, it will not modify the output data in output data cache 04 (which is output data cache 05, since the two are the same) in any way.
  • Likewise, the executive body corresponding to task node 04 will not change the input data itself after the executive body corresponding to task node 05 has finished executing its task. Therefore, task nodes 03, 04, and 05 can share a single memory unit as their output data cache.
  • The logical output data cache 00 of task node 00 and the logical output data cache 01 of task node 01 are in a modifiable sharing relationship. Specifically, after executing its specific operation task, the executive body corresponding to task node 01 writes its result data to logical output data cache 01; once task node 00 receives the message that task node 01 has completed its task, it reads the result data in logical output data cache 01, performs its own task, and writes the result of that task to logical output data cache 00. Since logical output data cache 00 and logical output data cache 01 are a shared memory resource, the result data originally written by task node 01 into that shared memory is overwritten by the result data produced by task node 00.
  • However, in the task topology graph, memory sharing between task nodes may lead to sharing conflicts during later data processing. For example, in the task topology graph of Fig. 1, suppose the logical output data cache 01 (or 00) that task node 00 and task node 01 share modifiably is also shared unmodifiably by another task node (not shown). If task node 00 modifies the result data that task node 01 stored in logical output data cache 01 before that other task node has executed its task, then the other task node, which was supposed to use task node 01's result data in logical output data cache 01 as its input, can only use the data that task node 00 has overwritten into logical output data cache 01. Its result data will therefore be wrong, or its specific task cannot be performed at all. This is what constitutes a sharing conflict.
  • The above is only one hypothetical sharing conflict scenario; in practice, many more specific sharing conflict situations can arise.
  • the present disclosure provides a memory sharing decision system 100.
  • the memory sharing decision system 100 includes an original sharing relationship tree generation component 110, a sharing conflict identification component 120, a sharing conflict elimination component 130 and a shared tag rewriting component 140.
  • The original shared relationship tree generation component 110 obtains all node attributes of every task node in the task topology graph that carries a shared label attribute and, based on the sharing relationships between those task nodes, forms multiple shared relationship trees; these shared relationship trees constitute a shared relationship forest. Each shared relationship tree usually has multiple shared relationship branches.
  • Specifically, the logical output data cache of each task node that is in a sharing relationship constitutes a node of the shared relationship tree, and each sharing relationship pair forms a connection edge between nodes of the tree.
  • A shared connection edge usually expresses how the task node of the downstream logical output data cache attached to that edge operates on the data held in the shared upstream and downstream logical output data caches, namely whether it leaves the data unmodified or modifies it. Therefore, in the present disclosure, the shared connection edge of a sharing relationship pair corresponds to the task node of the downstream logical output data cache.
  • For convenience of description, the upstream shared connection edge to which a downstream logical output data cache is connected is usually called the shared connection edge to which that downstream logical output cache belongs; for example, the shared connection edge between logical output data cache 00 and logical output data cache 01 is called "the shared connection edge to which output data cache 00 belongs", and so on.
  • Accordingly, each connection edge carries a label symbol indicating whether the connection relationship is modifiable sharing or unmodifiable sharing; for example, a black triangle indicates that the shared connection edge is modifiable sharing, and a black square indicates unmodifiable sharing.
  • Since not all logical output data caches are in sharing relationships, the shared nodes formed by the logical output data caches that do share with one another, together with the connection edges formed by those sharing relationships, make up individual shared relationship trees, such as shared relationship tree 01, shared relationship tree 00, ..., shared relationship tree n in Fig. 1.
  • These shared relationship trees taken together form a shared relationship forest.
  • In other words, the original shared relationship tree generation component 110 transforms the task topology graph from the task domain into a shared relation domain.
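As an illustration of this task-domain-to-shared-relation-domain transformation, here is a minimal sketch under the TaskNode assumption above (not the disclosure's implementation): every shared label contributes one edge between two logical output caches, and each connected component of those edges is one shared relationship tree of the forest.

```python
# Sketch: group logical output caches into shared relationship trees (the forest).
from collections import defaultdict

def build_shared_forest(nodes):
    edges = []                      # (downstream cache, upstream cache, kind)
    adjacency = defaultdict(set)
    for n in nodes:
        if n.share_kind is not ShareKind.NONE:
            edges.append((n.name, n.share_with, n.share_kind))
            adjacency[n.name].add(n.share_with)
            adjacency[n.share_with].add(n.name)

    seen, trees = set(), []
    for cache in adjacency:         # one connected component = one tree
        if cache in seen:
            continue
        stack, component = [cache], set()
        while stack:
            c = stack.pop()
            if c in component:
                continue
            component.add(c)
            stack.extend(adjacency[c] - component)
        seen |= component
        trees.append({"buffers": component,
                      "edges": [e for e in edges if e[0] in component]})
    return trees                    # the shared relationship forest
```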
  • Subsequently, the sharing conflict identification component 120 traverses each shared relationship tree to determine whether a sharing conflict exists in it. Fig. 2 shows a schematic diagram of a first operation example of the in-situ shared memory resource decision-making system of the data processing system according to the present disclosure. As shown in Fig. 2, a modifiable sharing relationship exists between task node 04 and task node 03, and a modifiable sharing relationship also exists between task node 00 and task node 03; the original shared relationship tree obtained after transformation by the original shared relationship tree generation component 110 is shown on the right side of Fig. 2.
  • This situation is identified by the sharing conflict identification component 120 as a sharing conflict, because task node 04 and task node 00 are both direct downstream task nodes of task node 03 and both use the data in the output data cache of task node 04. Accordingly, the original shared relationship tree shown in Fig. 2 forms two shared branches, namely the branch formed by logical output caches 04 and 05 and the branch formed by logical output caches 00 and 02.
  • In the original shared relationship tree of Fig. 2, the shared connection edge between logical output data cache 03 and logical output data cache 04 is shown as modifiable sharing, and the shared connection edge between logical output data cache 03 and logical output data cache 00 is also shown as modifiable sharing. Whichever of task node 04 and task node 00 executes its computing task first will therefore modify the data that task node 03 wrote into the shared output data cache, which changes the input data of the other node's computing task, so that the other computing task may fail to execute or may produce an incorrect output result. This shared relationship tree therefore contains a sharing conflict.
  • To this end, the sharing conflict identification component 120 identifies an original shared relationship tree in which one logical output data cache branches out into at least two shared connection edges, at least one of which is a modifiable shared connection edge, as a conflict sharing relationship tree containing conflicting shared connection edges.
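A minimal check for this branching case might look as follows; it is a sketch that reuses the edge format of the build_shared_forest() example above, not the disclosure's algorithm. A tree is flagged when some upstream cache fans out into two or more shared edges of which at least one is modifiable.

```python
# Sketch: flag a tree whose upstream cache fans out into >= 2 shared edges,
# at least one of them modifiable (edge format as in build_shared_forest above).
from collections import defaultdict

def has_branching_conflict(tree):
    fanout = defaultdict(list)
    for downstream, upstream, kind in tree["edges"]:
        fanout[upstream].append(kind)
    return any(len(kinds) >= 2 and ShareKind.MUTABLE in kinds
               for kinds in fanout.values())
```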
  • After the sharing conflict identification component 120 identifies the shared relationship tree of Fig. 2 as a conflict sharing relationship tree containing conflicting shared connection edges, the sharing conflict elimination component 130 eliminates one of each pair of mutually conflicting shared connection edges in a manner that retains the largest amount of shared resources, for example by disconnecting either the shared connection edge between logical output data caches 03 and 04 or the shared connection edge between logical output data caches 03 and 00, thereby retaining the other edge of the conflicting pair.
  • In Fig. 2, a disconnected shared connection edge is drawn as a dotted line. If three or more logical output data caches have modifiable shared connection edges with logical output data cache 03, the other modifiable shared connection edges are disconnected until only one modifiable shared connection edge remains. This way of eliminating sharing conflicts is only one particular approach.
  • By having the sharing conflict elimination component 130 eliminate one of each pair of mutually conflicting shared connection edges, that is, disconnect at least one modifiable shared connection edge, the sharing conflicts in the tree identified as a conflict sharing relationship tree are removed, yielding the modified shared relationship tree shown at the lower right of Fig. 2. Although the modified tree in Fig. 2 disconnects the shared connection edge between logical output data caches 03 and 00, the sharing conflict elimination component 130 could equally handle the conflict of Fig. 2 by disconnecting the shared connection edge between logical output data caches 03 and 04 and retaining the edge between logical output data caches 03 and 00.
  • Finally, the shared label rewriting component 140 correspondingly modifies the shared labels of the corresponding task nodes in the original task topology graph based on the shared connection edges contained in the shared relationship forest formed by the modified shared relationship trees. Specifically, for each disconnected shared connection edge, the shared label rewriting component 140 changes the logical value of the shared label of the corresponding task node in the original task topology graph from the original "1" to "0", meaning that the corresponding logical output data cache of that task node no longer has a sharing relationship with any other logical output data cache.
  • For example, corresponding to Fig. 2, the shared label of task node 00 in the original task topology graph, which indicated that its logical output data cache 00 shared memory with the logical output data cache 03 of task node 03, is rewritten from the original "1" to "0".
  • After being rewritten in this way, the modified task topology graph correspondingly no longer contains the sharing conflict, so memory sharing can be realized effectively, the use efficiency of memory resources is improved, and the data processing efficiency of the data processing system is improved at the same time.
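The label rewrite step can be sketched as follows, again under the assumptions of the earlier sketches (hypothetical names, not the disclosure's code): each disconnected edge maps back to the task node of its downstream cache, whose shared label is reset so the cache no longer shares memory.

```python
# Sketch: map every disconnected edge back to its downstream task node and
# reset that node's shared label (logical "1" -> "0").
def rewrite_shared_labels(nodes, disconnected_edges):
    broken = {downstream for downstream, _upstream, _kind in disconnected_edges}
    for n in nodes:
        if n.name in broken:
            n.share_with = None
            n.share_kind = ShareKind.NONE
    return nodes
```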
  • FIG. 2 only shows the structure of a shared relationship tree with a relatively small number of shares.
  • FIG. 3 shows a schematic diagram of an embodiment in which a sharing relationship tree with a sharing conflict exists and the sharing conflict is eliminated.
  • As shown in Fig. 3, the original relationship tree contains two shared branches branching from logical output cache 00, namely shared branch 1 and shared branch 2. Each shared branch contains at least one modifiable shared connection edge, for example the shared connection edge to which logical output data cache 12 belongs and the shared connection edge to which logical output data cache 24 belongs.
  • In this case, because both branches share the same logical output data cache, a sharing conflict necessarily exists regardless of the execution timing of the task nodes corresponding to these two shared connection edges: whichever task node corresponding to a modifiable shared connection edge executes first will inevitably modify the data in the shared logical output data cache, so the task node of the other modifiable shared connection edge, executing later, will obtain incorrect data. For this reason, as shown in Fig. 3, the shared connection edge to which logical output data cache 24 belongs in shared branch 2 of the new shared relationship tree is disconnected (drawn as a dotted line), thereby eliminating the sharing conflict. Although the new shared relationship tree in Fig. 3 still shows shared branch 2, that branch has in fact been split into two parts, and the sharing relationship formed by logical output data caches 22 and 23 becomes an independent new shared relationship tree; it is kept inside shared branch 2 in Fig. 3 only to illustrate the process of disconnecting the shared connection edge.
  • FIG. 4 shows a schematic diagram of a second operation example of the in-situ shared memory resource decision-making system of the data processing system according to the present disclosure.
  • The second operation example shown in Fig. 4 differs from the first operation example shown in Fig. 2 in that, in the second example, the sharing relationship in the original task topology graph between the logical output data cache 00 of task node 00 and the logical output data cache 03 of task node 03 is an unmodifiable sharing relationship. In the original shared relationship tree, therefore, the shared connection edge between logical output data cache 00 and logical output data cache 03 is likewise an unmodifiable shared connection edge.
  • In this case, because the shared connection edge between logical output data cache 03 and logical output data cache 04 is a modifiable shared connection edge, when there is no ordering relationship between the execution timing of task node 04 (corresponding to the shared connection edge to which logical output data cache 04 belongs) and the execution timing of task node 00 (corresponding to the shared connection edge to which logical output data cache 00 belongs), it is possible that within one batch of data processing the execution of task node 04 precedes the execution of task node 00. The downstream task nodes of task node 00 would then fail to obtain correct input data when using logical output data cache 00, so that all subsequent task nodes on the data processing path of task node 00 receive wrong input data and produce wrong output results, or cannot perform data processing at all.
  • Therefore, when the execution timing of task node 04 of logical output data cache 04 is unrelated to the execution timing of task node 00 of logical output data cache 00, the original shared relationship tree shown in Fig. 4 is a potential conflict sharing relationship tree, i.e. a sharing relationship that needs to be eliminated. To this end, the sharing conflict identification component 120 identifies an original shared relationship tree in which a modifiable shared connection edge and an unmodifiable shared connection edge are connected in parallel to the same logical output data cache, and in which the execution timing of the task node corresponding to the parallel modifiable shared connection edge is unrelated to the execution timing of the task node corresponding to the unmodifiable shared connection edge, as a conflict sharing relationship tree containing conflicting shared connection edges.
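A sketch of this second kind of check, under the same assumptions as the earlier examples: `order` is a hypothetical map from task name to a static execution index, with `None` meaning the scheduler gives no ordering guarantee; a parallel modifiable/unmodifiable pair on one cache is flagged when the modifiable consumer is unrelated to, or not later than, the unmodifiable one.

```python
# Sketch: second-example check. `order` maps task name -> static execution index,
# None meaning the scheduler gives no ordering guarantee between the two tasks.
from collections import defaultdict

def parallel_edge_conflict(tree, order):
    fanout = defaultdict(list)
    for downstream, upstream, kind in tree["edges"]:
        fanout[upstream].append((downstream, kind))
    for consumers in fanout.values():
        mutable = [d for d, k in consumers if k is ShareKind.MUTABLE]
        immutable = [d for d, k in consumers if k is ShareKind.IMMUTABLE]
        for m in mutable:
            for i in immutable:
                unrelated = order.get(m) is None or order.get(i) is None
                if unrelated or order[m] <= order[i]:
                    return True     # the modifiable consumer may overwrite too early
    return False
```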
  • the shared conflict elimination component 130 eliminates one of a pair of shared connection edges that conflict with each other, thereby obtaining a modified shared relationship tree as shown in the lower right side of FIG. 4.
  • Further, in the situation of Fig. 4, if the execution timing contained in the full attributes of task node 04 (corresponding to the shared connection edge to which logical output data cache 04 belongs) precedes the execution timing contained in the full attributes of task node 00 (corresponding to the shared connection edge to which logical output data cache 00 belongs), the sharing conflict identification component 120 directly identifies this as a sharing conflict.
  • Likewise, in the situation of Fig. 4, if the execution timing contained in the full attributes of task node 04 precedes the execution timing contained in the full attributes of task node 02 (corresponding to the shared connection edge to which logical output data cache 02 belongs), the sharing conflict identification component 120 directly identifies this as a sharing conflict.
  • FIG. 5 shows a schematic diagram of another embodiment in which a sharing relationship tree with a sharing conflict exists and the sharing conflict is eliminated.
  • As shown in Fig. 5, the original relationship tree contains multiple shared branches branching from logical output cache 00: besides shared branch 1 and shared branch 2, logical output data cache 01 and logical output data cache 22 also form shared branches.
  • Shared branch 2 contains at least one modifiable shared connection edge, for example the shared connection edge to which logical output data cache 22 belongs.
  • The logical output data cache 07 of task node 07 does not belong to the shared relationship tree, but task node 07 is a downstream task node of logical output data cache 12, so during execution its corresponding executive body uses the data in the logical output data cache shared throughout the shared relationship tree.
  • However, when there is no ordering relationship between the execution timing of task node 07 in Fig. 5 and the execution timing of task node 22 (corresponding to the shared connection edge to which logical output data cache 22 belongs), it is possible that within one batch of data processing the execution of task node 22 precedes the execution of task node 07, which would cause task node 07 to fail to obtain correct input data when using logical output data cache 12, so that all subsequent task nodes on the data processing path of task node 07 receive wrong input data and produce wrong output results, or cannot perform data processing.
  • Therefore, when the execution timing of task node 07 is unrelated to the execution timing of task node 22 of logical output data cache 22, the original shared relationship tree shown in Fig. 5 is a potential conflict sharing relationship tree, i.e. a sharing relationship that needs to be eliminated.
  • Similarly, for all upstream shared logical output data caches of the shared connection edge to which logical output data cache 22 belongs, for example logical output data cache 21, the logical output data cache 08 of the downstream task node 08 does not belong to the shared relationship tree, but the executive body corresponding to task node 08 uses, during execution, the data in a logical output data cache shared throughout the shared relationship tree (for example, logical output data cache 21).
  • When there is no ordering relationship between the execution timing of task node 08 in Fig. 5 and the execution timing of task node 22 (corresponding to the shared connection edge to which logical output data cache 22 belongs), it is possible that within one batch of data processing the execution of task node 22 precedes the execution of task node 08, which would cause task node 08 to fail to obtain correct input data when using logical output data cache 12, so that all subsequent task nodes on the data processing path of task node 08 receive wrong input data and produce wrong output results, or cannot perform data processing. The same conflict therefore arises whenever the execution timing of task node 08 is unrelated to the execution timing of task node 22 of logical output data cache 22.
  • Consequently, for any modifiable shared connection edge in a shared relationship tree, it is necessary to determine, one by one, the timing at which all upstream logical output data caches on the shared branch where that modifiable edge is located, and all logical output data caches on the other shared branches, are used by task nodes outside the shared relationship tree. If the timing at which a task node outside the shared relationship tree uses a shared logical output data cache of the tree is unrelated to, or later than, the execution timing of the task node corresponding to the modifiable connection edge in the tree, the sharing conflict identification component 120 determines the shared relationship tree having this state to be a shared relationship tree containing a sharing conflict. In this case, the sharing conflict elimination component 130 usually eliminates the sharing conflict by directly disconnecting the modifiable connection edge.
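This check over readers outside the tree might look as follows; it is a sketch, not the disclosure's algorithm. `external_readers` is an assumed map from each shared cache to the tasks outside the tree that consume it, and `order` is the same hypothetical execution-index map as in the earlier sketches.

```python
# Sketch: flag modifiable edges whose writer is not guaranteed to run after every
# task outside the tree that still reads one of the tree's shared caches.
def external_reader_conflicts(tree, external_readers, order):
    conflicting_edges = []
    for edge in tree["edges"]:
        downstream, upstream, kind = edge
        if kind is not ShareKind.MUTABLE:
            continue
        writer = downstream                       # this task overwrites the shared memory
        readers = [r for cache in tree["buffers"] - {downstream}
                   for r in external_readers.get(cache, ())]
        for reader in readers:
            unrelated = order.get(writer) is None or order.get(reader) is None
            if unrelated or order[writer] <= order[reader]:
                conflicting_edges.append(edge)    # candidate edge to disconnect
                break
    return conflicting_edges
```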
  • FIG. 6 shows a schematic diagram of a third operation example of the in-situ shared decision-making system for memory resources of the data processing system according to the present disclosure.
  • In the third operation example, the sharing relationship in the original task topology graph between the logical output data cache 00 of task node 00 and the logical output data cache 03 of task node 03 is an unmodifiable sharing relationship, and an unmodifiable sharing relationship also exists between the logical output data cache 00 of task node 00 and the logical output data cache of its downstream task node (for example, the logical output data cache 02 of task node 02).
  • In this case, because the shared connection edge between logical output data cache 03 and logical output data cache 04 is a modifiable shared connection edge, when the execution timing of task node 04 of logical output data cache 04 precedes the execution timing of any downstream task node that is in a sharing relationship with the logical output data cache 00 of task node 00 (for example, task node 02), or when the execution timing of task node 04 is unrelated to the execution timing of any such downstream task node (for example, task node 02), it is possible that within one batch of data processing the execution of task node 04 of logical output data cache 04 precedes the execution of task node 02 of logical output data cache 02. The downstream task nodes of task node 02 would then fail to obtain correct input data when using logical output data cache 02, so that all subsequent task nodes on the data processing path of task node 02 receive wrong input data and produce wrong output results, or cannot perform data processing.
  • Therefore, when the execution timing of task node 04 of logical output data cache 04 precedes the execution timing of task node 02 of logical output data cache 02, the original shared relationship tree shown in Fig. 6 is a potential conflict sharing relationship tree, i.e. a sharing relationship that needs to be eliminated.
  • To this end, the sharing conflict identification component 120 identifies an original shared relationship tree in which one modifiable shared connection edge is connected in parallel, to the same logical output data cache, with a plurality of serially connected unmodifiable shared connection edges, as a conflict sharing relationship tree containing conflicting shared connection edges, where the execution timing of the task node corresponding to the parallel modifiable shared connection edge is either unrelated to, or earlier than, the execution timing of the task node corresponding to one of the plurality of serially connected unmodifiable shared connection edges.
  • Subsequently, the sharing conflict elimination component 130 eliminates one of the pair of mutually conflicting shared connection edges, thereby obtaining the modified shared relationship tree shown at the lower right of Fig. 6.
  • Although Fig. 6 shows task node 02 of logical output data cache 02 as the direct downstream task node of task node 00, the task node whose execution timing is later than task node 04 may alternatively be a downstream task node of task node 02, provided that this node is in an unmodifiable sharing relationship with task node 02.
  • The above describes various situations containing conflicting sharing relationships. To achieve as much memory sharing as possible, save memory consumption, and maximize the memory utility of the device, the various sharing conflicts must be considered together so that a sharing disconnection scheme can be chosen which disconnects the fewest shared connection edges while still eliminating all sharing conflicts in the shared relationship tree. Specifically, the disconnection mode selection unit 131 in the sharing conflict elimination component 130, when faced with many conflict situations, chooses to disconnect only one edge of a pair of mutually conflicting shared connection edges if doing so eliminates all sharing conflicts; likewise, if two pairs of mutually conflicting shared connection edges must be broken to eliminate all conflicts, it chooses to disconnect two shared connection edges, and so on, thereby selecting the conflict elimination scheme with the smallest loss of sharing and maximizing the sharing scope.
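A brute-force sketch of this selection step, under the same assumptions and reusing the conflict checks sketched earlier (it is an illustration, not the unit 131's actual procedure): try ever larger sets of modifiable edges and return the first set whose removal clears every conflict, which is therefore a minimum-size cut.

```python
# Sketch: pick a minimum-size set of modifiable edges whose removal clears every
# conflict, reusing has_branching_conflict() and external_reader_conflicts().
from itertools import combinations

def choose_disconnection(tree, order, external_readers):
    mutable_edges = [e for e in tree["edges"] if e[2] is ShareKind.MUTABLE]
    for size in range(len(mutable_edges) + 1):
        for cut in combinations(mutable_edges, size):
            pruned = {"buffers": tree["buffers"],
                      "edges": [e for e in tree["edges"] if e not in cut]}
            if (not has_branching_conflict(pruned)
                    and not external_reader_conflicts(pruned, external_readers, order)):
                return set(cut)        # first success is a smallest cut
    return set(mutable_edges)
```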
  • Although the concept of the present disclosure has been described in detail above with reference to the drawings, it clearly also encompasses, in another aspect, a memory resource in-situ sharing decision-making method. Specifically, the memory resource in-situ sharing decision-making method for a data processing system includes: generating, by the original shared relationship tree generation component 110 during generation of the task topology graph of the data processing system, an original shared relationship tree based on the sharing relationship, specified by the shared label of a current task node, between a logical output data cache of that current task node and a logical output data cache of its upstream task node, the tree including the logical output data cache of the current task node, the logical output data cache of the upstream task node, and the shared connection edge connecting the two to represent the sharing relationship between them, where the shared connection edges include modifiable shared connection edges and unmodifiable shared connection edges as specified by the shared labels.
  • The method further includes identifying, by the sharing conflict identification component 120, an original shared relationship tree in which at least two shared connection edges are connected in parallel to one logical output data cache and at least one of those shared connection edges is modifiable, as a conflict sharing relationship tree containing conflicting shared connection edges.
  • The method further includes disconnecting, by the sharing conflict elimination component 130, at least one modifiable shared connection edge, thereby eliminating the sharing conflicts in the tree identified as a conflict sharing relationship tree.
  • Finally, the shared label rewriting component 140 modifies, based on the information contained in the shared connection edges of the new shared relationship tree, the corresponding shared labels of the original task topology graph to form a new task topology graph, in which the shared labels of some task nodes are reset from the initial value "1" to "0".
  • the purpose of the present disclosure can also be realized by running a program or a group of programs on any computing device.
  • the computing device may be a well-known general-purpose device. Therefore, the purpose of the present disclosure can also be achieved only by providing a program product containing program code for implementing the method or device. That is, such a program product also constitutes the present disclosure, and a storage medium storing such a program product also constitutes the present disclosure.
  • the storage medium may be any well-known storage medium or any storage medium developed in the future.
  • each component or each step can be decomposed and/or recombined.
  • These decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.
  • the steps of executing the above-mentioned series of processing can naturally be executed in chronological order in the order of description, but they do not necessarily need to be executed in chronological order. Some steps can be performed in parallel or independently of each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A memory resource in-situ sharing decision-making system for a data processing system, comprising: an original shared relationship tree generation component (110) that, during generation of the task topology graph of the data processing system, generates one or more shared relationship trees (00, 01, ..., n) based on the shared labels (00, ..., 04, 05) of all task nodes (00, 01, 02, 03, 04, 05); a sharing conflict identification component (120) for identifying a shared relationship tree in which a first sharing conflict state or a second sharing conflict state exists as a conflict sharing relationship tree containing conflicting shared connection edges; and a sharing conflict elimination component (130) for eliminating the sharing conflicts in the tree identified as a conflict sharing relationship tree by disconnecting the at least one modifiable shared connection edge in the first sharing conflict state or by disconnecting one or more mutually conflicting shared connection edges in the second sharing conflict state.

Description

Memory resource in-situ sharing decision-making system and method thereof
Technical Field
The present disclosure relates to a memory resource allocation technology. More specifically, the present disclosure relates to a memory resource in-situ sharing decision-making system and method aimed at saving memory resources.
Background Art
The static distributed deep learning system proposed by the applicant of the present disclosure has been attracting more and more attention in the field of deep learning.
Starting from the overall business processing, the static distributed learning system combines the overall computing resources of the static distributed system with the topological relationships among those computing resources, statically arranges the data processing tasks to be handled over the whole system, and statically composes a business processing network out of many executive bodies, forming interrelated data processing paths. Executive bodies form data production and consumption relationships with one another according to their upstream and downstream positions in the network, so that input data is processed in a pipelined fashion. In particular, the executive bodies rely on messages to coordinate progress: an executive body receives messages from its upstream producers and downstream consumers, and when its trigger condition (controlled by a finite state automaton) is met, it issues an instruction to the coprocessor for execution; after execution finishes, it sends a message to its downstream consumers informing them that new data is available for consumption. Through this message communication between adjacent executive bodies, the data production and consumption relationships are realized, which eliminates the need for real-time central scheduling of data output and decentralizes the data processing.
However, under a statically arranged system, memory resources must be allocated in advance for all statically arranged execution units (for example, executive bodies). How to reasonably pre-allocate memory resources for each execution unit in a static distributed environment, so as to improve memory utilization while guaranteeing the time efficiency of data processing when the amount of memory is limited, therefore becomes a difficult problem that has to be faced. To improve memory utilization, memory sharing strategies have been proposed, which make it possible for a data processing system to make full use of its memory resources. In many cases, however, memory sharing arrangements cause some data processing stages to hold each other back, which affects the overall data processing efficiency of the system.
Technical Problem
Therefore, how to increase the utilization of memory resources through memory sharing while ensuring that memory sharing does not impair the overall data processing efficiency of the system has become an urgent problem to be solved.
技术解决方案
本公开的目的在于解决至少上述问题,具体而言,就是尽可能减少内存资源的使用。一种解决方式是在不影响执行体各自的功能的情况下,使得更多的执行体之间彼此共享内存资源。为此,本公开提供了一种用于数据处理系统的内存资源原地共享决策系统,包括:原始共享关系树生成组件,在数据处理系统的任务拓扑图生成过程中,基于所有任务节点的共享标签生成一个或多个共享关系树,每株共享关系树的每个共享关系对包括一个当前任务节点的逻辑输出数据缓存、上游任务节点的逻辑输出数据缓存以及用于连接两者以表示两者之间共享关系的共享连接边,从而每株共享关系树通过所有共享连接边使得所有逻辑输出数据缓存能够共享同一个指定输出数据缓存,并且所述共享连接边通过其对应当前任务节点的共享标签指明其为可修改共享连接边或不可修改共享连接边;共享冲突识别组件,用于将其中存在第一共享冲突状态或第二共享冲突状态的一株共享关系树识别为含有冲突共享连接边的冲突共享关系树,所述第一共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其所在的共享分支中的上游逻辑输出数据缓存或其他共享分支中的任何逻辑输出数据缓存的位于所述共享关系树之外的下游任务节点的执行时序或与其不相关,以及所述第二共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其他共享分支中的任何共享连接边所对应的任务节点的执行时序或与其不相关;以及共享冲突消除组件,用于通过在第一共享冲突状态下断开所述至少一个可修改共享连接边或在第二共享冲突状态下断开彼此之间存在共享冲突的一个或多个共享连接边,从而消除被识别为冲突共享关系树中的共享冲突。
根据本公开的用于数据处理系统的内存资源原地共享决策系统,其中所述共享冲突消除组件还包括断开方式选择单元,计算每种能够消除共享冲突的共享断开方式所断开存在共享冲突的共享连接边数量,从而断开存在共享冲突的共享连接边的数量最小的消除共享冲突的共享断开方式得以被选择。
根据本公开的用于数据处理系统的内存资源原地共享决策系统,其中所所述共享冲突识别组件直接将所述第二共享冲突状态下两个共享分支中都含有可修改共享连接边的共享关系树识别为含有冲突共享连接边的冲突共享关系树。
根据本公开的用于数据处理系统的内存资源原地共享决策系统,其中还包括:共享标签重写组件,基于消除共享冲突后的共享关系树,将被断开的共享连接边所对应的原始任务拓扑图中对应任务节点的共享标签设置为无效。
根据本公开的另一个方面,提供了一种用于数据处理系统的内存资源原地共享决策方法,包括:通过原始共享关系树生成组件在数据处理系统的任务拓扑图生成过程中,基于所有任务节点的共享标签生成一个或多个共享关系树,每株共享关系树的每个共享关系对包括一个当前任务节点的逻辑输出数据缓存、上游任务节点的逻辑输出数据缓存以及用于连接两者以表示两者之间共享关系的共享连接边,从而每株共享关系树通过所有共享连接边使得所有逻辑输出数据缓存能够共享同一个指定输出数据缓存,并且所述共享连接边通过其对应当前任务节点的共享标签指明其为可修改共享连接边或不可修改共享连接边;通过共享冲突识别组件用于将其中存在第一共享冲突状态或第二共享冲突状态的一株共享关系树识别为含有冲突共享连接边的冲突共享关系树,所述第一共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其所在的共享分支中的上游逻辑输出数据缓存或其他共享分支中的任何逻辑输出数据缓存的位于所述共享关系树之外的下游任务节点的执行时序或与其不相关,以及所述第二共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其他共享分支中的任何共享连接边所对应的任务节点的执行时序或与其不相关;以及通过共享冲突消除组件在第一共享冲突状态下断开所述至少一个可修改共享连接边或在第二共享冲突状态下断开彼此之间存在共享冲突的一个或多个共享连接边,从而消除被识别为冲突共享关系树中的共享冲突。
根据本公开的用于数据处理系统的内存资源原地共享决策方法,还包括:通过所述共享冲突消除组件的断开方式选择单元计算每种能够消除共享冲突的共享断开方式所断开存在共享冲突的共享连接边数量,从而选择断开存在共享冲突的共享连接边的数量最小的消除共享冲突的共享断开方式。
根据本公开的用于数据处理系统的内存资源原地共享决策方法,其中所述共享冲突识别组件直接将所述第二共享冲突状态下两个共享分支中都含有可修改共享连接边的共享关系树识别为含有冲突共享连接边的冲突共享关系树。
根据本公开的用于数据处理系统的内存资源原地共享决策方法,其中还包括:通过共享标签重写组件基于消除共享冲突后的新共享关系树,将被断开的共享连接边所对应的原始任务拓扑图中对应任务节点的共享标签设置为无效。
Beneficial Effects
With the memory resource in-situ sharing decision-making system for a data processing system of the present disclosure, the data processing system can eliminate in advance, based on the various attributes of its nodes, the various sharing conflicts that may exist between task nodes. This removes both the errors that the data processing system would otherwise produce during actual data processing and the need to add mechanisms in which task nodes wait for one another, so that memory resource sharing is used more reasonably and the running fluency of the streaming data processing system is improved.
Other advantages, objectives and features of the present invention will become apparent in part from the following description, and in part will be understood by those skilled in the art through study and practice of the present invention.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the principle structure of the memory resource in-situ sharing decision-making system of the data processing system according to the present disclosure.
Fig. 2 is a schematic diagram of a first operation example of the memory resource in-situ sharing decision-making system of the data processing system according to the present disclosure.
Fig. 3 is a schematic diagram of an embodiment in which a shared relationship tree containing a sharing conflict has the sharing conflict eliminated.
Fig. 4 is a schematic diagram of a second operation example of the memory resource in-situ sharing decision-making system of the data processing system according to the present disclosure.
Fig. 5 is a schematic diagram of another embodiment in which a shared relationship tree containing a sharing conflict has the sharing conflict eliminated.
Fig. 6 is a schematic diagram of a third operation example of the memory resource in-situ sharing decision-making system of the data processing system according to the present disclosure.
本发明的实施方式
下面结合实施例和附图对本发明做进一步的详细说明,以令本领域技术人员参照说明书文字能够据以实施。
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
在本公开使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本开。在本公开和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本公开可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本公开范围的情况下,在下文中,两个可能设备之一可以被称为第一任务节点也可以被称为第二任务节点,类似地,两个可能设备的另一个可以被称为第二任务节点也可以被称为第一任务节点。取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。
为了使本领域技术人员更好地理解本公开,下面结合附图和具体实施方式对本公开作进一步详细说明。
图1所示的是根据本公开的数据处理系统的内存资源原地共享决策系统的原理结构示意图。如图1所示,如图1所示,数据处理系统在执行具体的作业任务时,会将所需完成的作业分解为一系列将要被执行的任务,并在进行作业分解的同时,基于所分解的任务之间的固有关系,如图1左侧所显示的生成任务关系拓扑图。具体而言,为了对数据进行连续的处理,需要将作业分解成为适合CPU或GPU的运算单元执行运算或其他操作的简单任务。具体而言,就是将作业分解成为彼此相关联的任务。具体而言,按照对作业任务要求的描述,对作业按照将要处理的过程,进行分层分解成多层神经网络结构。一个作业(Job)被分解成一系列互相依赖的任务(Task),这种依赖关系通常用有向无环图(Directed acyclic graph, DAG)来描述,每个节点表示一个任务,节点之间的连接线表示一个数据依赖关系(生产者和消费者关系)。在此不具体描述作业分解后任务关系图的情形。
为了实现计算任务的静态布置并形成去中心调度的流式数据处理系统,需要在生成任务拓扑图的每个任务节点同时,赋予每个节点执行对应任务所需的全部节点属性。所述全部节点属性包含了诸如指明节点所对应的任务所需的资源的资源属性以及触发任务执行的触发条件的条件属性等等。正是由于本公开的任务拓扑图中的每个节点包含了全部节点属性,因此其在随后创建执行体时立即自动具备了执行任务所有的资源和所有属性,处于完全配置状态,不需要在对具体数据执行具体任务时进行诸如对环境资源进行动态分配以及动态配置触发条件等等。对于基于本公开的任务拓扑图以及含有全部节点属性的节点所创建的每个执行体而言,其在对具体数据进行处理过程中,本身处于静态状态,变化的仅仅是输入数据的不同。所述资源属性中包含了每个任务节点的逻辑输出数据缓存所布置的计算设备,以及相邻任务节点之间是否存在输出数据缓存共享的情形。为了实现内存共享,在所形成的任务拓扑图的每个任务节点的逻辑输出数据缓存都被打上是是否为共享以及何种共享的标签。如图1所述,在任务拓扑图中,任务节点03、任务节点04以及任务节点05彼此之间存在共享标签04以及05;任务节点01、任务节点00以及任务节点02之间在存在共享标签00以及02。如图1所示,共享标签存在两种方式,即可修改共享标签和不可修改共享标签,其中可修改共享标签采用黑色三角形代表,而不可修改共享标签采用黑色正方形表示。如图1所示,在原始的任务拓扑图中,任务节点05的逻辑输出数据缓存05可与任务节点04的逻辑输出数据缓存04进行不可修改共享,这意味着,任务节点05对应的执行体在针对上游任务节点04的对应执行体的输出数据缓存04中的数据执行完任务并不会对输出数据缓存04(输出数据缓存05,因为两者同一)中的输出数据进行任何修改。同样,任务节点04的对应执行体也不会对任务节点05对应的执行体执行任务完成后输入的数据本身进行任何改变。因此,任务节点03、04、05可以共用一个内存单元,也就是输出数据缓存。 任务节点00的逻辑输出数据缓存00与任务节点01的逻辑输出数据缓存01之间属于可修改共享关系。具体而言,任务节点01对应的执行体在执行完具体操作任务之后,输出结果数据到逻辑输出数据缓存01,任务节点00获得任务节点01执行完具体任务的消息后,读取逻辑输出数据缓存01中的结果数据,并执行具体任务,并将执行具体任务的结果数据输出到逻辑输出数据缓存00中,由于逻辑输出数据缓存00与逻辑输出数据缓存01为共享内存资源,因此,共享内存资源内的原来被任务节点01所写入的结果数据被任务节点00所产生的结果数据所覆写。
但是,在任务拓扑图中,各种任务节点之间的内存共享可能导致在后期的数据处理过程中存在共享冲突的情形。例如,图1所示的任务拓扑图中,任务节点00与任务节点01所可修改共享的逻辑输出数据缓存01(或00)在被另一个任务节点(未示出)不可修改共享的情况下,如果在另一任务节点没有执行任务的情况下,任务节点00已经修改了任务节点01存储在逻辑输出数据缓存01中的结果数据,这样另一任务节点在执行任务时本来是要使用任务节点01存储在逻辑输出数据缓存01中的结果数据作为其输入数据,但是只能使用任务节点00所覆写在逻辑输出数据缓存01中结果数据作为其输入数据,这样会导致另一任务节点的结果数据是一个错误的结果数据,或者完全不能执行具体的任务。这样就导致了共享冲突。以上仅仅是一种假设共享冲突的情形。在实际情形下,会存在更多共享冲突的具体情形。
为了消除这种共享冲突以及类似情形的存在,本公开提供了一种内存共享决策系统100。如图1所示,内存共享决策系统100包括原始共享关系树生成组件110、共享冲突识别组件120、共享冲突消除组件130以及共享标签重写组件140。原始共享关系树生成系统110获取任务拓扑图中的所有具有共享标签属性的任务节点的全部节点属性,并基于任务节点之间的共享关系,形成多个共享关系树,这些共享关系树构成共享关系森林。每株共享关系树通常会具有多个共享关系分支。具体而言,每个处于共享关系中的任务节点的逻辑输出数据缓存构成共享关系树中的一个节点,每对共享关系形成共享关系树中的节点的连接边。通常共享连接边表达连接在该共享连接边上的下游逻辑输出数据缓存的任务节点对处于共享状态上游逻辑输出数据缓存和下游逻辑输出数据缓存内的数据的操作过程,即,是否会对该数据进行不修改操作还是进行修改操作。因此,在本公开中,一对共享连接关系中的共享连接边对应的是下游逻辑输出数据缓存的任务节点。为了描述方便,通常将下游逻辑输出数据缓存所连接的上游的共享连接边称为该下游逻辑输出缓存所属的共享连接边,例如,将逻辑输出数据缓存00与逻辑输出数据缓存01之间的共享连接边称为“输出数据缓存00所属的共享连接边”,以此类推。
因此,每个连接边上包含有表示这种连接关系是可修改共享或不可修改共享的标签符号,例如黑色三角形代表共享连接边处于可修改共享,而黑色方形代表不可修改共享。由于并不是所有逻辑输出数据缓存都存在共享关系,因此,彼此之间存在共享关系的逻辑输出数据缓存构成的共享节点以及共享关系形成的连接边构成了一株株共享关系树,例如图1中的共享关系树01、共享关系树00、….共享关系树n。这些共享关系树集合在一起形成了共享关系森林。换而言之,就是原始共享关系树生成组件110将任务拓扑图从任务域变换到一个共享关系域。
随后,共享冲突识别组件120遍历每一株共享关系树,确定在每一株共享关系树中是否存在共享冲突的情形。图2所示的是根据本公开的数据处理系统的内存资源原地共享决策系统的第一运行实例示意图。如图2所示,任务节点04与任务节点03之间存在可修改共享关系,而任务节点00与任务节点03之间也存在可修改共享关系,因此,在原始共享关系树生成组件110变换后的原始共享关系树如图2右侧所示。但是这种情况则会被共享冲突识别组件120为共享冲突情形,因为,任务节点04和任务节点00都是任务节点03的直接下游任务节点,都要使用任务节点04的输出数据缓存中的数据。因此,在图2所示的原始共享关系树种,形成了两个共享分支,即由逻辑输出缓存04和逻辑输出缓存05形成的共享分支和由逻辑输出缓存00和逻辑输出缓存02形成的共享分支。
在图2的原始共享关系树中,逻辑输出数据缓存03与逻辑输出数据缓存04之间的共享连接边显示为可修改共享,且逻辑输出数据缓存03与逻辑输出数据缓存00之间的共享连接边显示为可修改共享,因此,任务节点04和任务节点00之一先执行计算任务后都会修改共享输出数据缓存中任务节点03所写入的数据,因此将导致任务节点04和任务节点00中另一个在执行计算任务时的输入数据被改变,导致计算任务可能无法执行或产生的输出结果不正确。因此这种共享关系树是一种含有共享冲突的情形。为此,共享冲突识别组件120将一个逻辑输出数据缓存分支出至少两个以上的共享连接边并且其中至少一个共享连接边为可修改共享连接边的原始共享关系树识别为含有冲突共享连接边的冲突共享关系树。
在共享冲突识别组件120将图2所示的共享关系树识别为含有冲突共享连接边的冲突共享关系树之后,共享冲突消除组件130以保留最大共享资源的方式消除彼此存在共享冲突的一对共享连接边中的一个共享连接边,例如断开逻辑输出数据缓存03和04之间的共享连接边或逻辑输出数据缓存03和00之间的共享连接边,从而保留彼此冲突的共享连接边的另一个共享连接边。图2中对于断开的共享连接边采用虚线方式表示。如果有三个或三个以上的逻辑输出数据缓存与逻辑输出数据缓存03之间存在可修改共享连接边,则断开其他可修改共享连接边直到只剩下一个可修改共享连接边为止。这种消除共享冲突的方式仅仅是一种特殊的方式。通过共享冲突消除组件130消除彼此冲突的一对共享连接边之一,即,将至少一个可修改共享连接边断开,从而消除被识别为冲突共享关系树中的共享冲突,从而获得如图2的右下侧的修改后共享关系树。尽管图2中所示的修改后共享关系树断开了逻辑输出数据缓存03和00之间的共享连接边,但是共享冲突消除组件130对图2中共享冲突情形也可以断开逻辑输出数据缓存03和04之间的共享连接边而保留逻辑输出数据缓存03和00之间的共享连接边。
最后,共享标签重写组件140基于修改后的共享关系树构成的共享关系森林所包含的共享连接边,对应修改原始任务拓扑图中对应任务节点的共享标签。具体而言,就是共享标签重写组件140将断开共享连接边所对应的原始任务拓扑图中对应任务节点的共享标签的逻辑值从原来的“1”修改为“0”,即该对应的任务节点的对应逻辑输出数据缓存不与其他逻辑输出数据缓存存在共享关系。例如,对应在图2中,原始任务拓扑图中任务节点00的逻辑输出数据缓存00的表示与任务节点03的逻辑输出数据缓存03之间存在共享的共享标签被从原来的“1”重写为“0”。通过这种方式被重写之后,修改后的任务拓扑图也就相应地消除了存在共享冲突的情形,因此能够有效的实现内存共享并有效提高内存资源的使用效率,同时提高数据处理系统的数据处理效率。
图2仅仅显示了一种共享数量比较小的共享关系树的结构。为了更一般化,图3所示的是一种存在共享冲突的共享关系树并被消除共享冲突的实施例的示意图。如图3所示,原始关系树包含了分支于逻辑输出缓存00的两个共享分支,即共享分支1和共享分支2。在每个共享分支中都包含有至少一个可修改共享连接边,例如逻辑输出数据缓存12所属的共享连接边和逻辑输出数据缓存24所属的共享连接边。在这种情况下,由于两者都共享同一逻辑输出数据缓存,因此,无论这两个共享连接边各自所对应的任务节点之间的执行时序如何,彼此都一定存在共享冲突,因为先执行的可修改共享连接边所对应的任务节点必然修改逻辑输出数据缓存中的数据,因此必然导致后执行的另一个可修改共享连接边所对应的任务节点在执行时获得的数据不正确。为此,如图3所示,新共享关系树中的共享分支2中的逻辑输出数据缓存24所属的共享连接边被断开(用虚线表示),从而消除共享冲突。尽管图3中的新共享关系树中显示出共享分支2,其实共享分支2已经断开成两部分,但是由逻辑输出数据缓存22和23组成的共享关系已经成为独立的一株新的共享关系树。为了示意性显示,在图3中仍然保留在共享分支2中,以直观显示断开共享连接边的过程。
图4所示的是根据本公开的数据处理系统的内存资源原地共享决策系统的第二运行实例示意图。图4所示的第二运行实例与图2所示的第一运行实例的不同之处在于,第二运行实例中原始任务拓扑图中任务节点00的逻辑输出数据缓存00与任务节点03的逻辑输出数据缓存03之间的共享关系为不可修改共享关系。因此在原始共享关系树中,逻辑输出数据缓存00与逻辑输出数据缓存03之间的共享连接边也为不可修改共享连接边。在这种情况下,由于逻辑输出数据缓存03与逻辑输出数据缓存04之间的共享连接边为可修改共享连接边,因此当逻辑输出数据缓存04所属的共享连接边所对应的任务节点04的执行时序与逻辑输出数据缓存00所属的共享连接边所对应的任务节点00的执行时序之间不存在关联关系的情况下,就存在一个批次的数据执行过程中,逻辑输出数据缓存04所属的共享连接边所对应的任务节点04的执行先于逻辑输出数据缓存00所属的共享连接边所对应的任务节点00的执行的可能,这会导致逻辑输出数据缓存00所属的共享连接边所对应的任务节点00的下游任务节点在使用逻辑输出数据缓存00时不能获得正确的输入数据,导致任务节点00所在的数据处理路径的后续任务节点都将获得错误的输入数据从而获得错误的输出结果数据或者无法执行数据处理。因此,当逻辑输出数据缓存04的任务节点04的执行时序与逻辑输出数据缓存00的任务节点00的执行时序之间不存在关联关系的情况下,图4所示的原始共享关系树的情形是一种潜在的冲突共享关系树,是需要进行消除的共享关系,为此,共享冲突识别组件120将一个其中存在并联于同一逻辑输出数据缓存的一个可修改共享连接边和一个不可修改共享连接边的原始共享关系树识别为含有冲突共享连接边的冲突共享关系树,其中被并联的所述可修改共享连接边对应的任务节点的执行时序与所述不可修改共享连接边对应的任务节点的执行时序不相关。随后,共享冲突消除组件130消除彼此冲突的一对共享连接边之一,从而获得如图4的右下侧的修改后共享关系树。
进一步,当在图4所示的情形中,如果当逻辑输出数据缓存04所属的共享连接边所对应的任务节点04的完全属性所包含的执行时序先于逻辑输出数据缓存00所属的共享连接边所对应的任务节点00的完全属性所包含的执行时序时,共享冲突识别组件120直接将其识别为共享冲突情形。同样当在图4所示的情形中,如果当逻辑输出数据缓存04所属的共享连接边所对应的任务节点04的完全属性所包含的执行时序先于逻辑输出数据缓存02所属的共享连接边所对应的任务节点02的完全属性所包含的执行时序时,共享冲突识别组件120直接将其识别为共享冲突情形。
图5所示的是一种存在共享冲突的共享关系树并被消除共享冲突的另一个实施例的示意图。如图5所示,原始关系树包含了分支于逻辑输出缓存00的多个共享分支,除了共享分支1和共享分支2之外,逻辑输出数据缓存01以及逻辑输出数据缓存22也是共享分支。在共享分支2中包含有至少一个可修改共享连接边,例如逻辑输出数据缓存22所属的共享连接边。任务节点07的逻辑输出数据缓存07并不属于共享关系树,但是任务节点07属于逻辑输出数据缓存12的下游任务节点,因此其对应的执行体在执行过程中会使用整个共享关系树中所共享的逻辑输出数据缓存中的数据。但是当图5中的任务节点07的执行时序与逻辑输出数据缓存22所属的共享连接边所对应的任务节点22的执行时序之间不存在关联关系的情况下,就存在一个批次的数据执行过程中,逻辑输出数据缓存22所属的共享连接边所对应的任务节点22的执行先于任务节点07的执行的可能,这会导致任务节点07在使用逻辑输出数据缓存12时不能获得正确的输入数据,导致任务节点07所在的数据处理路径的后续任务节点都将获得错误的输入数据从而获得错误的输出结果数据或者无法执行数据处理。因此,当任务节点07的执行时序与逻辑输出数据缓存22的任务节点22的执行时序之间不存在关联关系的情况下,图5所示的原始共享关系树的情形是一种潜在的冲突共享关系树,是需要进行消除的共享关系。同样,对于逻辑输出数据缓存22所属的共享连接边的所有上游共享逻辑输出数据缓存,例如逻辑输出数据缓存21,其下游任务节点08的逻辑输出数据缓存08并不属于共享关系树,但其任务节点08对应的执行体在执行过程中会使用整个共享关系树中所共享的逻辑输出数据缓存(例如逻辑输出数据缓存21)中的数据。但是当图5中的任务节点08的执行时序与逻辑输出数据缓存22所属的共享连接边所对应的任务节点22的执行时序之间不存在关联关系的情况下,就存在一个批次的数据执行过程中,逻辑输出数据缓存22所属的共享连接边所对应的任务节点22的执行先于任务节点08的执行的可能,这会导致任务节点08在使用逻辑输出数据缓存12时不能获得正确的输入数据,导致任务节点08所在的数据处理路径的后续任务节点都将获得错误的输入数据从而获得错误的输出结果数据或者无法执行数据处理。因此,当任务节点08的执行时序与逻辑输出数据缓存22的任务节点22的执行时序之间不存在关联关系的情况下。因此,对于一个共享关系树中的任意一个可修改共享关系连接边,需要逐个确定可修改连接边所在的共享分支上的所有上游逻辑输出数据缓存以及其他共享分支上的所有逻辑输出数据缓存被共享关系树之外的其他任务节点使用的时序。如果共享关系树之外的其他任务节点使用共享关系树的共享逻辑输出数据缓存的时序与共享关系树中的可修改连接边所对应的任务节点的执行时序不相关联或更晚,则享冲突识别组件120将具有这种状态的共享关系树确定为含有共享冲突的共享关系树。对于这种情况,共享冲突消除组件130通常直接断开可修改连接边来消除共享冲突。
图6所示的是根据本公开的数据处理系统的内存资源原地共享决策系统的第三运行实例示意图。第三运行实例中原始任务拓扑图中任务节点00的逻辑输出数据缓存00与任务节点03的逻辑输出数据缓存03之间的共享关系为不可修改共享关系,并且任务节点00的逻辑输出数据缓存00与其下游任务节点的逻辑输出数据缓存(例如任务节点02的逻辑输出数据缓存02)之间也存在不可修改共享关系。在这种情况下,由于逻辑输出数据缓存03与逻辑输出数据缓存04之间的共享连接边为可修改共享连接边,因此当逻辑输出数据缓存04的任务节点04的执行时序先于与任务节点00的逻辑输出数据缓存00 处于共享关系的任何下游任务节点(例如任务节点02)的执行时序的情况下,或者当逻辑输出数据缓存04的任务节点04的执行时序与任务节点00的逻辑输出数据缓存00 处于共享关系的任何下游任务节点(例如任务节点02)的执行时序不相关联的情况下,就存在一个批次的数据执行过程中,逻辑输出数据缓存04的任务节点04的执行先于逻辑输出数据缓存02的任务节点02的执行的可能,这会导致逻辑输出数据缓存02的任务节点02的下游任务节点在使用逻辑输出数据缓存02时不能获得正确的输入数据,导致任务节点02所在的数据处理路径的后续任务节点都将获得错误的输入数据从而获得错误的输出结果数据或者无妨执行数据处理。因此,当逻辑输出数据缓存04的任务节点04的执行时序先于逻辑输出数据缓存02的任务节点02的执行时序的情况下,图6所示的原始共享关系树的情形是一种潜在的冲突共享关系树,是需要进行消除的共享关系,为此,共享冲突识别组件120将一个其中存在并联于同一逻辑输出数据缓存的一个可修改共享连接边和多个串联的不可修改共享连接边的原始共享关系树识别为含有冲突共享连接边的冲突共享关系树,其中被并联的所述可修改共享连接边对应的任务节点的执行时序与所述多个串联的不可修改共享连接边之一所对应的任务节点的执行时序不相关或者被并联的所述可修改共享连接边对应的任务节点的执行时序先于所述多个串联的不可修改共享连接边之一所对应的任务节点的执行时序。随后,共享冲突消除组件130消除彼此冲突的一对共享连接边之一,从而获得如图6的右下侧的修改后共享关系树。尽管图6中显示的是逻辑输出数据缓存02的任务节点02是任务节点00的直接下游任务节点,可选择地,执行时序晚于任务节点04的任务节点可以是任务节点02的下游任务节点,并且该节点与任务节点02处于不可修改共享关系。
以上对各种存在冲突共享关系情形的描述。但是为了尽可能事项更多的内存共享,节省内存消耗和使得设备的内存效用最大化,需要对存在各种彼此之间存在共享冲突进行综合考虑,从而能够选择一种断开共享连接边最少的但是又能够消除共享关系树中的所有共享冲突的共享断开方式。具体而言,如图所示,共享冲突消除组件130中的断开方式选择单元131,其在众多冲突情形下,如果只需要断开某一对彼此冲突的共享连接边之一可以消除所有共享冲突,则选择断开彼此冲突的共享连接边之一。同样,如果在众多冲突情形下,如果需要断开某两对彼此冲突的共享连接边以消除所有共享冲突,则选择断开两个共享连接边。以此类推,从而选择共享代价损失最小的但共享冲突消除方式来实现共享范围最大化。
尽管以上针对本公开的构思结合附图进行详细的描述,很显然,根据本公开的另一个方面,其包含了一种内存资源原地共享决策方法。具体而言,用于数据处理系统的内存资源原地共享决策方法,包括:通过原始共享关系树生成组件110在数据处理系统的任务拓扑图生成过程中,基于一个当前任务节点的共享标签所指定的所述当前任务节点的一个逻辑输出数据缓存与其上游任务节点的一个逻辑输出数据缓存之间所存在的共享关系,生成包括当前任务节点的逻辑输出数据缓存、上游任务节点的逻辑输出数据缓存以及用于连接两者以表示两者之间共享关系的共享连接边的原始共享关系树,其中所述共享连接边包括由共享标签指定的可修改共享连接边和和不可修改共享连接边;通过共享冲突识别组件120将一个逻辑输出数据缓存并联有至少两个以上的共享连接边并且其中至少一个共享连接边为可修改共享连接边的原始共享关系树识别为含有冲突共享连接边的冲突共享关系树;以及通过共享冲突消除组件130将至少一个可修改共享连接边断开,从而消除被识别为冲突共享关系树中的共享冲突。最后,通过共享标签冲重写组件140基于新的共享关系树中的共享连接边所包含的信息,修改原始任务拓扑图所对应共享标签,从而形成新的任务拓扑图,其中有些任务节点的共享标签被初始值“1”被重置为“0”。
采用本公开的用于数据处理系统的内存资源原地共享决策系统或方法,能够在数据处理系统基于节点所具有的各种属性预先消除各个任务节点之间可能存在的各种共享冲突的情形,从而消除了数据处理系统在实际执行数据处理过程中会产生的错误以及需要增加彼此等待处理机制的情形,从而能够更合理地利用内存资源共享和提高流式数据处理系统的运行流畅性。
以上结合具体实施例描述了本公开的基本原理,但是,需要指出的是,对本领域的普通技术人员而言,能够理解本公开的方法和装置的全部或者任何步骤或者部件,可以在任何计算装置(包括处理器、存储介质等)或者计算装置的网络中,以硬件、固件、软件或者它们的组合加以实现,这是本领域普通技术人员在阅读了本公开的说明的情况下运用他们的基本编程技能就能实现的。
因此,本公开的目的还可以通过在任何计算装置上运行一个程序或者一组程序来实现。所述计算装置可以是公知的通用装置。因此,本公开的目的也可以仅仅通过提供包含实现所述方法或者装置的程序代码的程序产品来实现。也就是说,这样的程序产品也构成本公开,并且存储有这样的程序产品的存储介质也构成本公开。显然,所述存储介质可以是任何公知的存储介质或者将来所开发出来的任何存储介质。
还需要指出的是,在本公开的装置和方法中,显然,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本公开的等效方案。并且,执行上述系列处理的步骤可以自然地按照说明的顺序按时间顺序执行,但是并不需要一定按照时间顺序执行。某些步骤可以并行或彼此独立地执行。
上述具体实施方式,并不构成对本公开保护范围的限制。本领域技术人员应该明白的是,取决于设计要求和其他因素,可以发生各种各样的修改、组合、子组合和替代。任何在本公开的精神和原则之内所作的修改、等同替换和改进等,均应包含在本公开保护范围之内。

Claims (8)

  1. 一种用于数据处理系统的内存资源原地共享决策系统,包括:
    原始共享关系树生成组件,在数据处理系统的任务拓扑图生成过程中,基于所有任务节点的共享标签生成一个或多个共享关系树,每株共享关系树的每个共享关系对包括一个当前任务节点的逻辑输出数据缓存、上游任务节点的逻辑输出数据缓存以及用于连接两者以表示两者之间共享关系的共享连接边,从而每株共享关系树通过所有共享连接边使得所有逻辑输出数据缓存能够共享同一个指定输出数据缓存,并且所述共享连接边通过其对应当前任务节点的共享标签指明其为可修改共享连接边或不可修改共享连接边;
    共享冲突识别组件,用于将其中存在第一共享冲突状态或第二共享冲突状态的一株共享关系树识别为含有冲突共享连接边的冲突共享关系树,所述第一共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其所在的共享分支中的上游逻辑输出数据缓存或其他共享分支中的任何逻辑输出数据缓存的位于所述共享关系树之外的下游任务节点的执行时序或与其不相关,以及所述第二共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其他共享分支中的任何共享连接边所对应的任务节点的执行时序或与其不相关;
    共享冲突消除组件,用于通过在第一共享冲突状态下断开所述至少一个可修改共享连接边或在第二共享冲突状态下断开彼此之间存在共享冲突的一个或多个共享连接边,从而消除被识别为冲突共享关系树中的共享冲突。
  2. 根据权利要求1所述的用于数据处理系统的内存资源原地共享决策系统,其中所述共享冲突消除组件还包括断开方式选择单元,计算每种能够消除共享冲突的共享断开方式所断开存在共享冲突的共享连接边数量,从而断开存在共享冲突的共享连接边的数量最小的消除共享冲突的共享断开方式得以被选择。
  3. 根据权利要求1或2所述的用于数据处理系统的内存资源原地共享决策系统,其中所所述共享冲突识别组件直接将所述第二共享冲突状态下两个共享分支中都含有可修改共享连接边的共享关系树识别为含有冲突共享连接边的冲突共享关系树。
  4. 根据权利要求1或2所述的用于数据处理系统的内存资源原地共享决策系统,其中还包括:共享标签重写组件,基于消除共享冲突后的共享关系树,将被断开的共享连接边所对应的原始任务拓扑图中对应任务节点的共享标签设置为无效。
  5. 一种用于数据处理系统的内存资源原地共享决策方法,包括:
    通过原始共享关系树生成组件在数据处理系统的任务拓扑图生成过程中,基于所有任务节点的共享标签生成一个或多个共享关系树,每株共享关系树的每个共享关系对包括一个当前任务节点的逻辑输出数据缓存、上游任务节点的逻辑输出数据缓存以及用于连接两者以表示两者之间共享关系的共享连接边,从而每株共享关系树通过所有共享连接边使得所有逻辑输出数据缓存能够共享同一个指定输出数据缓存,并且所述共享连接边通过其对应当前任务节点的共享标签指明其为可修改共享连接边或不可修改共享连接边;
    通过共享冲突识别组件将其中存在第一共享冲突状态或第二共享冲突状态的一株共享关系树识别为含有冲突共享连接边的冲突共享关系树,所述第一共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其所在的共享分支中的上游逻辑输出数据缓存或其他共享分支中的任何逻辑输出数据缓存的位于所述共享关系树之外的下游任务节点的执行时序或与其不相关,以及所述第二共享冲突状态为至少一个可修改共享连接边所对应的任务节点的执行时序早于其他共享分支中的任何共享连接边所对应的任务节点的执行时序或与其不相关;以及
    通过共享冲突消除组件在第一共享冲突状态下断开所述至少一个可修改共享连接边或在第二共享冲突状态下断开彼此之间存在共享冲突的一个或多个共享连接边,从而消除被识别为冲突共享关系树中的共享冲突。
  6. 根据权利要求5所述的用于数据处理系统的内存资源原地共享决策方法,还包括:通过所述共享冲突消除组件的断开方式选择单元计算每种能够消除共享冲突的共享断开方式所断开存在共享冲突的共享连接边数量,从而选择断开存在共享冲突的共享连接边的数量最小的消除共享冲突的共享断开方式。
  7. 根据权利要求5或6所述的用于数据处理系统的内存资源原地共享决策方法,其中所述共享冲突识别组件直接将所述第二共享冲突状态下两个共享分支中都含有可修改共享连接边的共享关系树识别为含有冲突共享连接边的冲突共享关系树。
  8. 根据权利要求5或6所述的用于数据处理系统的内存资源原地共享决策方法,其中还包括:通过共享标签重写组件基于消除共享冲突后的新共享关系树,将被断开的共享连接边所对应的原始任务拓扑图中对应任务节点的共享标签设置为无效。
PCT/CN2021/072785 2020-01-20 2021-01-20 内存资源原地共享决策系统及其方法 WO2021147876A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010063534.4A CN111158919B (zh) 2020-01-20 2020-01-20 内存资源原地共享决策系统及其方法
CN202010063534.4 2020-01-20

Publications (1)

Publication Number Publication Date
WO2021147876A1 true WO2021147876A1 (zh) 2021-07-29

Family

ID=70564606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/072785 WO2021147876A1 (zh) 2020-01-20 2021-01-20 内存资源原地共享决策系统及其方法

Country Status (2)

Country Link
CN (1) CN111158919B (zh)
WO (1) WO2021147876A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111158919B (zh) * 2020-01-20 2020-09-22 北京一流科技有限公司 内存资源原地共享决策系统及其方法
CN111488221B (zh) * 2020-06-29 2020-10-09 北京一流科技有限公司 静态网络中的内存空间预配系统及其方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130152090A1 (en) * 2011-12-07 2013-06-13 Soeren Balko Resolving Resource Contentions
CN103226467A (zh) * 2013-05-23 2013-07-31 中国人民解放军国防科学技术大学 数据并行处理方法、系统及负载均衡调度器
US20170187792A1 (en) * 2015-12-28 2017-06-29 EMC IP Holding Company LLC Method and apparatus for distributed data processing
CN110209629A (zh) * 2019-07-15 2019-09-06 北京一流科技有限公司 协处理器的数据处理路径中的数据流动加速构件及其方法
CN110222005A (zh) * 2019-07-15 2019-09-10 北京一流科技有限公司 用于异构架构的数据处理系统及其方法
CN110245024A (zh) * 2019-07-15 2019-09-17 北京一流科技有限公司 静态存储块的动态分配系统及其方法
CN110262995A (zh) * 2019-07-15 2019-09-20 北京一流科技有限公司 执行体创建系统和执行体创建方法
CN111158919A (zh) * 2020-01-20 2020-05-15 北京一流科技有限公司 内存资源原地共享决策系统及其方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100390791C (zh) * 2004-05-31 2008-05-28 国际商业机器公司 流程图的编辑、重组验证、创建和转换的方法和装置
US9632827B2 (en) * 2006-12-21 2017-04-25 International Business Machines Corporation Resource manager for managing the sharing of resources among multiple workloads in a distributed computing environment
CN101685408B (zh) * 2008-09-24 2013-10-09 国际商业机器公司 多个线程并行访问共享数据结构的方法及装置
CN101848549B (zh) * 2010-04-29 2012-06-20 中国人民解放军国防科学技术大学 无线传感器网络节点任务调度方法
CN104166588B (zh) * 2013-05-16 2018-10-09 腾讯科技(深圳)有限公司 阅读内容的信息处理方法及装置
CN107025130B (zh) * 2016-01-29 2021-09-03 华为技术有限公司 处理节点、计算机系统及事务冲突检测方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130152090A1 (en) * 2011-12-07 2013-06-13 Soeren Balko Resolving Resource Contentions
CN103226467A (zh) * 2013-05-23 2013-07-31 中国人民解放军国防科学技术大学 数据并行处理方法、系统及负载均衡调度器
US20170187792A1 (en) * 2015-12-28 2017-06-29 EMC IP Holding Company LLC Method and apparatus for distributed data processing
CN110209629A (zh) * 2019-07-15 2019-09-06 北京一流科技有限公司 协处理器的数据处理路径中的数据流动加速构件及其方法
CN110222005A (zh) * 2019-07-15 2019-09-10 北京一流科技有限公司 用于异构架构的数据处理系统及其方法
CN110245024A (zh) * 2019-07-15 2019-09-17 北京一流科技有限公司 静态存储块的动态分配系统及其方法
CN110262995A (zh) * 2019-07-15 2019-09-20 北京一流科技有限公司 执行体创建系统和执行体创建方法
CN111158919A (zh) * 2020-01-20 2020-05-15 北京一流科技有限公司 内存资源原地共享决策系统及其方法

Also Published As

Publication number Publication date
CN111158919A (zh) 2020-05-15
CN111158919B (zh) 2020-09-22

Similar Documents

Publication Publication Date Title
CN103078941B (zh) 一种分布式计算系统的任务调度方法
TWI713846B (zh) 領域模組運算單元,含有一企業之一模型之系統,單板運算單元,運算單元之網格,提供傳播可追溯性之方法,及非暫時性電腦程式產品
CN101884024B (zh) 在基于图的计算中管理数据流
WO2021008259A1 (zh) 用于异构架构的数据处理系统及其方法
CN111310936A (zh) 机器学习训练的构建方法、平台、装置、设备及存储介质
WO2021008258A1 (zh) 协处理器的数据处理路径中的数据流动加速构件及其方法
WO2021147876A1 (zh) 内存资源原地共享决策系统及其方法
Jones Process real-time big data with twitter storm
US20100036704A1 (en) Method and system for allocating requirements in a service oriented architecture using software and hardware string representation
WO2021008260A1 (zh) 数据执行体及其数据处理方法
CN103532808A (zh) 一种整合规则引擎的企业服务总线
WO2022002021A1 (zh) 静态网络中的内存空间预配系统及其方法
WO2021147878A1 (zh) 控制任务集中的任务并行的系统及其方法
CN116340024A (zh) 仿真模型组件进程间的数据共享方法、计算机设备及介质
Shrivastava et al. Real time transaction management in replicated DRTDBS
Oliveira et al. Reconfiguration mechanisms for service coordination
CN111475684B (zh) 数据处理网络系统及其计算图生成方法
Yamaguchi et al. WF-net based modeling and soundness verification of interworkflows
WO2015045091A1 (ja) ベイジアンネットワークの構造学習におけるスーパーストラクチャ抽出のための方法及びプログラム
US11681545B2 (en) Reducing complexity of workflow graphs through vertex grouping and contraction
de Oliveira et al. Using linear logic to verify requirement scenarios in SOA models based on interorganizational workflow nets relaxed sound
Shi et al. Business Objects-A New Business Process Modeling Approach
Al-hammouri et al. Realizability of service specifications
Hosseinzadeh et al. An effective duplication-based task-scheduling algorithm for heterogeneous systems
Rouhani et al. The role of agent-oriented technology on developing an enterprise architecture implementation methodology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21744212

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21744212

Country of ref document: EP

Kind code of ref document: A1