WO2018045541A1 - Method for optimizing the allocation of a container, and processing device - Google Patents


Info

Publication number
WO2018045541A1
Authority
WO
WIPO (PCT)
Prior art keywords: tasks, task, relationship, server, container
Prior art date
Application number
PCT/CN2016/098495
Other languages
English (en)
Chinese (zh)
Inventor
程德华
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2016/098495 priority Critical patent/WO2018045541A1/fr
Priority to CN201680086973.9A priority patent/CN109416646B/zh
Publication of WO2018045541A1 publication Critical patent/WO2018045541A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • the present invention relates to the field of cloud computing technologies, and in particular, to an optimization method and a processing device for container allocation.
  • JVM: Java Virtual Machine
  • FIG. 1 is a system architecture diagram of the Spark on Yarn system, a Hadoop distributed resource management in-memory computing framework.
  • The Spark on Yarn system includes: a client, a master server cluster, and a slave server cluster.
  • For ease of description, the master server cluster and the slave server cluster are illustrated with one master server and one slave server as examples.
  • The master server runs a resource management (Resource Manager, RM) module and an application management (Application Manager, AM) module; the slave server runs a node management (Node Manager, NM) module, a container agent (Agent) module, a Hadoop Distributed File System (HDFS), a Hadoop database (Hbase), and a Map Reduce (MR) module.
  • NM: Node Manager (node management module)
  • Agent: container agent module
  • HDFS: Hadoop Distributed File System
  • Hbase: Hadoop database
  • MR: Map Reduce module
  • The container allocation process of the above system is as follows: the client first submits a job request including the target application to the master server and uploads the data file of the target application to the HDFS. After the resource management module in the master server listens to the client's job request, it selects a slave server to allocate the container and notifies the node management module in that slave server to allocate a container as a resident container, in which the memory computing framework application master (Spark Application Master, SAM) module is created and run. The SAM module registers the mapping relationship between the resident container and itself with the application management module, and obtains the data file of the target application from the Hadoop Distributed File System.
  • SAM: memory computing framework application master (Spark Application Master) module
  • The target application is parsed into multiple tasks in multiple different running phases, each task is assigned a separate container, and a memory computing framework execution (Spark Executor) module is created and run in each independent container to execute the task in that container.
  • the invention provides an optimization method and a processing device for container allocation, which are used to reduce resource overhead and improve system resource utilization by optimizing a container allocation strategy of multiple tasks having a calling relationship.
  • an embodiment of the present invention provides a method for optimizing container allocation, applied to a cognitive computing server in the distributed resource management memory computing framework Spark on Yarn system, where the Spark on Yarn system includes a slave server selected to allocate the container and the slave server has a communication connection with the cognitive computing server. The method includes the following steps:
  • the cognitive computing server acquires N tasks, N is an integer greater than 1, and the above N tasks are decomposed by the application;
  • the cognitive computing server performs call relationship analysis on the N tasks to determine at least one task relationship group, where each task relationship group includes at least two tasks, a calling relationship exists between any two tasks in a group, and the calling relationship is a dependency between the execution results of the two tasks;
  • the cognitive computing server generates container allocation adjustment information according to the determined at least one task relationship group, where the container allocation adjustment information includes a mapping relationship between the at least one task relationship group and the container;
  • the cognitive computing server sends container allocation adjustment information to the selected slave server.
  • In the above scheme, the cognitive computing server of the Spark on Yarn system determines at least one task relationship group by analyzing the calling relationships of the multiple tasks of the target application, establishes a mapping relationship between the at least one task relationship group and a container, generates container allocation adjustment information including the mapping relationship, and sends it to the slave server so that the slave server adjusts its container allocation policy for the multiple tasks according to the mapping relationship. Because a task relationship group includes at least two tasks, the container corresponding to a task relationship group no longer corresponds to only one task but to at least two tasks. Compared with the existing one-task-per-container allocation strategy, this saves on the number of containers allocated, thereby reducing resource overhead and improving system resource utilization.
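The effect of the mapping relationship described above can be sketched as follows (an illustrative sketch only, not the patented implementation; the function name `build_adjustment_info` and the task labels are invented for illustration). Each task relationship group maps to a single container, while ungrouped tasks keep the original one-container-per-task policy:

```python
def build_adjustment_info(task_groups, ungrouped_tasks):
    """Map each task relationship group to a single container.

    task_groups: list of lists, each holding >= 2 tasks that have
    calling relationships among them.
    ungrouped_tasks: tasks with no calling relationship; they keep
    the one-task-per-container policy of the original scheme.
    """
    mapping = {}
    container_id = 0
    for group in task_groups:
        mapping[container_id] = list(group)   # one container, many tasks
        container_id += 1
    for task in ungrouped_tasks:
        mapping[container_id] = [task]        # fallback: one task each
        container_id += 1
    return mapping

info = build_adjustment_info([["t1", "t2", "t3"], ["t4", "t5"]], ["t6"])
# 3 containers instead of 6 under the one-container-per-task policy
assert len(info) == 3
```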
  • the task relationship group includes a first task relationship group and a second task relationship group; before the cognitive computing server generates the container allocation adjustment information according to the determined at least one task relationship group, the method also includes:
  • the cognitive computing server performs call relationship completeness analysis on the first task relationship group and the second task relationship group;
  • if there is a calling relationship between a task in the first task relationship group and a task in the second task relationship group, the cognitive computing server combines the first task relationship group and the second task relationship group into an independent task relationship group;
  • the cognitive computing server adjusts the mapping relationship between the first task relationship group and the second task relationship group and the container.
  • In the above scheme, the cognitive computing server performs a completeness analysis on multiple task relationship groups and merges task relationship groups that have a calling relationship into one independent task relationship group. Because the independent task relationship group contains more tasks, its corresponding container is associated with more tasks, which further increases the number of tasks corresponding to a single container, reduces the number of containers needed for all tasks, and, while improving container execution efficiency, further reduces resource overhead and improves system resource utilization.
  • the cognitive computing server performs a call relationship analysis on the N tasks, and determines at least one task relationship group, including:
  • the cognitive computing server determines X tasks from the N tasks, where there is a calling relationship between any one of the X tasks and at least one other task, the other tasks being the X tasks other than that one task, and X is a positive integer less than or equal to N;
  • the cognitive computing server determines at least one task relationship group from the X tasks.
  • In the above scheme, the cognitive computing server pre-screens the X tasks that have calling relationships, promptly filtering out those of the N tasks that have no calling relationship with any other task, and determines the task relationship groups only among the X tasks, which improves the execution efficiency of the algorithm.
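A minimal sketch of the pre-screening step (the helper name `prescreen` and the pair encoding of calling relationships are invented for illustration):

```python
def prescreen(tasks, calls):
    """Keep only the X tasks that take part in at least one calling
    relationship; isolated tasks are filtered out early so later
    group analysis runs on a smaller input."""
    involved = {t for pair in calls for t in pair}
    return [t for t in tasks if t in involved]

tasks = ["t1", "t2", "t3", "t4"]
calls = {("t1", "t2"), ("t2", "t3")}
assert prescreen(tasks, calls) == ["t1", "t2", "t3"]  # t4 is isolated
```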
  • the cognitive computing server determines at least one task relationship group from the X tasks, including:
  • the cognitive computing server analyzes the calling relationships included in the X tasks and determines the Y task relationship groups included in the X tasks, where each of the Y task relationship groups includes at least two tasks, a calling relationship exists between any two tasks in a group, and Y is a positive integer less than X.
  • the cognitive computing server analyzes the calling relationship included in the X tasks, and determines Y task relationship groups included in the X tasks, including:
  • the cognitive computing server equates the X tasks and the calling relationships existing among them to a directed graph in graph theory, in which the X tasks are equivalent to the vertices of the directed graph and the calling relationships among the X tasks are equivalent to directed edges between those vertices;
  • the cognitive computing server solves the Y independent sets of the directed graph to obtain Y task relationship groups corresponding to the Y independent sets.
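One plausible reading of this graph-theoretic step, sketched in Python (assumption: the patent's "independent sets" are taken here to mean mutually disjoint connected groups of tasks, found via weak connectivity of the directed graph; all names are invented for illustration):

```python
from collections import defaultdict

def task_relationship_groups(tasks, calls):
    """Treat the X tasks as vertices and the calling relationships as
    directed edges, then extract the Y mutually disjoint groups.
    Tasks inside a group are linked by calls; groups are not."""
    adj = defaultdict(set)
    for caller, callee in calls:
        adj[caller].add(callee)
        adj[callee].add(caller)   # weak connectivity: ignore direction
    seen, groups = set(), []
    for t in tasks:
        if t in seen or t not in adj:
            continue               # skip visited or isolated tasks
        stack, comp = [t], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v] - seen)
        groups.append(sorted(comp))
    return groups

groups = task_relationship_groups(
    ["t1", "t2", "t3", "t4", "t5"],
    {("t1", "t2"), ("t3", "t4")})
assert groups == [["t1", "t2"], ["t3", "t4"]]   # t5 has no calls
```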
  • the cognitive computing server sends the container allocation adjustment information to the selected slave server, including:
  • the cognitive computing server detects the available resources of the selected slave server;
  • according to the detected available resources, the container allocation adjustment information for Y1 of the Y task relationship groups is sent to the selected slave server, where Y1 is a positive integer less than Y.
  • In the above scheme, the cognitive computing server can dynamically adjust the container allocation indication information according to the available resources of the selected slave server, avoiding situations where containers cannot be allocated because the slave server's available resources are insufficient, which helps improve the stability of container allocation in the system.
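The resource-aware selection of Y1 out of Y groups could be sketched as below (illustrative; the fixed per-container memory cost and the function name `select_groups` are assumptions, not values from the patent):

```python
def select_groups(groups, cost_per_container, available_memory):
    """Send adjustment information only for the Y1 groups whose
    containers fit within the slave server's available resources.

    cost_per_container: assumed fixed memory cost of one container (MB).
    available_memory: detected available memory on the slave server (MB).
    """
    selected, used = [], 0
    for group in groups:
        if used + cost_per_container <= available_memory:
            selected.append(group)
            used += cost_per_container
    return selected

groups = [["t1", "t2", "t3"], ["t4", "t5"], ["t6", "t7"]]
picked = select_groups(groups, cost_per_container=2048,
                       available_memory=5000)
assert len(picked) == 2    # only Y1 = 2 of the Y = 3 groups fit
```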
  • an embodiment of the present invention provides an optimization method for container allocation, applied to a slave server in the distributed resource management memory computing framework Spark on Yarn system, where the slave server is the slave server selected to allocate a container and the Spark on Yarn system includes a cognitive computing server communicatively coupled to the slave server. The method includes:
  • the slave server obtains container allocation adjustment information sent by the cognitive computing server, where the container allocation adjustment information includes a mapping relationship between at least one task relationship group and a container, the at least one task relationship group is determined by performing call relationship analysis on the N tasks of the target application, each independent task relationship group includes at least two tasks, a calling relationship exists between any two of those tasks, and the calling relationship is a dependency between the execution results of the two tasks;
  • the slave server allocates containers for the N tasks according to the container allocation adjustment information.
  • In the above scheme, the slave server of the Spark on Yarn system obtains the container allocation adjustment information sent by the cognitive computing server, which includes the mapping relationship between at least one task relationship group and a container. Because a task relationship group includes at least two tasks, the container corresponding to a task relationship group no longer corresponds to only one task but to at least two tasks. Compared with the one-task-per-container allocation strategy of the existing solution, this saves on the number of containers allocated, helping to reduce resource overhead and improve system resource utilization.
  • Before the slave server obtains the container allocation adjustment information sent by the cognitive computing server, the method further includes: the slave server sends a container allocation request to the cognitive computing server, the container allocation request being used to request the cognitive computing server to perform call relationship analysis on the N tasks to determine the container allocation adjustment information for the N tasks.
  • the at least one task relationship group is Y task relationship groups, where Y is a positive integer less than N; the container allocation adjustment information is container allocation adjustment information for Y1 of the Y task relationship groups, where Y1 is a positive integer less than Y.
  • In the above scheme, the container allocation adjustment information obtained by the slave server can be dynamically adjusted according to the slave server's available resources, avoiding situations where containers cannot be allocated because the slave server's available resources are insufficient, which helps improve the stability of container allocation in the system.
  • the method further includes:
  • the slave server determines the n tasks of the task relationship group corresponding to a target container among its containers, where n is a positive integer greater than 1;
  • the slave server obtains the execution phase parameters of the n tasks and the calling relationships of the n tasks, where the execution phase parameters include at least two execution phases;
  • the slave server determines the execution order of the n tasks according to their execution phase parameters and calling relationships;
  • the slave server runs the n tasks in the target container in that execution order.
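The ordering step can be sketched as a stage-aware topological sort (an interpretation, not the patent's stated algorithm: a task becomes runnable once the tasks whose results it depends on have finished, and among runnable tasks the one with the earliest execution phase runs first; all names are invented):

```python
from collections import defaultdict
import heapq

def execution_order(tasks, stages, calls):
    """Order the n tasks of one container.

    stages: task -> execution phase parameter (int, lower runs earlier)
    calls: set of (producer, consumer) result dependencies
    """
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for producer, consumer in calls:
        succ[producer].append(consumer)
        indeg[consumer] += 1
    ready = [(stages[t], t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)      # earliest stage first
        order.append(t)
        for nxt in succ[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                heapq.heappush(ready, (stages[nxt], nxt))
    return order

order = execution_order(
    ["t1", "t2", "t3"],
    {"t1": 1, "t2": 1, "t3": 2},
    {("t1", "t3"), ("t2", "t3")})
assert order == ["t1", "t2", "t3"]   # t3 waits for both producers
```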
  • In the above scheme, the n tasks of a task relationship group can be run consecutively in the target container; compared with the prior art, in which the life cycle of a container is used to run only one task, this improves container efficiency.
  • the method further includes:
  • the slave server destroys the target container and reclaims the Java virtual machine resources corresponding to the target container.
  • In the above scheme, the slave server runs the n tasks of the task relationship group consecutively in the target container and, after the n tasks finish running, destroys the target container and reclaims the Java virtual machine resources corresponding to it. In other words, the operations of container creation, destruction, and resource recycling are no longer performed for each task; container reuse is realized, which reduces resource overhead and improves container execution efficiency.
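Container reuse as described above can be pictured with a toy lifecycle (purely illustrative; `ReusableContainer` is an invented name and the JVM start/stop is reduced to a flag):

```python
class ReusableContainer:
    """Sketch of container reuse: one create/destroy cycle serves all
    n tasks of a task relationship group, instead of one cycle per task."""

    def __init__(self):
        self.jvm_started = True          # stands in for starting a JVM

    def run_all(self, tasks, run):
        for task in tasks:               # tasks reuse the same container
            run(task)

    def destroy(self):
        self.jvm_started = False         # JVM resources reclaimed once

executed = []
c = ReusableContainer()
c.run_all(["t1", "t2", "t3"], executed.append)
c.destroy()
assert executed == ["t1", "t2", "t3"] and not c.jvm_started
# 1 create/destroy cycle instead of 3 under one-task-per-container
```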
  • an embodiment of the present invention provides a cognitive computing server, the device having the function of implementing the behavior of the cognitive computing server in the above method design.
  • the functions may be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the cognitive computing server includes a processor configured to support a cognitive computing server to perform the corresponding functions of the above methods. Further, the cognitive computing server may further include a receiver and a transmitter for supporting communication between the cognitive computing server and a slave server or the like. Further, the cognitive computing server can also include a memory for coupling with the processor that holds program instructions and data necessary for the cognitive computing server.
  • an embodiment of the present invention provides a slave server, the device having a function of implementing behavior of a slave server in the design of the foregoing method.
  • the above functions can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the slave server includes a processor configured to support execution of a corresponding function from the server in the above method. Further, the slave server may further include a receiver and a transmitter for supporting communication between the slave server and a device such as a cognitive computing server. Further, the slave server may further include a memory for coupling with the processor, which stores necessary program instructions and data from the server.
  • an embodiment of the present invention provides a container processing system applied to the distributed resource management memory computing framework Spark on Yarn system, where the container processing system includes the cognitive computing server provided by the third aspect of the embodiments of the present invention and the slave server provided by the fourth aspect of the embodiments of the present invention.
  • an embodiment of the present invention provides a computer readable storage medium, where the program code is stored.
  • the above program code includes instructions for performing some or all of the steps described in any of the methods of the first aspect of the embodiments of the present invention.
  • an embodiment of the present invention provides a computer readable storage medium, where the computer readable storage medium stores program code.
  • the program code includes instructions for performing some or all of the steps described in any of the methods of the second aspect of the embodiments of the present invention.
  • the calling relationship is a calling relationship between data files of the task, where the calling relationship includes at least one of: a direct calling relationship and an indirect calling relationship;
  • the direct calling relationship includes at least one of the following: a one-way direct calling relationship, and a two-way direct calling relationship;
  • the indirect calling relationship includes at least one of the following: a transitive indirect calling relationship, and an indirect calling relationship dependent on a third party.
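These relationship types can be illustrated with a small classifier over `(caller, callee)` pairs (an illustrative sketch; the function names and label strings are invented, not the patent's terminology):

```python
def reachable(calls, a, b):
    """True if task a can reach task b through calling relationships."""
    frontier, seen = {a}, set()
    while frontier:
        cur = frontier.pop()
        seen.add(cur)
        for caller, callee in calls:
            if caller == cur and callee not in seen:
                if callee == b:
                    return True
                frontier.add(callee)
    return False

def classify(calls, a, b):
    """Rough classification of the calling relationship between a and b."""
    if (a, b) in calls and (b, a) in calls:
        return "two-way direct"
    if (a, b) in calls or (b, a) in calls:
        return "one-way direct"
    if reachable(calls, a, b) or reachable(calls, b, a):
        return "transitive indirect"
    # a and b both depend on a common third task c
    everyone = {x for pair in calls for x in pair}
    if any(reachable(calls, a, c) and reachable(calls, b, c)
           for c in everyone):
        return "indirect via third party"
    return "none"

calls = {("t1", "t2"), ("t2", "t3"), ("t4", "t3")}
assert classify(calls, "t1", "t2") == "one-way direct"
assert classify(calls, "t1", "t3") == "transitive indirect"
assert classify(calls, "t1", "t4") == "indirect via third party"
```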
  • FIG. 1 is a system architecture diagram of an operating mechanism of an in-memory computing framework Spark disclosed in the prior art solution in a Yarn container;
  • FIG. 2 is a system architecture diagram of a Spark on Yarn system 100 for a Hadoop distributed resource management memory computing framework according to an embodiment of the present invention
  • FIG. 3 is a schematic flow chart of a container allocation processing process according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of a method for optimizing container allocation according to an embodiment of the present invention.
  • FIG. 5A is a schematic diagram of a call relationship between tasks according to an embodiment of the present invention.
  • FIG. 5B is a schematic diagram of task analysis of a target application according to an embodiment of the present invention.
  • FIG. 5C is a schematic diagram of container allocation according to a resource balancing policy according to an embodiment of the present invention.
  • FIG. 6A is a block diagram of a unit composition of a cognitive computing server according to an embodiment of the present invention.
  • FIG. 6B is a schematic structural diagram of a cognitive computing server according to an embodiment of the present invention.
  • FIG. 7A is a block diagram of a unit configuration of a slave server according to an embodiment of the present invention.
  • FIG. 7B is a schematic structural diagram of a slave server according to an embodiment of the present invention.
  • FIG. 2 is a system architecture diagram of a Spark on Yarn system 100 for a Hadoop distributed resource management memory computing framework according to an embodiment of the present invention.
  • the Spark on Yarn system 100 specifically includes: a client client, a master server in a master server cluster, a slave server in a slave server cluster, and a cognitive computing server in a cluster of cognitive computing servers.
  • the client is used to obtain the target application and generate data files and application requests for the target application.
  • the master server runs a resource management (Resource Manager, RM) module and an application management (Application Manager, AM) module, wherein the RM module is configured to receive the application request, query the AM module to determine a slave server with available container resources as the selected slave server, and send a resource allocation request to the selected slave server.
  • The slave server runs a node management (NM) module, a container agent (Agent) module, a Hadoop Distributed File System (HDFS), a Hadoop database (Hbase), and a map reduce (Map Reduce) module; the HDFS includes the storage resources, such as hard disk resources, of all the slave servers in the slave server cluster of the Spark on Yarn system, and each slave server can retrieve data files from the HDFS or upload data files to the HDFS.
  • NM: node management module
  • Agent: container agent module
  • HDFS: Hadoop Distributed File System
  • Hbase: Hadoop database
  • Map Reduce: map reduce module
  • the slave server is configured to receive the resource allocation request, process the data file of the target application, and send a container allocation request to the cognitive computing server;
  • the NM module is configured to manage the resources of the slave server, such as creating the resident container and managing the containers assigned to tasks;
  • the resident container created by the NM module runs the memory computing framework application master (Spark Application Master, SAM) module, and the SAM module is used to allocate containers for the multiple tasks of the target application according to the container allocation result;
  • the Agent module is a software module designed to collect the container pre-allocation result generated by the SAM module and the data files of the tasks, and, through pipeline (Pipeline) writing, store the data files of the tasks in the HDFS and write the container pre-allocation result into Hbase;
  • HDFS is used to store the data files of the target application; Hbase is used to store the container pre-allocation result and the container allocation result sent by the cognitive computing server; the Map Reduce module provides the cognitive computing server with the corresponding data analysis capability when the cognitive computing server analyzes the calling relationships of multiple tasks.
  • AP: operation management platform of the cognitive computing server
  • Cognitive Center: cognitive center module
  • Analysis Scripts: analysis script module
  • Web UI: web user interface module
  • the cognitive computing server is configured to receive the container allocation request, perform call relationship analysis on the data files of the multiple tasks of the target application, generate the container allocation result of the multiple tasks according to the calling relationships obtained by the analysis, and send the container allocation result to the slave server.
  • the analysis script module is configured to store a preset resource allocation policy
  • the Web UI module provides a human-machine interaction interface for outputting the running information of the Spark on Yarn system and the real-time monitoring status of the containers running tasks.
  • The container allocation processing process of the above Spark on Yarn system 100 is described in detail below, taking a target application input by the user via the client as an example. As shown in FIG. 3, the example container allocation process includes the following steps:
  • the client acquires a target application entered by the user, generates an application request and a data file of the target application, and sends the data file to the HDFS.
  • the HDFS includes the storage resources of all the slave servers in the server cluster in the Spark on Yarn system, and each slave server shares the data files in the HDFS, and the storage resources may be, for example, hard disk resources.
  • the client sends an application request to the RM module of the primary server.
  • the RM module of the primary server receives the application request, queries the AM module, and determines the slave server available for the container resource as the selected slave server used to allocate the container.
  • the RM module sends a resource allocation request to the NM module of the slave server.
  • S305: the NM module of the slave server receives the resource allocation request, allocates a resident container, and creates and runs the SAM module in the resident container.
  • the SAM module obtains the data file of the target application from the HDFS, parses the target application into multiple tasks in multiple different running phases according to the pre-stored original container allocation policy, and generates the data files of the multiple tasks and the container pre-allocation result;
  • the container pre-allocation result includes a mapping relationship between the task and the container, and one task corresponds to one container.
  • the SAM module sends the data files of the multiple tasks to the HDFS through the Agent module, and sends to Hbase the mapping relationship between the slave server and the target application, the mapping relationship between the target application and the multiple tasks, the running stage parameters of the multiple tasks, and the container pre-allocation result.
  • after performing the foregoing operations, the container agent module sends a container allocation request to the AP of the cognitive computing server.
  • the AP of the cognitive computing server receives the container allocation request, and sends a cognitive computing request for the plurality of tasks of the target application to the cognitive center module.
  • after receiving the cognitive computing request, the cognitive center module acquires the data files of the multiple tasks from the HDFS and obtains the preset container allocation policy from the script analysis module; according to the container allocation policy and the acquired data files of the multiple tasks, it calls the Map Reduce module to perform call relationship analysis on the multiple tasks of the target application, determines at least one task relationship group, generates container allocation adjustment information according to the at least one task relationship group, and sends the container allocation adjustment information to the Hbase of the slave server.
  • the container allocation adjustment information includes a mapping relationship between the at least one task relationship group and the container.
  • the container agent module reads the mapping relationship between the at least one task relationship group and the container from Hbase and returns it to the SAM module.
  • the SAM module allocates a container according to a mapping relationship between the at least one task relationship group and the container.
  • In summary, the Spark on Yarn system includes a cognitive computing server and a slave server selected for allocating containers. The cognitive computing server can establish an optimized container allocation policy mapping containers to task relationship groups of the target application, and the slave server, according to this optimized policy, can assign one container to multiple tasks so that the container runs all the tasks in a task relationship group; after all the tasks are completed, the container is destroyed and its corresponding resources are recycled. The operations of container creation, destruction, and resource recycling are not performed for each task, which realizes container reuse, helps reduce resource overhead, and improves resource utilization and container execution efficiency.
  • FIG. 4 is a schematic flow chart of a method for optimizing container allocation according to an embodiment of the present invention, applied to a cognitive computing server in the distributed resource management memory computing framework Spark on Yarn system, where the Spark on Yarn system includes a slave server selected to allocate the container and the slave server is in communication with the cognitive computing server. The method includes parts S401 to S406, as follows:
  • the cognitive computing server acquires N tasks, where N is an integer greater than 1, and the N tasks are decomposed by the target application.
  • the N tasks are obtained by the cognitive computing server from the Hadoop distributed file system HDFS associated with the slave server.
  • the specific form of the N tasks may be N data files corresponding to N tasks.
  • the HDFS includes the storage resources of all the slave servers in the server cluster in the Spark on Yarn system, and each slave server shares the data files in the HDFS, and the storage resources may be, for example, hard disk resources.
  • Before the cognitive computing server acquires the N tasks, the following operation is also performed: the cognitive computing server receives a container allocation request, sent by the slave server, for the N tasks of the target application.
  • the cognitive computing server performs call relationship analysis on the N tasks and determines at least one task relationship group, where each task relationship group includes at least two tasks and a calling relationship exists between any two tasks in each task relationship group.
  • the cognitive center module of the cognitive computing server acquires a preset call relationship analysis policy from the script analysis module, and performs operations of steps S402 to S404 according to the call relationship analysis policy.
  • the above calling relationship includes at least one of the following: a direct calling relationship and an indirect calling relationship.
  • a direct calling relationship and an indirect calling relationship may be specifically represented by the calling relationship diagram shown in FIG. 5A.
  • the direct calling relationship includes at least one of the following: a one-way direct calling relationship (see (1) in FIG. 5A) and a two-way direct calling relationship (see (2) in FIG. 5A); the indirect calling relationship includes at least one of the following: a transitive indirect calling relationship (see (3) in FIG. 5A) and an indirect calling relationship dependent on a third party (see (4) in FIG. 5A).
  • the cognitive computing server performs a call relationship analysis on the N tasks, and the specific implementation manner of determining the at least one task relationship group may be:
  • the cognitive computing server determines X tasks from the N tasks, where a calling relationship exists between any one of the X tasks and at least one other task, the other tasks being the tasks other than that one of the X tasks, and X is a positive integer less than or equal to N;
  • the cognitive computing server determines at least one task relationship group from the X tasks.
  • the cognitive computing server pre-screens the X tasks that have calling relationships, filters out the orphan tasks among the N tasks early, and determines task relationship groups only from the X tasks, which is beneficial to improving the execution efficiency of the algorithm.
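The pre-screening step above can be sketched in a few lines of Python; the task IDs and calling-relationship pairs below are hypothetical stand-ins for illustration, not values taken from the patent's figures.

```python
def prescreen_tasks(tasks, call_pairs):
    """Return the X tasks that appear in at least one calling relationship,
    dropping orphan tasks before any group analysis is done."""
    involved = set()
    for caller, callee in call_pairs:
        involved.add(caller)
        involved.add(callee)
    return [t for t in tasks if t in involved]

tasks = list(range(1, 11))                      # N = 10 tasks (hypothetical)
call_pairs = [(2, 4), (3, 5), (7, 8), (7, 9)]   # observed calling relationships
x_tasks = prescreen_tasks(tasks, call_pairs)
# Orphan tasks (1, 6, 10) are filtered out; only the X remaining tasks
# enter the task-relationship-group analysis.
```

Filtering first keeps the later graph analysis small, which is the efficiency gain the passage describes.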
  • the specific implementation manner in which the cognitive computing server determines at least one task relationship group from the above X tasks may be:
  • the cognitive computing server analyzes the calling relationships included in the X tasks and determines Y task relationship groups included in the X tasks, where each of the Y task relationship groups includes at least two tasks, a calling relationship exists between any two of those tasks, and Y is a positive integer less than X.
  • the specific implementation manner in which the cognitive computing server analyzes the calling relationships included in the X tasks and determines the Y task relationship groups included in the X tasks may be:
  • the cognitive computing server treats the X tasks and the calling relationships existing among them as equivalent to a directed graph in graph theory, where the X tasks are equivalent to the vertices of the directed graph, and the calling relationships existing among the X tasks are equivalent to the directed edges between the vertices of the directed graph;
  • the cognitive computing server solves Y independent sets of the directed graphs to obtain Y task relationship groups corresponding to the Y independent sets.
  • an independent set is a subset of the vertex set of a graph whose induced subgraph contains no edges. If an independent set is not a proper subset of any other independent set, it is called a maximal independent set. An independent set with the largest number of vertices in a graph is called a maximum independent set.
  • the independent sets include: {Task 2, Task 4}, {Task 2, Task 4, Task 7}, {Task 2, Task 4, Task 7, Task 8}, {Task 2, Task 4, Task 7, Task 9}, {Task 3, Task 5, Task 7, Task 9}, and so on, wherein the maximal independent sets include {Task 2, Task 4, Task 7, Task 8}, {Task 2, Task 4, Task 7, Task 9}, and {Task 3, Task 5, Task 7, Task 9}.
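The graph-theoretic definitions above can be illustrated with a brute-force sketch (practical only for small graphs); the four-vertex path graph used below is a made-up example, not the task graph of FIG. 5B.

```python
from itertools import combinations

def is_independent(edges, subset):
    # No edge may have both endpoints inside the subset.
    return not any(u in subset and v in subset for u, v in edges)

def maximal_independent_sets(vertices, edges):
    """Enumerate every independent set, then keep only those that are not
    proper subsets of another independent set (i.e., the maximal ones)."""
    ind = [set(c) for r in range(1, len(vertices) + 1)
           for c in combinations(vertices, r)
           if is_independent(edges, set(c))]
    return [s for s in ind if not any(s < t for t in ind)]

# Path graph 1 - 2 - 3 - 4: maximal independent sets are {1,3}, {1,4}, {2,4}.
groups = maximal_independent_sets([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
```

The exponential enumeration is only a definitional sketch; a production system would use a dedicated graph library rather than this brute force.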
  • the cognitive computing server generates container allocation adjustment information according to the determined at least one task relationship group, where the container allocation adjustment information includes a mapping relationship between the at least one task relationship group and the container.
  • the task relationship group includes a first task relationship group and a second task relationship group;
  • the cognitive computing server performs the following operations before generating the container allocation adjustment information according to the determined at least one task relationship group:
  • the cognitive computing server performs call relationship completeness analysis on the first task relationship group and the second task relationship group;
  • if a calling relationship exists between a task in the first task relationship group and a task in the second task relationship group, the cognitive computing server combines the first task relationship group and the second task relationship group into an independent task relationship group;
  • the cognitive computing server adjusts the mapping relationship between the first task relationship group and the second task relationship group and the container.
  • the cognitive computing server performs a completeness analysis on multiple task relationship groups and merges task relationship groups that have a calling relationship between them into an independent task relationship group. Because the independent task relationship group contains more tasks, the container corresponding to it will be associated with more tasks, which is advantageous for further increasing the number of tasks corresponding to a single container, thereby reducing the number of containers needed for all tasks, improving container execution efficiency while further reducing resource overhead, and increasing system resource utilization.
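The completeness analysis and merging described above might be sketched as follows, assuming groups are plain sets of task IDs and calling relationships are pairs; the repeated pairwise merge is an illustrative choice, not the patent's prescribed algorithm.

```python
def merge_groups(groups, call_pairs):
    """Merge task relationship groups that are connected by a calling
    relationship into a single independent task relationship group."""
    groups = [set(g) for g in groups]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                # A call between a task in group i and a task in group j
                # means the two groups must be combined.
                if any((a in groups[i] and b in groups[j]) or
                       (a in groups[j] and b in groups[i])
                       for a, b in call_pairs):
                    groups[i] |= groups.pop(j)
                    merged = True
                    break
            if merged:
                break
    return groups
```

With a cross-group call (4, 7), groups {2, 4} and {7, 8} collapse into one independent group {2, 4, 7, 8}; unrelated groups are left untouched.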
  • the cognitive computing server sends the container allocation adjustment information to the selected slave server.
  • the specific implementation manner in which the cognitive computing server sends the container allocation adjustment information to the selected slave server may be:
  • the cognitive computing server detects the available resources of the selected slave server and, according to those available resources, sends to the selected slave server the container allocation indication information for Y1 of the Y task relationship groups, where Y1 is a positive integer smaller than Y.
  • the available resources of the container include software and hardware resources of the server, such as memory, CPU, and hard disk.
  • the cognitive computing server can dynamically adjust the container allocation indication information according to the available resources of the selected slave server, avoiding the situation in which the server cannot allocate containers because the available container resources are insufficient, which is beneficial to improving the stability of container allocation in the system.
  • if it is detected that the available resources of the selected slave server are smaller than the resources required by the Y task relationship groups, the cognitive computing server further performs the following operations:
  • the cognitive computing server selects another server in the Spark on Yarn system as a candidate slave server, the candidate slave server being the slave server with the smallest data transmission distance from the selected slave server;
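A minimal sketch of this fallback selection, assuming available resources and data-transmission distances are given as plain dictionaries; the server names and numbers are invented for illustration.

```python
def pick_slave(selected, required, available, distances):
    """Keep the selected slave if it has enough resources; otherwise fall
    back to the nearest slave (smallest data-transmission distance) that
    can hold the task relationship groups."""
    if available[selected] >= required:
        return selected
    # Candidate slaves ordered by data-transmission distance, nearest first.
    candidates = sorted((d, s) for s, d in distances[selected].items())
    for _, slave in candidates:
        if available[slave] >= required:
            return slave
    return None  # no slave can hold the task relationship groups

available = {"s1": 2, "s2": 8, "s3": 8}
distances = {"s1": {"s2": 1, "s3": 5}}
fallback = pick_slave("s1", 4, available, distances)  # s1 too small -> s2
```

The "required" value abstracts whatever resource metric (memory, CPU, disk) the cognitive computing server actually compares.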
  • the cognitive computing server of the Spark on Yarn system determines at least one task relationship group by analyzing the calling relationships of the multiple tasks of the target application, establishes a mapping relationship between the at least one task relationship group and containers, generates container allocation adjustment information including that mapping relationship, and sends the container allocation adjustment information to the slave server so that the slave server can optimize the container allocation policy for the multiple tasks according to the mapping relationship. Because a task relationship group includes at least two tasks, the container corresponding to a task relationship group no longer corresponds to only one task but to at least two tasks in the group. Compared with the existing allocation strategy in which one container corresponds to only one task, this is beneficial to reducing the number of container allocations, which helps reduce resource overhead and improve system resource utilization.
  • each task relationship group includes at least two tasks, and there is a call relationship between any two of the at least two tasks, and the call relationship is a dependency relationship between the execution results of any two tasks.
  • before obtaining the container allocation adjustment information sent by the cognitive computing server, the slave server also performs the following operations:
  • the slave server sends a container allocation request to the cognitive computing server, where the container allocation request is used to request that the cognitive computing server perform a calling relationship analysis on the N tasks to determine the container allocation adjustment information for the N tasks.
  • assuming that the target application 1 submitted by the user through the client is parsed into 10 tasks by the slave server, then after the slave server writes the N tasks to HBase through the container proxy module, Table 1, an application task index relationship table containing the calling relationships between the tasks, can be obtained.
  • in Table 1, the Job ID is the application ID, the Task ID is the task ID, and the Stage is the running-phase ID of the task: 1 indicates that the corresponding task is in the first running phase, and 2 indicates that the corresponding task is in the second running phase. Caller/Callee indicates a task that has a calling relationship with the corresponding task.
  • the Caller/Callee corresponding to task 1 is null, that is, there is no task that has a calling relationship with task 1; the Caller/Callee corresponding to task 2 is 4, that is, the task that has a calling relationship with task 2 is task 4.
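Rows of such an index table can be reduced to a set of undirected calling relationships; the tuple layout below is a hypothetical encoding of Table 1's columns, with None standing for a null Caller/Callee.

```python
# Hypothetical rows of the application task index relationship table:
# (job_id, task_id, stage, caller_or_callee); None means no calling relationship.
rows = [
    (1, 1, 1, None),  # task 1 has no calling relationship
    (1, 2, 1, 4),     # task 2's Caller/Callee is task 4
    (1, 4, 2, 2),     # task 4's Caller/Callee is task 2 (same relationship)
]

def call_relations(rows):
    """Derive the set of calling relationships from the index table.
    frozenset collapses (2, 4) and (4, 2) into one undirected relation."""
    rels = set()
    for _, task, _, other in rows:
        if other is not None:
            rels.add(frozenset((task, other)))
    return rels
```

This is only one plausible in-memory representation; the patent stores the table in HBase and does not prescribe a row format.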
  • the at least one task relationship group is Y task relationship groups, and Y is a positive integer less than N;
  • the container allocation adjustment information is container allocation adjustment information for Y1 task relationship groups in the Y task relationship groups, and Y1 is a positive integer smaller than Y.
  • the container allocation adjustment information obtained by the slave server in the Spark on Yarn system can be dynamically adjusted according to the available container resources of the slave server, so as to avoid the situation in which containers cannot be allocated because the available container resources are insufficient, which is beneficial to improving the stability of container allocation in the system.
  • the slave server allocates containers for the N tasks according to the container allocation adjustment information.
  • after the slave server allocates containers for the N tasks according to the container allocation adjustment information, the cognitive computing server further performs the following operations:
  • the cognitive computing server allocates containers, according to a preset resource balancing policy, for the tasks among the N tasks that do not belong to any determined task relationship group.
  • the specific implementation manner in which the cognitive computing server allocates, according to a preset resource balancing policy, containers for the tasks among the N tasks that do not belong to any determined task relationship group may be:
  • the cognitive computing server obtains available resources of the container from each of the slave servers in the Spark on Yarn system;
  • the cognitive computing server allocates containers in the slave servers with the most available resources for the tasks among the N tasks that do not belong to any determined task relationship group.
  • still taking the target application corresponding to FIG. 5B and Table 1 as an example, Task 1, Task 6, and Task 10 are single tasks that have no calling relationship with any other task. Assuming that the slave servers with the most available resources in the server cluster are slave server 1, slave server 2, and slave server 3, the cognitive computing server can, according to the preset resource balancing policy, assign task 1 to slave server 1, task 6 to slave server 2, and task 10 to slave server 3.
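One way to sketch this resource balancing policy for single tasks, assuming a fixed per-container cost is deducted as containers are placed; the cost model and server names are assumptions for illustration, not from the patent.

```python
def balance_orphans(orphans, available, cost):
    """Assign each orphan task a container on the slave server that
    currently has the most available resources, deducting the container's
    cost after each placement so the load spreads across servers."""
    assignment = {}
    avail = dict(available)  # don't mutate the caller's view
    for task in orphans:
        slave = max(avail, key=avail.get)  # most available resources
        assignment[task] = slave
        avail[slave] -= cost
    return assignment

# Tasks 1, 6, 10 land on three different slaves as resources are consumed.
placement = balance_orphans(
    [1, 6, 10], {"slave1": 12, "slave2": 11, "slave3": 10}, cost=3)
```

Deducting the cost inside the loop is what makes successive orphan tasks spill onto different servers instead of all landing on the initially richest one.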
  • after the slave server allocates containers according to the container allocation adjustment information, the following operations are also performed: the slave server determines the execution order of the n tasks of the task relationship group corresponding to a target container among the allocated containers according to the execution phase parameters of the n tasks, where n is a positive integer greater than 1 and the execution phase parameters include at least two execution phases; the slave server then runs the n tasks in the target container in the above-described execution order.
  • because the slave server allocates the target container to the task relationship group, the n tasks of the task relationship group can be run continuously in the target container, whereas in the prior art the life cycle of one container is used to run only one task; this is beneficial to improving container execution efficiency.
  • the slave server destroys the target container, and recovers a Java virtual machine resource corresponding to the target container.
  • the slave server continuously runs the n tasks of the task relationship group in the target container and, after the n tasks have finished running, destroys the target container and recovers the Java virtual machine resources corresponding to the target container; that is, there is no need to perform container creation, destruction, and resource recycling for each task. This implements container reuse, which helps reduce resource overhead and improve container execution efficiency.
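The reuse scheme — one create/destroy cycle per task group instead of one per task — can be illustrated with a toy container object; this is a sketch of the life-cycle bookkeeping only, not a real JVM container.

```python
class Container:
    """Toy container that records its life cycle: created once, runs
    several tasks, destroyed once when the whole group has finished."""
    def __init__(self):
        self.alive = True
        self.log = ["create"]

    def run(self, task):
        assert self.alive, "cannot run a task in a destroyed container"
        self.log.append(f"run {task}")

    def destroy(self):
        self.alive = False
        self.log.append("destroy")  # JVM resources reclaimed here

def run_group(tasks):
    """Run all tasks of one task relationship group in a single container."""
    c = Container()
    for t in tasks:       # tasks ordered by their execution phase
        c.run(t)
    c.destroy()           # one destroy for the whole group, not per task
    return c.log
```

For a two-task group the log shows a single create/destroy pair around both runs, versus two pairs under the one-task-per-container scheme.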
  • the slave server in the Spark on Yarn system obtains the container allocation adjustment information sent by the cognitive computing server, and the container allocation adjustment information includes the mapping relationship between at least one task relationship group and a container. Because a task relationship group includes at least two tasks, the container corresponding to the task relationship group no longer corresponds to only one task but to at least two tasks in the group. Compared with the existing strategy in which one container corresponds to only one task, this is beneficial to reducing the number of container allocations, reducing resource overhead, and improving system resource utilization.
  • in order to implement the above functions, each server, such as the cognitive computing server and the slave server, includes hardware structures and/or software modules for performing the respective functions.
  • the present invention can be implemented in hardware or in a combination of hardware and computer software, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is executed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution.
  • a person skilled in the art can use different methods for implementing the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
  • the embodiments of the present invention may divide the functional units of the cognitive computing server and the slave server according to the foregoing method; for example, each functional unit may be divided according to each function, or two or more functions may be integrated into one processing unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present invention is schematic and is only a logical function division; an actual implementation may use another division manner.
  • FIG. 6A shows a possible structural diagram of the cognitive computing server involved in the above embodiment.
  • the cognitive computing server 600 includes a processing unit 602 and a communication unit 603.
  • the processing unit 602 is configured to control and manage the actions of the cognitive computing server.
  • the processing unit 602 is configured to support the cognitive computing server in performing steps S402, S403, and S404 in FIG. 4 and/or other processes for the techniques described herein.
  • the processing unit 602 is further configured to support the management platform, the cognitive center module, and the network product interface design module in the cognitive computing server in FIG. 2 to perform corresponding operations.
  • the communication unit 603 is configured to support communication between the cognitive computing server and other devices, such as communication with the slave server or the like shown in FIG.
  • the cognitive computing server may further include a storage unit 601, configured to store program code and data of the cognitive computing server, specifically for supporting the script analysis module in the cognitive computing server in FIG. 2 to store a preset container allocation policy.
  • the processing unit 602 can be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication unit 603 can be a communication interface, a transceiver, a transceiver circuit, or the like, where the communication interface is a collective name and may include one or more interfaces, for example, an interface between the cognitive computing server and the slave server and/or other interfaces.
  • the storage unit 601 can be a memory.
  • the cognitive computing server may be the cognitive computing server shown in FIG. 6B.
  • the cognitive computing server 610 includes a processor 612, a communication interface 613, and a memory 611.
  • the cognitive computing server 610 can also include a bus 614.
  • the communication interface 613, the processor 612, and the memory 611 may be connected to each other through a bus 614.
  • the bus 614 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus 614 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 6B, but it does not mean that there is only one bus or one type of bus.
  • the cognitive computing server shown in FIG. 6A or FIG. 6B can also be understood as a device for a cognitive computing server, which is not limited in the embodiment of the present invention.
  • FIG. 7A shows a possible structural diagram of the slave server involved in the above embodiment.
  • the slave server 700 includes a processing unit 702 and a communication unit 703.
  • Processing unit 702 is configured to control and manage the actions of the slave server; for example, processing unit 702 is configured to support the slave server in performing step S406 of FIG. 4 and/or other processes for the techniques described herein.
  • the processing unit 702 is further configured to support the node management module, the memory computing framework job master module, the container proxy module, and the mapping protocol computing module in the server in FIG. 2 to perform corresponding operations.
  • the communication unit 703 is configured to support communication between the server and other network entities, such as the communication between the client, the primary server, the cognitive computing server, and the like shown in FIG.
  • the slave server may further include a storage unit 701 for storing program code and data of the slave server, specifically for supporting the data files stored in the storage resources of the HDFS belonging to the slave server in FIG. 2, and for supporting the container pre-allocation results, the mapping relationship between the slave server and the target application, and the mapping relationship between the target application and multiple tasks stored in HBase.
  • the processing unit 702 can be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication unit 703 can be a communication interface, a transceiver, a transceiver circuit, or the like, where the communication interface is a collective name and may include one or more interfaces, for example, an interface between the cognitive computing server and the slave server and/or other interfaces.
  • the storage unit 701 can be a memory.
  • the slave server according to the embodiment of the present invention may be the slave server shown in FIG. 7B.
  • the slave server 710 includes a processor 712, a communication interface 713, and a memory 711.
  • the slave server 710 may also include a bus 714.
  • the communication interface 713, the processor 712, and the memory 711 may be connected to each other through a bus 714.
  • the bus 714 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus 714 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 7B, but it does not mean that there is only one bus or one type of bus.
  • the above-mentioned slave server shown in FIG. 7A or FIG. 7B can also be understood as a device for a slave server, which is not limited in the embodiment of the present invention.
  • an embodiment of the present invention further provides a container processing system, which is applied to the distributed resource management memory computing framework Spark on Yarn system shown in FIG. 2, and the container processing system includes the cognitive computing server and the slave server of any of the foregoing implementations.
  • the steps of the method or algorithm described in the embodiments of the present invention may be implemented in a hardware manner, or may be implemented by a processor executing software instructions.
  • the software instructions can be composed of corresponding software modules, which can be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a gateway device or mobility management network element. Of course, the processor and the storage medium may also exist as discrete components in the gateway device or the mobility management network element.
  • the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • Stored Programmes (AREA)

Abstract

The invention relates to a container allocation optimization method and a processing device. In the container allocation optimization method according to the invention, a cognitive computing server: acquires N tasks, N being an integer greater than 1, the N tasks being obtained by decomposing an application program; performs a calling relationship analysis on the N tasks and determines at least one task relationship group, each task relationship group comprising at least two tasks, a calling relationship existing between any two of the two or more tasks, the calling relationship being the dependency relationship between the execution results of those two tasks; generates, according to the determined task relationship group or groups, container allocation adjustment information, the container allocation adjustment information comprising the mapping relationship between at least one task relationship group and a container; and transmits the container allocation adjustment information to a selected slave server. The embodiments of the present invention advantageously make it possible to reduce resource overhead and improve memory resource utilization.
PCT/CN2016/098495 2016-09-08 2016-09-08 Procédé d'optimisation de l'attribution d'un conteneur, et dispositif de traitement WO2018045541A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/098495 WO2018045541A1 (fr) 2016-09-08 2016-09-08 Procédé d'optimisation de l'attribution d'un conteneur, et dispositif de traitement
CN201680086973.9A CN109416646B (zh) 2016-09-08 2016-09-08 一种容器分配的优化方法及处理设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/098495 WO2018045541A1 (fr) 2016-09-08 2016-09-08 Procédé d'optimisation de l'attribution d'un conteneur, et dispositif de traitement

Publications (1)

Publication Number Publication Date
WO2018045541A1 true WO2018045541A1 (fr) 2018-03-15

Family

ID=61561644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/098495 WO2018045541A1 (fr) 2016-09-08 2016-09-08 Procédé d'optimisation de l'attribution d'un conteneur, et dispositif de traitement

Country Status (2)

Country Link
CN (1) CN109416646B (fr)
WO (1) WO2018045541A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427263A (zh) * 2018-04-28 2019-11-08 深圳先进技术研究院 一种面向Docker容器的Spark大数据应用程序性能建模方法、设备及存储设备
CN113452727A (zh) * 2020-03-24 2021-09-28 北京京东尚科信息技术有限公司 一种设备云化的业务处理方法和装置
CN113568599A (zh) * 2020-04-29 2021-10-29 伊姆西Ip控股有限责任公司 用于处理计算作业的方法、电子设备和计算机程序产品
WO2023206635A1 (fr) * 2022-04-29 2023-11-02 之江实验室 Procédé de traitement de décomposition de tâche pour calcul distribué
US11907693B2 (en) 2022-04-29 2024-02-20 Zhejiang Lab Job decomposition processing method for distributed computing
CN113568599B (zh) * 2020-04-29 2024-05-31 伊姆西Ip控股有限责任公司 用于处理计算作业的方法、电子设备和计算机程序产品

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113849242A (zh) * 2020-06-12 2021-12-28 华为技术有限公司 生成、注册ui服务包、以及加载ui服务的方法及装置
CN112882818A (zh) * 2021-03-30 2021-06-01 中信银行股份有限公司 任务动态调整方法、装置以及设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009059377A1 (fr) * 2007-11-09 2009-05-14 Manjrosoft Pty Ltd Plate-forme logicielle et système pour une informatique en grille
CN101615159A (zh) * 2009-07-31 2009-12-30 中兴通讯股份有限公司 离线测试系统及其本地数据管理方法及相应的装置
CN103034475A (zh) * 2011-10-08 2013-04-10 中国移动通信集团四川有限公司 分布式并行计算方法、装置及系统
CN105897826A (zh) * 2015-11-24 2016-08-24 乐视云计算有限公司 云平台服务创建方法及系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101478499B (zh) * 2009-01-08 2012-01-04 清华大学深圳研究生院 一种多协议标签交换网络中的流量分配方法及装置
US9256467B1 (en) * 2014-11-11 2016-02-09 Amazon Technologies, Inc. System for managing and scheduling containers
CN104657214A (zh) * 2015-03-13 2015-05-27 华存数据信息技术有限公司 一种基于多队列和多优先级的大数据任务管理系统和方法
CN105512083B (zh) * 2015-11-30 2018-09-21 华为技术有限公司 基于yarn的资源管理方法、装置及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009059377A1 (fr) * 2007-11-09 2009-05-14 Manjrosoft Pty Ltd Plate-forme logicielle et système pour une informatique en grille
CN101615159A (zh) * 2009-07-31 2009-12-30 中兴通讯股份有限公司 离线测试系统及其本地数据管理方法及相应的装置
CN103034475A (zh) * 2011-10-08 2013-04-10 中国移动通信集团四川有限公司 分布式并行计算方法、装置及系统
CN105897826A (zh) * 2015-11-24 2016-08-24 乐视云计算有限公司 云平台服务创建方法及系统

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427263A (zh) * 2018-04-28 2019-11-08 深圳先进技术研究院 一种面向Docker容器的Spark大数据应用程序性能建模方法、设备及存储设备
CN110427263B (zh) * 2018-04-28 2024-03-19 深圳先进技术研究院 一种面向Docker容器的Spark大数据应用程序性能建模方法、设备及存储设备
CN113452727A (zh) * 2020-03-24 2021-09-28 北京京东尚科信息技术有限公司 一种设备云化的业务处理方法和装置
CN113452727B (zh) * 2020-03-24 2024-05-24 北京京东尚科信息技术有限公司 一种设备云化的业务处理方法、装置和可读介质
CN113568599A (zh) * 2020-04-29 2021-10-29 伊姆西Ip控股有限责任公司 用于处理计算作业的方法、电子设备和计算机程序产品
CN113568599B (zh) * 2020-04-29 2024-05-31 伊姆西Ip控股有限责任公司 用于处理计算作业的方法、电子设备和计算机程序产品
WO2023206635A1 (fr) * 2022-04-29 2023-11-02 之江实验室 Procédé de traitement de décomposition de tâche pour calcul distribué
US11907693B2 (en) 2022-04-29 2024-02-20 Zhejiang Lab Job decomposition processing method for distributed computing

Also Published As

Publication number Publication date
CN109416646A (zh) 2019-03-01
CN109416646B (zh) 2022-04-05

Similar Documents

Publication Publication Date Title
WO2018045541A1 (fr) Procédé d'optimisation de l'attribution d'un conteneur, et dispositif de traitement
JP7463544B2 (ja) ブロックチェーンメッセージ処理方法、装置、コンピュータデバイスおよびコンピュータプログラム
US10635664B2 (en) Map-reduce job virtualization
WO2018149221A1 (fr) Procédé de gestion de dispositif, et système de gestion de réseau
US10701139B2 (en) Life cycle management method and apparatus
US11579907B2 (en) Acceleration management node, acceleration node, client, and method
CN108304473B (zh) 数据源之间的数据传输方法和系统
CN105045871B (zh) 数据聚合查询方法及装置
WO2018120171A1 (fr) Procédé, dispositif et système d'exécution de procédure stockée
CN109117252B (zh) 基于容器的任务处理的方法、系统及容器集群管理系统
WO2017092505A1 (fr) Procédé, système et dispositif pour mise à l'échelle élastique de ressources virtuelles dans un environnement informatique en nuage
CN110347515B (zh) 一种适合边缘计算环境的资源优化分配方法
CN111124589B (zh) 一种服务发现系统、方法、装置及设备
WO2020125396A1 (fr) Procédé et dispositif de traitement pour données partagées et serveur
WO2019223099A1 (fr) Procédé et système d'appel de programme d'application
US11249850B1 (en) Cluster diagnostics data for distributed job execution
Al-Sinayyid et al. Job scheduler for streaming applications in heterogeneous distributed processing systems
KR101765725B1 (ko) 대용량 방송용 빅데이터 분산 병렬처리를 위한 동적 디바이스 연결 시스템 및 방법
WO2021169264A1 (fr) Procédé et appareil de planification automatique pour intergiciel de couche d'accès à une base de données
WO2022257247A1 (fr) Procédé et appareil de traitement de données, et support de stockage lisible par ordinateur
WO2018188607A1 (fr) Procédé et dispositif de traitement de flux
WO2019034091A1 (fr) Procédé de distribution pour le calcul de données distribué, dispositif, serveur et support de stockage
WO2023184917A1 (fr) Procédé et système de traitement d'informations de puissance de calcul, et passerelle de puissance de calcul
WO2017185801A1 (fr) Procédé de connexion à une table d'un système de base de données distribué, et système de base de données distribué
US10817334B1 (en) Real-time analysis of data streaming objects for distributed stream processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16915476

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16915476

Country of ref document: EP

Kind code of ref document: A1