CN116610442A - Memory allocation method and device, computer equipment and storage medium - Google Patents

Memory allocation method and device, computer equipment and storage medium

Info

Publication number
CN116610442A
Authority
CN
China
Prior art keywords
task
memory
executed
value
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310389001.9A
Other languages
Chinese (zh)
Inventor
张馨益 (Zhang Xinyi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202310389001.9A
Publication of CN116610442A
Legal status: Pending

Classifications

    • G06F9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F11/3017: Monitoring arrangements specially adapted to a computing system implementing multitasking
    • G06F11/3055: Monitoring the status of the computing system or of a computing system component, e.g. whether it is on, off, available or not available
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • G06F9/5038: Allocation of resources to a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F2009/45583: Memory management, e.g. access or allocation
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure provides a memory allocation method and apparatus, a computer device, and a storage medium. The method includes: acquiring a task to be executed and determining a target task parameter from the task parameters of the task to be executed, where the target task parameter is a task parameter related to the amount of memory occupied by the task; determining a target prediction model corresponding to the task to be executed; inputting the parameter value of the target task parameter into the target prediction model to obtain a predicted memory value; and adjusting the memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value. In the embodiments of the present disclosure, the memory required by the task to be executed can be predicted by the target prediction model, and the memory limit value of the corresponding operation container can be adjusted based on the predicted memory value, saving substantial memory resources while ensuring that the task executes successfully.

Description

Memory allocation method and device, computer equipment and storage medium
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a memory allocation method, a memory allocation device, computer equipment and a storage medium.
Background
Generally, before a server executes an operation task, the memory that the task will occupy may be predicted, and memory resources may be assigned to the task according to the prediction result. For example, when the operation task is a video processing task such as video transcoding, memory resources may be allocated to the video processing task according to the video type of the video to be transcoded.
However, because many factors determine the memory required to transcode a video, this allocation approach easily assigns memory resources that are too high or too low. Assigning too much causes substantial resource waste: during peak processing periods, the memory allocation rate can reach 100% while the utilization rate reaches only 10%, a serious waste of resources. Assigning too little causes the task to fail, reducing the task execution success rate.
Disclosure of Invention
The embodiment of the disclosure at least provides a memory allocation method, a memory allocation device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a memory allocation method, the method including:
acquiring a task to be executed, and determining a target task parameter from task parameters of the task to be executed, wherein the target task parameter is a task parameter related to the memory amount occupied by the task to be executed;
Determining a target prediction model corresponding to the task to be executed;
inputting the parameter value of the target task parameter into the target prediction model to obtain a predicted memory value;
and adjusting the memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value.
In an optional implementation manner, the determining the target prediction model corresponding to the task to be performed includes:
determining a historical task matched with the task to be executed;
determining task data of the historical task, wherein the task data comprises a memory occupation threshold value and target task parameters of the historical task;
constructing a training sample set corresponding to the original prediction model based on the task data;
and training the original prediction model according to the training sample set to obtain the target prediction model.
In an alternative embodiment, the determining task data of the historical task includes:
setting a memory monitoring process based on the historical task;
and controlling the memory monitoring process, collecting memory occupation data of the historical task according to a preset time interval, and determining a memory occupation threshold value in the memory occupation data.
In an alternative embodiment, the method further comprises:
after a memory occupation threshold value is determined in the memory occupation data, determining the running state of the memory monitoring process;
determining, based on the running state, the duration for which the memory monitoring process has been in a target state, wherein the target state indicates that the memory monitoring process is not collecting memory occupation data;
and under the condition that the running time exceeds a time threshold, determining that the confidence of the memory occupation threshold does not meet a confidence condition.
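The monitoring and confidence checks described above can be sketched in Python. The class below is an illustrative assumption rather than the disclosure's implementation; the time threshold is a placeholder value:

```python
class MemoryMonitor:
    """Sketch of the memory monitoring process: it receives memory-usage
    samples collected at a preset interval and keeps the peak as the memory
    occupation threshold. If the monitor stays idle (not collecting) longer
    than a time threshold, the recorded peak is treated as low-confidence."""

    def __init__(self, idle_time_threshold_s=60.0):
        self.idle_time_threshold_s = idle_time_threshold_s
        self.samples = []

    def record(self, usage_mib):
        """Store one memory-usage sample (in MiB)."""
        self.samples.append(usage_mib)

    def occupation_threshold(self):
        """The occupation threshold is the maximum sampled usage."""
        return max(self.samples) if self.samples else 0

    def threshold_is_confident(self, idle_duration_s):
        """The confidence condition fails once the idle duration exceeds
        the configured time threshold."""
        return idle_duration_s <= self.idle_time_threshold_s
```

In use, the sampling loop would call `record` at each preset interval and query `threshold_is_confident` with the observed idle duration before trusting the peak.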
In an alternative embodiment, when the task to be executed is a task for transcoding a video to be processed, the target task parameter includes at least one of the following: the video type of the video to be processed, the definition of the target code stream, the coding mode of the target code stream, the dynamic range of the target code stream, and pixel information of the video to be processed.
In an optional implementation manner, the adjusting the memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value includes:
determining the task priority of the task to be executed, and determining a preset multiplier matched with the task priority;
and calculating the product of the predicted memory value and the preset multiplier to obtain the memory limit value.
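A minimal sketch of this multiplier step, assuming hypothetical priority levels and multiplier values (none of which are specified by the disclosure):

```python
# Hypothetical priority-to-multiplier table: higher-priority tasks get more
# headroom above the predicted memory value.
PRESET_MULTIPLIERS = {"high": 2.0, "medium": 1.5, "low": 1.2}

def memory_limit(predicted_mib, task_priority):
    """Memory limit = predicted memory value x the preset multiplier
    matched with the task priority."""
    return predicted_mib * PRESET_MULTIPLIERS[task_priority]
```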
In an alternative embodiment, the method further comprises:
after the memory limit value of the operation container corresponding to the task to be executed is adjusted, monitoring the execution state of the task to be executed;
and when it is determined, based on the execution state, that the task to be executed has failed, adjusting the memory limit value of the operation container according to a preset proportion and executing the task based on the adjusted operation container, until the task is executed successfully.
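The retry behaviour above can be sketched as follows; the scale factor and attempt cap are assumed values, and `run_task` is a hypothetical stand-in for whatever executes the task in its container:

```python
def execute_with_retry(run_task, initial_limit_mib, scale=1.5, max_attempts=5):
    """Re-run a failed task with the operation container's memory limit
    enlarged by a preset proportion, until the task succeeds."""
    limit = initial_limit_mib
    for _ in range(max_attempts):
        if run_task(limit):  # run_task returns True on success
            return limit
        limit *= scale  # adjust the limit by the preset proportion
    raise RuntimeError("task still failing after %d attempts" % max_attempts)
```

For example, a task that needs 2000 MiB but was initially limited to 1000 MiB would succeed on the third attempt, at a limit of 2250 MiB.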
In a second aspect, an embodiment of the present disclosure further provides a memory allocation device, including:
the first determining unit is used for acquiring a task to be executed and determining a target task parameter from task parameters of the task to be executed, wherein the target task parameter is a task parameter related to the memory quantity occupied by the task to be executed;
the second determining unit is used for determining a target prediction model corresponding to the task to be executed;
the prediction unit is used for inputting the parameter value of the target task parameter into the target prediction model to obtain a predicted memory value;
and the adjusting unit is used for adjusting the memory limit value of the running container corresponding to the task to be executed based on the predicted memory value.
In a third aspect, embodiments of the present disclosure further provide a computer device, including: a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or of any of its possible implementations.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
The disclosure provides a memory allocation method and apparatus, a computer device, and a storage medium. In the embodiment of the present disclosure, after a task to be executed is acquired, a target task parameter may be determined from the task parameters of the task to be executed, where the target task parameter is a task parameter associated with the amount of memory occupied by the task. Then, a target prediction model corresponding to the task to be executed can be determined, and the parameter value of the target task parameter can be input into the target prediction model to obtain a predicted memory value, i.e., the expected available memory for the task. Based on the predicted memory value, the memory limit value of the operation container corresponding to the task can then be adjusted, saving substantial memory resources while ensuring that the task executes successfully.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. The drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may obtain other relevant drawings from them without inventive effort.
Fig. 1 shows a flowchart of a memory allocation method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a flowchart provided by an embodiment of the present disclosure for determining a target prediction model corresponding to the task to be performed;
FIG. 3 is a schematic diagram illustrating a process for determining a memory occupancy threshold for a historical task according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a memory allocation device according to an embodiment of the disclosure;
fig. 5 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
It has been found that, generally, before a server executes an operation task, the memory that the task will occupy may be predicted, and memory resources may be assigned to the task according to the prediction result. For example, when the operation task is a video processing task such as video transcoding, memory resources may be allocated to the video processing task according to the video type of the video to be transcoded.
However, because many factors determine the memory required to transcode a video, this allocation approach easily assigns memory resources that are too high or too low. Assigning too much causes substantial resource waste: during peak processing periods, the memory allocation rate can reach 100% while the utilization rate reaches only 10%, a serious waste of resources. Assigning too little causes the task to fail, reducing the task execution success rate.
Based on the above study, the present disclosure provides a memory allocation method and apparatus, a computer device, and a storage medium. In the embodiment of the present disclosure, after a task to be executed is acquired, a target task parameter may be determined from the task parameters of the task to be executed, where the target task parameter is a task parameter associated with the amount of memory occupied by the task. Then, a target prediction model corresponding to the task to be executed can be determined, and the parameter value of the target task parameter can be input into the target prediction model to obtain a predicted memory value, i.e., the expected available memory for the task. Based on the predicted memory value, the memory limit value of the operation container corresponding to the task can then be adjusted, saving substantial memory resources while ensuring that the task executes successfully.
For the sake of understanding the present embodiment, first, a detailed description will be given of a memory allocation method disclosed in the embodiments of the present disclosure, where an execution body of the memory allocation method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability. In some possible implementations, the memory allocation method may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a memory allocation method according to an embodiment of the present disclosure is shown, where the method includes steps S101 to S107, where:
s101: and acquiring a task to be executed, and determining a target task parameter from task parameters of the task to be executed, wherein the target task parameter is a task parameter related to the memory amount occupied by the task to be executed.
In the embodiment of the present disclosure, the task to be executed may be a task related to video processing, for example a video transcoding task. Video transcoding refers to converting a compressed and encoded video bitstream into another video bitstream, so as to adapt to different network bandwidths, terminal processing capacities, and user requirements. Specifically, the video bitstream to be processed may first be decoded and then re-encoded according to a preset video encoding standard.
The task parameters of the task to be executed may be parameters related to task execution, so that target task parameters related to the memory occupation amount, such as task types, task execution requirements, data information to be processed, and the like, may be determined from the task parameters.
It should be understood that task parameters corresponding to different types of tasks to be executed may differ, so the target task parameters corresponding to each task type may be predetermined. For example, suppose there is a task to be executed 1 and a task to be executed 2, where the task parameter set of task 1 is {A1, A2, A3, …, A10} and the task parameter set of task 2 is {B1, B2, B3, …, B11}. The target task parameters determined for task 1 may be A1, A2, A3, and A4, and those determined for task 2 may be B1, B2, and B3.
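The selection step can be sketched with the hypothetical task types and parameter names from the example above:

```python
# Predetermined target task parameters per task type (hypothetical,
# following the example in the text).
TARGET_PARAMS_BY_TASK = {
    "task_1": ("A1", "A2", "A3", "A4"),
    "task_2": ("B1", "B2", "B3"),
}

def select_target_params(task_type, task_params):
    """Keep only the task parameters related to the memory occupation
    amount of the task to be executed."""
    return {name: task_params[name] for name in TARGET_PARAMS_BY_TASK[task_type]}
```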
S103: and determining a target prediction model corresponding to the task to be executed.
In the embodiment of the disclosure, the memory resources occupied by the task to be executed may be predicted by a target prediction model. The target prediction model may be based on the XGBoost algorithm, which predicts the memory value the task needs to occupy through the decision trees it constructs.
When determining the target prediction model, an original XGBoost model can first be obtained, and a training sample set constructed to train it, adjusting its algorithm parameters until its prediction results meet the accuracy requirement; this yields the target prediction model. The specific training process is described below and is not repeated here.
S105: and inputting the parameter value of the target task parameter into the target prediction model to obtain a predicted memory value.
In the embodiment of the disclosure, the weak learner in the target prediction model may be initialized first to obtain the initial learner f0(x). The initial learner may then be iterated based on the algorithm parameters of the target prediction model, where the number n of decision trees in the XGBoost algorithm may be determined by the number of iterations in the algorithm parameters, and the number m of layers of each decision tree may be determined by the depth parameter.
Specifically, based on the learning rate r1 in the algorithm parameters, the residuals f1(x), …, fn(x) determined from each decision tree are fitted to obtain the predicted memory value f(x), where f(x) = f0(x) + r1 × (f1(x) + f2(x) + … + fn(x)). For example, when n = 3 and r1 = 0.1, the residual of each decision tree can be determined to obtain the residual values f1(x), f2(x), and f3(x), and the predicted memory value calculated from them is f(x) = f0(x) + 0.1 × (f1(x) + f2(x) + f3(x)).
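A small sketch of this additive prediction, with constant functions standing in for the fitted initial learner and residual trees:

```python
def boosted_prediction(f0, residual_trees, learning_rate):
    """f(x) = f0(x) + r1 * (f1(x) + ... + fn(x)): the initial learner plus
    the learning-rate-scaled sum of the residuals fitted by each tree."""
    def f(x):
        return f0(x) + learning_rate * sum(tree(x) for tree in residual_trees)
    return f
```

With n = 3, r1 = 0.1, an initial learner returning 2.0, and trees returning 1.0, 2.0, and 3.0, the prediction is 2.0 + 0.1 × 6.0 = 2.6.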
S107: and adjusting the memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value.
In the embodiment of the disclosure, when the server side processes the task to be executed, the task may be executed in an operation container, whose capacity is typically fixed. Therefore, memory resources need to be allocated for the task in advance, and the operation container generated based on those resources. The operation container has two container attributes: a limit value (the memory actually available to the container) and a request value (the memory the container is expected to use), where the limit value is the memory limit value and is generally greater than the request value.
Considering that the predicted memory value determined by the target prediction model is the memory occupied by the task under ideal conditions, while the memory actually occupied during execution fluctuates, once the actually occupied memory exceeds the limit value of the operation container, the container crashes and the task fails.
Therefore, the predicted memory value can be used as the request value of the operation container, and a Kubernetes resource management tool can be used to adjust the limit value based on the request value, ensuring that the memory actually occupied by the operation container while running the task does not exceed the adjusted limit value and thereby improving the task execution success rate.
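As an illustrative sketch (the field names follow the Kubernetes container resources schema, and the 1.5 multiplier is an assumed headroom factor, not a value from the disclosure):

```python
def container_resources(predicted_mib, limit_multiplier=1.5):
    """Build a Kubernetes-style resources spec: the predicted memory value
    becomes the request, and the limit is adjusted upward from it so that
    runtime fluctuations do not exceed the limit."""
    return {
        "requests": {"memory": "%dMi" % int(predicted_mib)},
        "limits": {"memory": "%dMi" % int(predicted_mib * limit_multiplier)},
    }
```

The resulting dictionary corresponds to the `resources` block of a pod's container spec, which the resource management tooling would apply when creating the operation container.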
As can be seen from the foregoing description, in the embodiments of the present disclosure, after a task to be executed is acquired, a target task parameter may be determined from the task parameters of the task to be executed, where the target task parameter is a task parameter associated with the amount of memory occupied by the task. Then, a target prediction model corresponding to the task can be determined, and the parameter value of the target task parameter can be input into the target prediction model to obtain a predicted memory value, i.e., the expected available memory for the task. Based on the predicted memory value, the memory limit value of the operation container corresponding to the task can then be adjusted, saving substantial memory resources while ensuring that the task executes successfully.
In an optional embodiment, when the task to be performed is a task for transcoding a video to be processed, the target task parameter includes at least one of the following: video type of the video to be processed, target code stream definition, coding mode, target code stream dynamic range, pixel information of the video to be processed.
As can be seen from the above embodiment corresponding to fig. 1, the target task parameter is a task parameter related to the memory occupation amount of the task to be executed, and the target task parameter may include a task type, a task execution requirement, data information to be processed, and the like.
Specifically, the task type may be a video type of the video to be processed, for example, a movie, a television show, a short video, or the like. The task execution requirements may include the target code stream definition, the coding mode of the target code stream, the target code stream dynamic range, and the like, where the target code stream is a video code stream obtained after the task video to be processed is transcoded again. The data information to be processed may be pixel information of the video to be processed, for example, pixel values of the video to be processed.
In the embodiment of the disclosure, the task parameters most strongly correlated with the memory occupation amount of the task to be executed can be extracted as the target task parameters based on the underlying execution logic of the task, so that predictions made from the values of these parameters yield a more accurate predicted memory value.
In an alternative embodiment, as shown in fig. 2, a flowchart of determining the target prediction model corresponding to the task to be executed in the step S103 includes the following steps:
S11: and determining a historical task matched with the task to be executed.
S12: and determining task data of the historical task, wherein the task data comprises a memory occupation threshold value and target task parameters of the historical task.
S13: and constructing a training sample set corresponding to the original prediction model based on the task data.
In the embodiment of the disclosure, a historical task matched with the task to be executed may be determined first; for example, when the task to be executed is a video decoding task, the historical task may be a historical video decoding task. The type of the video decoded by the historical task may be the same as or different from that of the task to be executed; the present disclosure does not limit this. After determining the historical task, its task data may be obtained, including the memory occupation threshold of the historical task and its target task parameters.
In implementation, the parameter values of the target task parameters of the historical task can be read directly from the task log. Then, the monitoring result of a resource monitoring system (for example, the Prometheus monitoring system) on the operation container of the historical task can be obtained, and the memory occupation threshold of the historical task, that is, the maximum memory occupied during execution, can be determined based on the monitoring result. A training sample set may then be constructed based on the target task parameters and the memory occupation threshold.
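The sample construction can be sketched as below; the record fields (`params`, `peak_memory_mib`) are hypothetical stand-ins for the task-log parameters and the monitored memory occupation threshold:

```python
def build_training_set(history_tasks):
    """Pair each historical task's target-parameter values (features) with
    its observed memory occupation threshold (label)."""
    features, labels = [], []
    for task in history_tasks:
        # Sort keys so every sample has the same feature order.
        features.append([task["params"][k] for k in sorted(task["params"])])
        labels.append(task["peak_memory_mib"])
    return features, labels
```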
S14: and training the original prediction model according to the training sample set to obtain the target prediction model.
In the embodiment of the disclosure, when the original prediction model is trained based on the training sample set, the parameter values of the target task parameters may be input into the original prediction model, and the algorithm parameters of the original prediction model are adjusted based on the difference between the prediction result of the original prediction model and the corresponding memory occupation threshold, until the difference between the prediction result and the corresponding memory occupation threshold meets the accuracy requirement.
It should be understood that, in actual use, ensuring that the task to be executed executes successfully matters more than perfectly predicting the memory value of the memory resources it will occupy, so the predicted memory value produced by the target prediction model for the task to be executed may be slightly larger than the actual memory value. Based on this, the accuracy requirement may be that the prediction result for the historical task is greater than the memory occupation threshold, and that the difference between the prediction result and the memory occupation threshold is within a preset difference.
Here, when setting the penalty rule for the training process of the original prediction model, prediction results smaller than the actual value can be penalized more heavily, so that the finally obtained target prediction model meets the accuracy requirement. Specifically, the model effect can be measured by the RMSLE (Root Mean Squared Logarithmic Error) index until the target prediction model meets the accuracy requirement.
Considering that the prediction result of the original prediction model is usually a large value, for example 1 GB, computing errors directly on such values is costly and makes it harder to measure the effect of the original prediction model accurately. Based on this, when computing the RMSLE index, a logarithmic operation (that is, taking the logarithm) may be applied to the prediction result and to the actual memory occupation threshold of the historical task before computing the squared error between them, so as to reduce the magnitude of the values and make the prediction results of the original prediction model easier to evaluate.
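As a concrete illustration of the RMSLE index described above, the following minimal Python sketch computes it from prediction results and actual memory occupation thresholds; the function name and the use of `log1p` (the conventional log(1 + x) form of RMSLE) are illustrative assumptions:

```python
import math

def rmsle(predicted, actual):
    """Root mean squared logarithmic error: taking log1p of both values first
    damps large magnitudes (e.g. byte counts around 1 GB), so relative rather
    than absolute error is measured."""
    total = sum((math.log1p(p) - math.log1p(a)) ** 2
                for p, a in zip(predicted, actual))
    return math.sqrt(total / len(predicted))
```

Because the metric is relative, over-predicting 1.1 GB for a 1 GB task costs about the same as over-predicting 110 MB for a 100 MB task, which suits byte-scale targets.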
In addition, in the embodiment of the disclosure, the original prediction model may be optimized by presetting some of its algorithm parameters, so that when the finally obtained target prediction model predicts the task to be executed, the resulting predicted memory value is greater than the actual memory value of the memory resources occupied by the task to be executed during running. For example, the aforementioned learning rate parameter may be set to 0.1, and the aforementioned depth parameter may be set to 5.
In the embodiment of the disclosure, when the original prediction model is trained on the training sample set, the training can be guided by the accuracy requirement, so that when the finally obtained target prediction model predicts the task to be executed, the resulting predicted memory value is greater than the actual memory value of the memory resources occupied by the task to be executed during running, thereby ensuring that the task to be executed executes successfully.
In an optional embodiment, the step S12 of determining task data of the historical task specifically includes the following steps:
S121: and setting a memory monitoring process based on the historical task.
In the embodiment of the present disclosure, it can be seen from the above that the memory value occupied by the running container of the historical task during running may be monitored by the resource monitoring system. However, when the resource monitoring system is the aforementioned Prometheus monitoring system, Prometheus only samples the rss value (i.e., the aforementioned memory occupation data) recorded in the kernel of the running container, so the memory occupation threshold over the running process of the historical task cannot be dynamically obtained.
Based on this, a memory monitoring process may be set. Specifically, the memory monitoring process may be a custom agent; the agent may collect the rss value in the cgroup of the running container over the life cycle of the running container (the life cycle runs from the start of execution of the historical task until its end, and the running container may be killed after the life cycle ends), and determine the maximum rss value as the memory occupation threshold of the historical task.
S122: and controlling the memory monitoring process, collecting memory occupation data of the historical task according to a preset time interval, and determining a memory occupation threshold value in the memory occupation data.
In the embodiment of the disclosure, as shown in fig. 3, which is a schematic diagram of the process of determining the memory occupation threshold of a historical task, the agent may first be mounted, through a daemon process of the running container, on the path where the rss file of the running container is located, and the cgroup of the running container is called to obtain the container memory information (i.e., the rss value).
It should be understood that the agent may collect the rss value of the running container at the preset time interval; for example, the preset time interval may be set to 1 s. When storing the collected rss value, an overwrite storage method can be adopted: the rss value X1 collected this time is compared with the rss value X2 collected last time; if X1 > X2, X2 is discarded and X1 is stored; if X1 < X2, X1 is discarded and X2 is kept. In this way, after the historical task finishes executing, the rss value stored at that moment can be read and used as the memory occupation threshold.
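The overwrite storage loop above can be sketched as follows. This is an illustrative Python sketch in which `read_rss` and `task_done` are hypothetical stand-ins for reading the running container's cgroup rss value and checking whether the historical task has finished; no actual cgroup path is assumed:

```python
import time

def monitor_peak_rss(read_rss, task_done, interval=1.0, sleep=time.sleep):
    """Poll read_rss() every `interval` seconds until task_done() is True,
    keeping only the larger of the new sample X1 and the stored value X2
    (the overwrite storage described above). Returns the peak rss value."""
    peak = 0
    while not task_done():
        x1 = read_rss()
        if x1 > peak:  # keep X1 when X1 > X2, otherwise keep X2
            peak = x1
        sleep(interval)
    return peak
```

Injecting `sleep` as a parameter keeps the loop testable without real delays; a deployed agent would use the default `time.sleep`.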
In addition, as shown in fig. 3, a liveness probe may be set for the agent to monitor whether the agent is running normally and to restart the agent when it fails, so as to avoid missing the memory occupation threshold because the agent is not running.
In the embodiment of the disclosure, a memory monitoring process may be set to collect memory occupation data of a historical task in a process of executing the historical task, so as to provide a technical basis for determining a memory occupation threshold of the historical task based on the memory occupation data.
In an alternative embodiment, the embodiment corresponding to the step S12 further includes the following procedure:
(1) Determining the running state of the memory monitoring process after determining a memory occupation threshold value in the memory occupation data;
(2) Determining the running duration of the memory monitoring process in a target state based on the running state, wherein the target state indicates a state in which the memory monitoring process is not performing the action of collecting memory occupation data;
(3) And under the condition that the running time exceeds a time threshold, determining that the confidence of the memory occupation threshold does not meet a confidence condition.
In the embodiment of the disclosure, considering that the memory monitoring process may fail during operation, it may miss collecting the memory occupation threshold while it is being restarted by the liveness probe. Therefore, as shown in fig. 3, dirty data detection may be performed on the memory occupation threshold collected by the memory monitoring process.
In specific implementation, after the memory occupation threshold is determined, the running state of the memory monitoring process during the collection of the memory occupation data can be determined, and when it is determined based on the running state that the memory monitoring process was in the target state during the collection of the memory occupation data, the running duration of the memory monitoring process in the target state can be determined.
Then, a time threshold may be obtained; when the running duration exceeds the time threshold, the memory monitoring process is considered to have missed collecting the real memory occupation threshold, so the confidence of the determined memory occupation threshold is low and the confidence condition is not satisfied. It should be appreciated that a memory occupation threshold that does not satisfy the confidence condition may be determined as dirty data and marked, so as to avoid training the original prediction model with dirty data and affecting the training result.
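One possible realization of this confidence check is sketched below. The gap-based test over agent sample timestamps is an assumption: the text only specifies that downtime beyond a time threshold invalidates the collected threshold, and consecutive-sample gaps are one concrete way to measure that downtime:

```python
def threshold_is_trusted(sample_times, time_threshold):
    """Return True when the memory occupation threshold can be trusted.

    sample_times: ascending timestamps of the agent's rss samples.
    If any gap between consecutive samples exceeds time_threshold, the agent
    was down long enough that the real peak may have been missed, so the
    collected threshold is treated as dirty data (returns False)."""
    gaps = (later - earlier
            for earlier, later in zip(sample_times, sample_times[1:]))
    return all(gap <= time_threshold for gap in gaps)
```

Samples flagged as dirty would then be excluded from the training sample set rather than deleted, so the exclusion itself remains auditable.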
In the embodiment of the disclosure, considering that the memory monitoring process may fail during operation, it may miss collecting the memory occupation threshold while it is being restarted by the liveness probe. Therefore, dirty data detection can be performed on the memory occupation threshold collected by the memory monitoring process, so as to improve the confidence of the determined memory occupation threshold.
In an optional embodiment, the step S107 adjusts the memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value, which specifically includes the following steps:
S1071: and determining the task priority of the task to be executed, and determining a preset multiple value matched with the priority.
S1072: and calculating the product of the predicted memory value and the preset multiple value to obtain the memory limit value.
In the embodiment of the present disclosure, a corresponding task priority may be set in advance based on the task type of the task to be executed, where tasks with more important task types may be given higher priority. For example, when the task to be executed is the aforementioned video decoding task, a video decoding task whose corresponding video type is a film or television work may be set to high priority, and a video decoding task whose corresponding video type is a short video may be set to normal priority. Based on this, after the target task parameters of the task to be executed are determined, the video type parameter of the video to be processed can be obtained from the target task parameters, and the task priority preset for that video type can be determined.
As can be seen from the embodiment corresponding to fig. 1 above, the running container for executing the task to be executed has two container attributes: a limit value (the memory actually available to the container) and a request value (the memory the container is expected to use), and the predicted memory value may be used as the request value of the running container. During actual running, the memory occupied by the task to be executed fluctuates, and once the actually occupied memory exceeds the limit value of the running container, the container crashes and the task execution fails.
Based on this, the limit value may be set to a multiple of the request value, for example limit = request × 3. Specifically, corresponding preset multiple values may be set for different task priorities: for example, for high priority the limit may be request × 3 × 2, that is, a preset multiple value of 6; for normal priority the limit may be request × 3, that is, a preset multiple value of 3.
In addition, in order to ensure the execution success rate of high-priority tasks to be executed as much as possible, a lower limit on the request value may be set for high-priority tasks to be executed; when the predicted memory value output by the target prediction model is below this lower limit, the lower limit is used as the request value of the running container of the task to be executed. For example, when the lower limit of the request value for a task to be executed is 1 GB, if the predicted memory value output by the target prediction model is 300 MB, 1 GB may be used as the request value of the running container, and the limit value is calculated based on that request value.
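The request/limit computation described in this step can be sketched as follows. The priority names, the multiple table, and the floor handling are illustrative assumptions derived from the examples above:

```python
GIB = 1 << 30  # one gibibyte in bytes

# Example preset multiple values from the text: 6x for high, 3x for normal.
PRESET_MULTIPLE = {"high": 6, "normal": 3}

def container_limits(predicted_bytes, priority, request_floor=None):
    """Return (request, limit) for the running container.

    request_floor models the lower bound on the request value that may be
    configured for high-priority tasks; None means no floor."""
    request = predicted_bytes
    if request_floor is not None and request < request_floor:
        request = request_floor  # e.g. raise a 300 MB prediction to 1 GiB
    return request, request * PRESET_MULTIPLE[priority]
```

Keeping the multiple table as data (rather than branching logic) makes it easy to add further priority tiers without touching the computation.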
In the embodiment of the disclosure, tasks to be executed may have different task priorities, and corresponding preset multiple values can be set for the different priorities, so that the memory limit value is determined from the product of the preset multiple value and the predicted memory value. This increases the memory resources allocated to high-priority tasks to be executed, thereby improving their execution success rate.
In an alternative embodiment, the foregoing embodiment corresponding to fig. 1 further includes the following procedure:
(1) After the memory limit value of the operation container corresponding to the task to be executed is adjusted, the execution state of the task to be executed is monitored;
(2) And when the execution failure of the task to be executed is determined based on the execution state, adjusting the memory limit value of the operation container according to a preset proportion, and executing the task to be executed based on the adjusted operation container until the task to be executed is successfully executed.
In the embodiment of the disclosure, considering that an error may exist in a predicted memory value determined based on a target prediction model, or that an execution fault may occur in an execution process of a task to be executed, after a memory limit value of an operation container corresponding to the task to be executed is adjusted, an execution state of the task to be executed may be monitored.
When the failure of the task to be executed is detected, the cause of the failure can be analyzed; when the failure is caused by insufficient memory of the running container, the memory limit value is adjusted so as to reallocate memory resources for the task to be executed.
In implementation, the limit value of the previous operation container may be determined as the request value of the new operation container, and the limit value corresponding to the new operation container is calculated based on the task priority of the task to be executed, where the specific calculation process is described in the embodiment corresponding to step S107 and is not repeated herein. Then, a new run container may be generated based on the determined request value and limit value, and the task to be performed may be performed through the run container.
In addition, a limit upper limit value of the operation container may be set, and when the calculated limit value of the new operation container exceeds the limit upper limit value, the limit value of the new operation container may be determined as the limit upper limit value. Here, limit upper limit values corresponding to tasks to be executed of different task priorities may be different.
For example, for a task to be executed of the aforementioned normal priority, the limit upper limit value may be set to 8 GB; for a task to be executed of the aforementioned high priority, the limit upper limit value may be set to unlimited.
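The retry escalation with a per-priority limit cap can be sketched as follows. The table values mirror the examples above (multiple 6 for high, 3 for normal; 8 GB cap for normal, no cap for high), and all names are illustrative assumptions:

```python
GIB = 1 << 30  # one gibibyte in bytes

PRESET_MULTIPLE = {"high": 6, "normal": 3}
LIMIT_CAP = {"normal": 8 * GIB, "high": None}  # None = unlimited

def limits_after_failure(prev_limit, priority):
    """On execution failure, the previous container's limit becomes the new
    container's request, a new limit is derived from the priority multiple,
    and the per-priority upper bound caps the escalation."""
    request = prev_limit
    limit = request * PRESET_MULTIPLE[priority]
    cap = LIMIT_CAP[priority]
    if cap is not None and limit > cap:
        limit = cap
    return request, limit
```

Because each retry starts from the previous limit, the allocated memory grows geometrically until either the task succeeds or the cap is reached.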
In the embodiment of the disclosure, considering that an error may exist in a predicted memory value determined based on a target prediction model or that an execution failure may occur in an execution process of a task to be executed, after the task to be executed fails to execute, memory resources may be allocated to the task to be executed again, so that the execution success rate of the task to be executed is improved.
In summary, in the embodiments of the present disclosure, after a task to be executed is obtained, a target task parameter may be determined from the task parameters of the task to be executed, where the target task parameter is a task parameter associated with the amount of memory occupied by the task to be executed. Then, a target prediction model corresponding to the task to be executed can be determined, and the parameter value of the target task parameter is input into the target prediction model to obtain a predicted memory value, which is the expected available memory of the task to be executed. Finally, the memory limit value of the running container corresponding to the task to be executed can be adjusted based on the predicted memory value, thereby saving a large amount of memory resources while ensuring that the task to be executed executes successfully.
It will be appreciated by those skilled in the art that, in the methods of the above specific embodiments, the written order of the steps does not imply a strict order of execution; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide a memory allocation device corresponding to the memory allocation method, and since the principle of solving the problem by the device in the embodiments of the present disclosure is similar to that of the memory allocation method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 4, a schematic diagram of a memory allocation device according to an embodiment of the present disclosure is shown, where the device includes: a first determining unit 41, a second determining unit 42, a prediction unit 43, and an adjusting unit 44; wherein:
a first determining unit 41, configured to obtain a task to be executed, and determine a target task parameter from task parameters of the task to be executed, where the target task parameter is a task parameter associated with an amount of memory occupied by the task to be executed;
a second determining unit 42, configured to determine a target prediction model corresponding to the task to be executed;
A prediction unit 43, configured to input a parameter value of the target task parameter to the target prediction model, to obtain a predicted memory value;
and the adjusting unit 44 is configured to adjust a memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value.
In the embodiment of the present disclosure, after a task to be executed is acquired, a target task parameter may be determined from the task parameters of the task to be executed, where the target task parameter is a task parameter associated with the amount of memory occupied by the task to be executed. Then, a target prediction model corresponding to the task to be executed can be determined, and the parameter value of the target task parameter is input into the target prediction model to obtain a predicted memory value, which is the expected available memory of the task to be executed. Finally, the memory limit value of the running container corresponding to the task to be executed can be adjusted based on the predicted memory value, thereby saving a large amount of memory resources while ensuring that the task to be executed executes successfully.
In a possible implementation manner, the second determining unit 42 is further configured to:
determining a historical task matched with the task to be executed;
Determining task data of the historical task, wherein the task data comprises a memory occupation threshold value and target task parameters of the historical task;
constructing a training sample set corresponding to the original prediction model based on the task data;
and training the original prediction model according to the training sample set to obtain the target prediction model.
In a possible implementation manner, the second determining unit 42 is further configured to:
setting a memory monitoring process based on the historical task;
and controlling the memory monitoring process, collecting memory occupation data of the historical task according to a preset time interval, and determining a memory occupation threshold value in the memory occupation data.
In a possible implementation manner, the second determining unit 42 is further configured to:
after a memory occupation threshold value is determined in the memory occupation data, determining the running state of the memory monitoring process;
determining the operation time length of the memory monitoring process in a target state based on the operation state, wherein the target state is used for indicating a state that the memory monitoring process does not execute the action of collecting memory occupied data;
and under the condition that the running time exceeds a time threshold, determining that the confidence of the memory occupation threshold does not meet a confidence condition.
In a possible implementation manner, when the task to be performed is a task for transcoding a video to be processed, the target task parameters include at least one of the following: the method comprises the steps of video type of a video to be processed, target code stream definition, a coding mode of a target code stream, a dynamic range of the target code stream and pixel information of the video to be processed.
In a possible embodiment, the adjusting unit 44 is further configured to:
determining the task priority of the task to be executed, and determining a preset multiple value matched with the priority;
and calculating the product of the predicted memory value and the preset multiple value to obtain the memory limit value.
In a possible embodiment, the device is further configured to:
after the memory limit value of the operation container corresponding to the task to be executed is adjusted, monitoring the execution state of the task to be executed;
and when determining that the task to be executed fails to be executed based on the execution state, adjusting the memory limit value of the operation container according to a preset proportion, and executing the task to be executed based on the adjusted operation container until the task to be executed is successfully executed.
The process flow of each unit in the apparatus and the interaction flow between units may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Corresponding to the memory allocation method in fig. 1, the embodiment of the present disclosure further provides a computer device 500, as shown in fig. 5, which is a schematic structural diagram of the computer device 500 provided in the embodiment of the present disclosure, including:
a processor 51, a memory 52, and a bus 53. The memory 52 is used to store execution instructions and includes an internal memory 521 and an external storage 522; the internal memory 521 is used to temporarily store operation data in the processor 51 and data exchanged with the external storage 522 such as a hard disk, and the processor 51 exchanges data with the external storage 522 through the internal memory 521. When the computer device 500 runs, the processor 51 and the memory 52 communicate through the bus 53, so that the processor 51 executes the following instructions:
acquiring a task to be executed, and determining a target task parameter from task parameters of the task to be executed, wherein the target task parameter is a task parameter related to the memory amount occupied by the task to be executed;
determining a target prediction model corresponding to the task to be executed;
inputting the parameter value of the target task parameter into the target prediction model to obtain a predicted memory value;
And adjusting the memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value.
The disclosed embodiments also provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs the steps of the memory allocation method described in the above method embodiments. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, and instructions included in the program code may be used to perform the steps of the memory allocation method described in the foregoing method embodiments, and specifically reference may be made to the foregoing method embodiments, which are not described herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art may, within the technical scope disclosed in the present disclosure, still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A memory allocation method, comprising:
acquiring a task to be executed, and determining a target task parameter from task parameters of the task to be executed, wherein the target task parameter is a task parameter related to the memory amount occupied by the task to be executed;
determining a target prediction model corresponding to the task to be executed;
inputting the parameter value of the target task parameter into the target prediction model to obtain a predicted memory value;
and adjusting the memory limit value of the operation container corresponding to the task to be executed based on the predicted memory value.
2. The method of claim 1, wherein the determining the target prediction model corresponding to the task to be performed comprises:
determining a historical task matched with the task to be executed;
determining task data of the historical task, wherein the task data comprises a memory occupation threshold value and target task parameters of the historical task;
constructing a training sample set corresponding to the original prediction model based on the task data;
and training the original prediction model according to the training sample set to obtain the target prediction model.
3. The method of claim 2, wherein the determining task data for the historical task comprises:
setting a memory monitoring process based on the historical task;
and controlling the memory monitoring process, collecting memory occupation data of the historical task according to a preset time interval, and determining a memory occupation threshold value in the memory occupation data.
4. A method according to claim 3, characterized in that the method further comprises:
after a memory occupation threshold value is determined in the memory occupation data, determining the running state of the memory monitoring process;
determining the operation time length of the memory monitoring process in a target state based on the operation state, wherein the target state is used for indicating a state that the memory monitoring process does not execute the action of collecting memory occupied data;
and under the condition that the running time exceeds a time threshold, determining that the confidence of the memory occupation threshold does not meet a confidence condition.
5. The method of claim 1, wherein when the task to be performed is a task for transcoding video to be processed, the target task parameters include at least one of: the method comprises the steps of video type of a video to be processed, target code stream definition, a coding mode of a target code stream, a dynamic range of the target code stream and pixel information of the video to be processed.
6. The method of claim 1, wherein adjusting the memory limit value of the running container corresponding to the task to be executed based on the predicted memory value comprises:
determining a task priority of the task to be executed, and determining a preset multiplier matched with the task priority;
and calculating the product of the predicted memory value and the preset multiplier to obtain the memory limit value.
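Claim 6 is a one-line computation: scale the predicted memory by a priority-dependent multiplier. The priority names and factors below are illustrative assumptions (the patent only requires that a preset multiplier be matched to the priority):

```python
# Sketch of claim 6: memory limit = predicted memory * preset multiplier
# chosen by task priority (higher priority gets more headroom).
PRIORITY_MULTIPLIER = {"high": 1.5, "normal": 1.2, "low": 1.1}

def memory_limit_mb(predicted_mb, priority):
    return predicted_mb * PRIORITY_MULTIPLIER[priority]

print(memory_limit_mb(1000, "high"))  # 1500.0
```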
7. The method of claim 1, wherein the method further comprises:
after adjusting the memory limit value of the running container corresponding to the task to be executed, monitoring the execution state of the task to be executed;
and when it is determined, based on the execution state, that the task to be executed has failed, adjusting the memory limit value of the running container according to a preset proportion and executing the task to be executed in the adjusted running container, until the task to be executed is executed successfully.
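The retry behavior of claim 7 can be sketched as a loop that grows the container's memory limit by a preset proportion until the task succeeds. `run_task`, the 1.5× scale, and the retry cap are illustrative assumptions standing in for the real container launch and the patent's unspecified proportion:

```python
# Sketch of claim 7: on failure, scale the container memory limit by a
# preset proportion and re-run the task until it succeeds.
def run_until_success(run_task, initial_limit_mb, scale=1.5, max_retries=5):
    limit = initial_limit_mb
    for _ in range(max_retries + 1):
        if run_task(limit):
            return limit            # task succeeded under this limit
        limit = limit * scale       # adjust by the preset proportion
    raise RuntimeError("task kept failing after retries")

# Toy task that succeeds only with at least 900 MB available.
final = run_until_success(lambda mb: mb >= 900, initial_limit_mb=500)
print(final)  # 1125.0  (500 -> 750.0 -> 1125.0)
```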
8. A memory allocation apparatus, comprising:
a first determining unit, configured to acquire a task to be executed and determine a target task parameter from task parameters of the task to be executed, wherein the target task parameter is a task parameter related to the amount of memory occupied by the task to be executed;
a second determining unit, configured to determine a target prediction model corresponding to the task to be executed;
a prediction unit, configured to input a parameter value of the target task parameter into the target prediction model to obtain a predicted memory value;
and an adjusting unit, configured to adjust a memory limit value of a running container corresponding to the task to be executed based on the predicted memory value.
9. A computer device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the memory allocation method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the memory allocation method according to any one of claims 1 to 7.
CN202310389001.9A 2023-04-12 2023-04-12 Memory allocation method and device, computer equipment and storage medium Pending CN116610442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310389001.9A CN116610442A (en) 2023-04-12 2023-04-12 Memory allocation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116610442A true CN116610442A (en) 2023-08-18

Family

ID=87675426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310389001.9A Pending CN116610442A (en) 2023-04-12 2023-04-12 Memory allocation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116610442A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination