CN116991563B - Queue generating method and device supporting rapid sandbox construction - Google Patents

Queue generating method and device supporting rapid sandbox construction

Info

Publication number: CN116991563B
Application number: CN202311269953.3A
Authority: CN (China)
Prior art keywords: task, queue, front-end processor, sandbox
Legal status: Active (granted)
Other versions: CN116991563A (Chinese-language publication)
Inventors: 周天舒 (Zhou Tianshu), 文君 (Wen Jun), 倪鸿仪 (Ni Hongyi), 王宝琛 (Wang Baochen), 李劲松 (Li Jinsong), 田雨 (Tian Yu)
Original and current assignee: Zhejiang Lab
Application filed by Zhejiang Lab; priority to CN202311269953.3A
Publication of CN116991563A, application granted, publication of CN116991563B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/52 Monitoring users, programs or devices to maintain the integrity of platforms during program execution, e.g. stack integrity; Preventing unwanted data erasure; Buffer overflow
    • G06F 21/53 Monitoring users, programs or devices to maintain the integrity of platforms during program execution by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Software Systems
  • Computer Security & Cryptography
  • General Engineering & Computer Science
  • General Physics & Mathematics
  • Physics & Mathematics
  • Computer Hardware Design
  • Health & Medical Sciences
  • Bioethics
  • General Health & Medical Sciences
  • Medical Informatics
  • Databases & Information Systems
  • Management, Administration, Business Operations System, And Electronic Commerce

Abstract

The invention discloses a queue generating method and device supporting rapid sandbox construction. The method comprises the following steps: acquiring sandbox creation data and classifying it by time period; predicting, based on the sandbox creation data, the number of sandboxes each front-end processor needs to create in the next time period using a time series; creating, for each front-end processor, queues whose length equals the predicted number of sandboxes; distributing tasks to each queue according to a first strategy; distributing the tasks in the queues to each front-end processor's task queue according to a second strategy for reverse-order creation; and constructing sandboxes in layers from the base sandbox image for the tasks in each front-end processor's task queue. The method solves the problems of a complex sandbox set-up process, long user waiting times, and the resource waste caused by empty sandboxes.

Description

Queue generating method and device supporting rapid sandbox construction
Technical Field
The invention relates to the technical field of electric digital data processing, and in particular to a queue generating method and device supporting rapid sandbox construction.
Background
With the development of information science and technology, privacy protection has become an increasingly widespread concern, especially in traditionally security-sensitive fields such as medicine and finance.
Sandbox technology is a means of privacy protection: by strictly limiting the use of system resources through security policies, it achieves security isolation of programs and user data. The execution environment provided by the sandbox manages system resources and permissions, so the application runs inside the sandbox and every access by the application is first checked by the sandbox against the security policy, producing an isolated execution effect relative to the system and effectively protecting system security. However, because the steps required to set up a sandbox are complicated, users wait a long time, and empty sandboxes waste resources, so users cannot make maximal use of resources when using sandboxes.
Chinese patent document CN116010941A discloses a system and method for constructing a multi-center medical queue based on sandboxes. The system comprises a central machine and a plurality of front-end processors; the central machine, deployed in the cloud, receives user requests and performs management and control; each front-end processor is deployed inside a medical institution and connected to its medical information system to analyze, query, and compute on user requests. Although that scheme uses sandboxes to isolate different users to improve security, performs queue statistics and computation on medical privacy data entirely inside the sandbox to prevent data leakage, and limits the sandbox's available resources so that the hardware consumption of one user's task does not affect other users' tasks, it discloses no means of solving the problems of a complex sandbox set-up process, long user waiting times, and resource waste caused by empty sandboxes.
Chinese patent document CN113419831A discloses a sandbox task scheduling method and system. The method comprises: step A: acquiring task sources and priority factors; step B: calculating a priority score for each task; step C: distributing tasks to corresponding candidate queues according to task source and setting the maximum task number of each candidate queue; step D: increasing the priority according to the candidate queue's characteristics; step E: from the sandbox resources meeting the task's requirements, selecting the one with the earliest completion time for the task and establishing a mapping to the task. By setting priorities and gains, this scheme can weigh task priority against waiting time and, when a candidate queue holds many tasks, process its tasks preferentially, dynamically adjusting task priority, preventing excessive waiting, and improving the quality of service of sandbox task scheduling. However, it has the disadvantage that the task scores lack objectivity, which may mean no reduction in user waiting time among multiple tasks of the same priority. Likewise, it discloses no solution to the problems of a complex sandbox set-up process, long user waiting times, and resource waste caused by empty sandboxes.
It can be seen that a method supporting rapid sandbox construction is currently needed to solve the problems of a complex sandbox set-up process, long user waiting times, and resource waste caused by empty sandboxes.
Disclosure of Invention
To solve the above problems, the invention provides a queue generating method and device supporting rapid sandbox construction.
In a first aspect, an embodiment of the present invention provides a queue generating method supporting rapid sandbox construction, comprising the following steps: acquiring sandbox creation data and classifying it by time period; predicting, based on the sandbox creation data, the number of sandboxes each front-end processor needs to create in the next time period using a time series; and creating, for each front-end processor, queues whose length equals the predicted number of sandboxes, distributing tasks to each queue according to a first strategy, and distributing the tasks in the queues to each front-end processor's task queue according to a second strategy for reverse-order creation.

Distributing tasks to each queue according to the first strategy comprises: arranging all tasks in descending order of data-set size to await allocation; comparing, in that order, the task completion time after adding each task to each queue, and assigning the task to the queue with the shortest resulting completion time. Distributing the tasks in the queues to each front-end processor's task queue for reverse-order creation according to the second strategy comprises: taking tasks out in the order in which they appear in each queue; comparing the increase in total execution time of all front-end processor task queues after adding each task to each front-end processor, and assigning the task to the task queue with the smallest increase; and creating tasks in the reverse order of each front-end processor's task queue.

Sandboxes are then constructed in layers from the base sandbox image for the tasks in each front-end processor's task queue.
The invention uses a multi-queue allocation method based on a shortest-waiting-time strategy to obtain, for the same number of tasks, the sandbox construction order with the shortest actual user waiting time in a multi-queue scenario.
Preferably, predicting, based on the sandbox creation data, the number of sandboxes each front-end processor needs to create in the next time period using a time series comprises:

using the sandbox creation data, a dual-period Holt-Winters model with the hour as its unit predicts the number of sandboxes each front-end processor needs to create in the next hour. The dual-period Holt-Winters model splits the seasonal component of the original Holt-Winters model into a first periodic component p_i and a second periodic component w_i, with γ1 as the smoothing coefficient of the first periodic component and γ2 as the smoothing coefficient of the second.
The improved algorithm based on Holt-Winters exponential smoothing effectively improves the accuracy of sandbox-count prediction in a dual-period environment; creating sandboxes in advance from the prediction results greatly reduces the user's perceived waiting time.
Further, the first periodic component takes a month as a period, and the second periodic component takes a week as a period.
Further, if within a time period the ratio of the actual number of created sandboxes Xt' to the predicted number Xt falls outside a specific range, the parameter values of the dual-period Holt-Winters model are readjusted. The specific range is k < Xt'/Xt < 1/k, where k is a preset error parameter with value in (0, 1).
Preferably, the task completion time T after adding a task i to a queue equals the queue task processing time T_c plus the completion time T_w of the tasks ahead of it in the queue: T = T_w + T_c = Σ_{j=1}^{n} (α + s_j/k_1 + o_j·s_j/k_2) + (α + s_i/k_1 + o_i·s_i/k_2), where n is the current queue length, i denotes the task currently to be added, s_i is the data-set size of task i, k_1 and k_2 are the data-set transmission bandwidths, and α is the sandbox's environment deployment time; o_i = 0 if the current data set and the front-end processor belong to the same institution, otherwise o_i = 1.
Further, the queue task processing time is T_c = α + s_i/k_1 + o_i·s_i/k_2, and the completion time of the tasks ahead in the queue is T_w = Σ_{j=1}^{n} (α + s_j/k_1 + o_j·s_j/k_2).
Preferably, the total time for executing all front-end processor task queues after adding a task is the sum over all queues of Σ_{i=1}^{n} (n − i + 1)(α + s_i/k_1 + o_i·s_i/k_2), where n is the current queue length, i denotes the i-th task, s_i is the data-set size of task i, k_1 and k_2 are the data-set transmission bandwidths, and α is the sandbox's environment deployment time; o_i = 0 if the current data set and the front-end processor belong to the same institution, otherwise o_i = 1.
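As an illustration only (this sketch is not part of the patent; the function names and the simple additive time model are assumptions), the second strategy's choice of front-end processor, picking the queue whose total execution time grows least when the task is appended, might look like:

```python
def total_time(queue):
    """Sum of completion times of the tasks already in one front-end
    processor task queue (each task waits for all earlier ones)."""
    total, elapsed = 0.0, 0.0
    for t in queue:
        elapsed += t       # this task finishes after all earlier ones
        total += elapsed   # accumulate its completion time
    return total

def pick_frontend(task_t, fep_queues):
    """Second strategy sketch: return the index of the front-end processor
    queue whose total time increases least when task_t is appended."""
    def increase(q):
        return total_time(q + [task_t]) - total_time(q)
    return min(range(len(fep_queues)), key=lambda j: increase(fep_queues[j]))
```

A shorter queue tends to win because appending a task there adds fewer waiting terms.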
In a second aspect, an embodiment of the present invention provides a queue generating apparatus supporting rapid sandbox construction, the apparatus comprising: a sandbox monitoring module for acquiring sandbox creation data and classifying it by time period; a quantity prediction module for predicting, based on the sandbox creation data, the number of sandboxes each front-end processor needs to create in the next time period using a time series; and a queue allocation module for creating, for each front-end processor, queues whose length equals the predicted number of sandboxes, distributing tasks to each queue according to a first strategy, and distributing the tasks in the queues to each front-end processor's task queue according to a second strategy for reverse-order creation.

Distributing tasks to each queue according to the first strategy comprises: arranging all tasks in descending order of data-set size to await allocation; comparing, in that order, the task completion time after adding each task to each queue, and assigning the task to be added to the queue with the shortest resulting completion time. Distributing the tasks in the queues to each front-end processor's task queue for reverse-order creation according to the second strategy comprises: taking tasks out in the order in which they appear in each queue; comparing the increase in total execution time of all front-end processor task queues after adding each task to each front-end processor, and assigning the task to be added to the task queue with the smallest increase; and creating tasks in the reverse order of each front-end processor's task queue.

The apparatus further comprises a sandbox construction module for constructing sandboxes in layers from the base sandbox image for the tasks in each front-end processor's task queue.
In a third aspect, an embodiment of the present invention provides a computing device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the above queue generating method supporting rapid sandbox construction.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium having a computer program stored thereon, where the computer program, when executed, implements the steps of the above queue generating method supporting rapid sandbox construction.
The technical scheme of the invention has the beneficial effects that:
1. Reduced complexity of the sandbox building process: predicting demand per time period and generating queues simplifies sandbox building and improves building efficiency;
2. Reduced user waiting time: by reasonably distributing tasks and making full use of resources, sandbox set-up is accelerated and user waiting time is reduced;
3. Optimized resource utilization: by monitoring and adjusting the number of queues in real time, the resource waste caused by empty sandboxes is avoided and resource optimization is achieved.
By implementing the technical scheme of the invention, the problems in the prior art of a complex sandbox set-up process, long user waiting times, and resource waste caused by empty sandboxes can be solved; the scheme is especially suitable for scenarios with high requirements on customer privacy data, such as the medical and financial industries. For example, in a medical research scenario, when analyzing patient data, since privacy data is not kept locally, a sandbox is generated on the front-end processor for each analysis, and the slow sandbox generation process leads to inefficiency. Through the prediction and pre-construction steps of the invention, sandboxes can be built in advance according to historical research patterns, so a sandbox is generated almost instantaneously with only a slight delay; waiting time is virtually eliminated and research efficiency is greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow diagram of a queue generating method supporting rapid sandbox construction provided by an embodiment;
FIG. 2 is a flow chart of notifying the prediction model to adjust parameters according to an embodiment;
FIG. 3 is a schematic flow diagram of queue allocation provided by an embodiment;
FIG. 4 is a schematic flow diagram of hierarchical sandbox construction provided by an embodiment;
FIG. 5 is a graph comparing the effects of rapid sandbox construction provided by an embodiment;
FIG. 6 is a block diagram of a queue generating device supporting rapid sandbox construction according to an embodiment;
FIG. 7 is a schematic diagram of a computing device provided by an embodiment.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the detailed description is presented by way of example only and is not intended to limit the scope of the invention.
Fig. 1 is a schematic flow chart of a queue generating method supporting rapid sandbox construction according to an embodiment; as shown in fig. 1, the method includes:
s101, acquiring sandbox creation data and classifying the sandbox creation data according to time periods.
Data from historical sandbox creation is collected, classified, and sorted by time period, and changes in the current sandbox demand are monitored; if, within a time period, the ratio of the actual sandbox creation count Xt' to the predicted count Xt falls outside a specific range, the prediction model is notified to adjust its parameters and re-predict.
In an embodiment, the data is counted once an hour, the counted data including: the number of sandboxes created on each front-end processor, the business tag of each sandbox, the data set size of each sandbox, and the time it takes for each sandbox to actually be created.
The number of sandboxes created on each front-end processor is used to predict the number required in the next time period; each sandbox's business tag is used to obtain the dependencies required by the corresponding business when the sandbox is built; each sandbox's data-set size is used for task allocation after queue generation; and the time actually consumed creating each sandbox is used for effect comparison in this embodiment.
As shown in fig. 2, when, within a time period, the ratio of the actual sandbox creation count Xt' to the predicted count Xt falls outside the specific range (k, 1/k), that is, when k < Xt'/Xt < 1/k no longer holds, the prediction model is notified to adjust its parameters and re-predict. Here k is a preset error parameter with value in (0, 1); the closer k is to 1, the more sensitive the check is to error.
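The (k, 1/k) error check described above can be sketched as follows; this is an illustrative sketch, not part of the patent text, and the function name and default value of k are assumptions:

```python
def needs_reprediction(actual, predicted, k=0.8):
    """True when actual/predicted falls outside the tolerated range (k, 1/k),
    with 0 < k < 1; a value of k closer to 1 makes the check stricter."""
    if predicted <= 0:
        return True  # degenerate prediction: always trigger readjustment
    ratio = actual / predicted
    return not (k < ratio < 1.0 / k)
```

For example, with k = 0.9, an actual count 20% above the prediction (ratio 1.2 > 1/0.9) triggers readjustment, while an exact match does not.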
If the error does not fall within the expected range, the prediction model is notified to adjust its parameters and re-predict. By comparison with historical data, it is judged whether the first periodic component (month), the second periodic component (week), or the trend component is inaccurate; the corresponding component parameter is adjusted and prediction is repeated. When the adjustment value reaches its limit, an alarm is sent to notify the operator.
S102, predicting, based on the sandbox creation data, the number of sandboxes each front-end processor needs to create in the next time period using a time series.
Using the sandbox creation data, a dual-period Holt-Winters model with the hour as its unit predicts the number of sandboxes each front-end processor needs to create in the next hour. The dual-period Holt-Winters model splits the seasonal component of the original Holt-Winters model into a first periodic component p_i and a second periodic component w_i, with γ1 as the smoothing coefficient of the first periodic component and γ2 as the smoothing coefficient of the second.
In this embodiment, a time-series analysis method is applied to the sandbox construction data of the past period. This is because sandbox creation data tends to exhibit strong trend and seasonality, and the seasonality manifests as multiple cycles, with strong periodic associations with both the month and the week.
Therefore prediction is performed with an improved Holt-Winters exponential smoothing method. The improvement mainly targets the multi-period situation: on the basis of the original Holt-Winters method, a seasonal component w_i is added alongside the original seasonal component p_i, representing monthly and weekly periodicity respectively.
The original Holt-Winters exponential smoothing formulas comprise:

level component: s_i = α(x_i − p_{i−k}) + (1 − α)(s_{i−1} + t_{i−1}); overall trend component: t_i = β(s_i − s_{i−1}) + (1 − β)t_{i−1}; seasonal component: p_i = γ(x_i − s_i) + (1 − γ)p_{i−k}; predicted value: x_{i+h} = s_i + h·t_i + p_{i−k+h}.
All prediction components in the improved dual-period Holt-Winters model are updated by smoothing adjustment; the model has four parts in total.

The first part is the level component: s_i = α(x_i − p_{i−k1} − w_{i−k2}) + (1 − α)(s_{i−1} + t_{i−1}).

s_i represents the level of the data at the i-th time node. Here α is the level smoothing coefficient, x_i is the number of sandboxes created at the i-th time node, p_{i−k1} is the first-period component one period earlier (in this example, the sandbox construction data of one month ago), w_{i−k2} is the second-period component one period earlier (the data of one week ago), k1 is the first period length (30 days), k2 is the second period length (7 days), s_{i−1} is the level value of the previous time period, and t_{i−1} is the overall trend of the previous time period.

The second part is the overall trend component: t_i = β(s_i − s_{i−1}) + (1 − β)t_{i−1}.

t_i, the overall trend in the current time period, is a smoothing adjustment of the previous trend t_{i−1}, where β is the trend coefficient.

The third part comprises the two periodic components. First period component: p_i = γ1(x_i − s_i − w_{i−k2}) + (1 − γ1)p_{i−k1}. Second period component: w_i = γ2(x_i − s_i − p_{i−k1}) + (1 − γ2)w_{i−k2}.

The first period component p_i and the second period component w_i are adjustments based on the periodic data, where γ1, k1 and γ2, k2 are the smoothing coefficients and period lengths of the first and second period components respectively.

The fourth part is the predicted number of sandboxes to be created after h time units: x_{i+h} = s_i + h·t_i + p_{i−k1+h} + w_{i−k2+h}, where p_{i−k1+h} is the periodic trend component of the first period and w_{i−k2+h} is that of the second period.
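The four update equations above can be sketched as one recursive step. This is a hedged illustration (function names, list-based state, and the index arithmetic are assumptions, not the patent's implementation):

```python
def holt_winters_dual_step(x_i, s_prev, t_prev, p_hist, w_hist,
                           k1, k2, alpha, beta, gamma1, gamma2):
    """One additive update of the dual-period Holt-Winters recursion.
    p_hist and w_hist hold past period components, newest last
    (len(p_hist) >= k1, len(w_hist) >= k2)."""
    p_old = p_hist[-k1]   # first-period component one period (k1) ago
    w_old = w_hist[-k2]   # second-period component one period (k2) ago
    s_i = alpha * (x_i - p_old - w_old) + (1 - alpha) * (s_prev + t_prev)
    t_i = beta * (s_i - s_prev) + (1 - beta) * t_prev
    p_i = gamma1 * (x_i - s_i - w_old) + (1 - gamma1) * p_old
    w_i = gamma2 * (x_i - s_i - p_old) + (1 - gamma2) * w_old
    return s_i, t_i, p_i, w_i

def holt_winters_dual_forecast(s_i, t_i, p_hist, w_hist, k1, k2, h):
    """h-step-ahead forecast x_{i+h} = s_i + h*t_i + p_{i-k1+h} + w_{i-k2+h}."""
    return (s_i + h * t_i
            + p_hist[(h - 1) % k1 - k1]    # p_{i-k1+h}, wrapping within one period
            + w_hist[(h - 1) % k2 - k2])   # w_{i-k2+h}
```

Each component is a convex combination of the new observation's contribution and its own previous value, which is exactly the smoothing-adjustment structure described in the four parts.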
In this embodiment, the sandbox creation scene exhibits two significant periodic regularities in the number of sandboxes created, fluctuating both by month and by week, so the first period is one month and the second period is one week.
According to historical data, appropriate trend coefficients α, β and period coefficients γ1, γ2 are selected and fitted for different institutions and business scenarios, substituted into the modified dual-period Holt-Winters exponential smoothing formulas, and used for prediction.
When S101 notifies the prediction model to adjust its parameters and re-predict, that is, when the ratio of predicted to actual counts exceeds the error bound, the business pattern has changed and the prediction coefficients must be readjusted. The adjustment is made dynamically according to the difference between each predicted component and its actual value.
Taking the first periodic component p as an example (the monthly component in this embodiment): the prediction step yields the predicted value x_{i+h} and the predicted first-period component p_i; substituting the actual sandbox creation value x into the formula for p_i gives the actual component p_i'. According to the relative magnitude of p_i and p_i', the first-period smoothing coefficient γ1 is increased or decreased.
In addition, since Holt-Winters exponential smoothing is itself a trend prediction, over-prediction may occur; when an adjusted value exceeds a reasonable range, for example a predicted sandbox count below zero, an alarm is sent to notify researchers.
S103, creating, for each front-end processor, queues whose length equals the predicted number of sandboxes, distributing tasks to each queue according to a first strategy, and distributing the tasks in the queues to each front-end processor's task queue according to a second strategy for reverse-order creation.
Distributing tasks to each queue according to the first strategy comprises:

arranging all tasks in descending order of data-set size to await allocation; comparing, in that order, the task completion time after adding each task to each queue, and assigning the task to the queue with the shortest resulting completion time.

Distributing the tasks in the queues to each front-end processor's task queue for reverse-order creation according to the second strategy comprises:

taking tasks out in the order in which they appear in each queue; comparing the increase in total execution time of all front-end processor task queues after adding each task to each front-end processor, and assigning the task to the task queue with the smallest increase; and creating tasks in the reverse order of each front-end processor's task queue.
As shown in fig. 3, the process of queue allocation includes:
1. Sort the tasks in descending order of duration. In this embodiment, all tasks are arranged in descending order of data-set size to await allocation, and all front-end processors are ordered by institution to await task assignment. During allocation the following parameters are obtained from the front-end processor: the size s of the data set to be extracted from the institution, the institution to which the data set belongs, the front-end processor's dedicated-line network bandwidth k_1, the internal network bandwidth k_2, and the institution corresponding to the front-end processor.
2. For each front-end processor, queues are created whose length equals the number of sandboxes predicted in S102 to be required for the next time period. The task completion time after adding a task to each queue is compared in order, and the task is assigned to the queue with the shortest resulting completion time.
The task completion time after adding a task is calculated as follows. The i-th task's processing time t_i, i.e., the current sandbox deployment time, includes the time to acquire the data set from the institution on each deployment, which is proportional to the data-set size s, inversely proportional to the bandwidth, and related to the institution of the front-end processor.

When the front-end processor and the data set belong to the same institution, the data set is transmitted once over the dedicated line, and the i-th task's processing time is t_i = α + s_i/k_1, where the environment deployment time of the sandbox is the constant α, s_i is the data-set size of the i-th task, and k_1 is the dedicated-line bandwidth.

When the front-end processor and the data set do not belong to the same institution, the data set is additionally transmitted once over the internal line, and the i-th task's processing time is t_i = α + s_i/k_1 + o_i·s_i/k_2, where k_2 is the internal-line bandwidth; o_i = 0 if the current data set and the front-end processor belong to the same institution, otherwise o_i = 1.
To ensure that as many tasks as possible are completed first, the waiting time of each task must also be considered: task completion time = waiting time + processing time, where the waiting time equals the completion time of the preceding tasks.
In other words, the processing time of each task is simultaneously accumulated into the completion time of every subsequent task. When the queue length is $i$, the queue task processing time is: $t_i = \alpha + \frac{s_i}{k_1} + o_i\frac{s_i}{k_2}$.
The completion time of all preceding tasks in the queue is: $T_{i-1} = \sum_{j=1}^{i-1}\left(\alpha + \frac{s_j}{k_1} + o_j\frac{s_j}{k_2}\right)$.
The task completion time $T_i$ after adding a task to a queue equals the queue task processing time $t_i$ plus the completion time $T_{i-1}$ of the preceding tasks in the queue. The completion time $T$ of the entire queue of tasks is: $T = \sum_{i=1}^{n}(n-i+1)\left(\alpha + \frac{s_i}{k_1} + o_i\frac{s_i}{k_2}\right)$,
where $n$ represents the current queue length, $i$ represents the $i$-th task currently to be added, $s_i$ represents the data set size of the current task, $k_1$ and $k_2$ represent the bandwidths of data set transmission, $\alpha$ is the environment deployment time of the sandbox, and $o_i = 0$ if the current data set and the front-end processor belong to the same institution, otherwise $o_i = 1$.
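The whole-queue completion time can be sketched with the hypothetical helper below, which accumulates each task's processing time into the completion times of itself and all subsequent tasks:

```python
def queue_total_completion_time(times):
    """Total completion time of a queue of processing times.

    With 1-based index i, task i contributes (n - i + 1) * t_i: once for its
    own completion and once for each of the n - i tasks queued behind it.
    """
    n = len(times)
    # enumerate is 0-based, so the coefficient n - i equals (n - i_1based + 1)
    return sum((n - i) * t for i, t in enumerate(times))
```

For processing times [1, 2, 3] the per-task completion times are 1, 3 and 6, so the total is 10.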
In order to maximize utilization of all build queues, the total times of all build queues should be kept as close as possible. The tasks are therefore sorted in descending order of data set size, and the largest task is preferentially allocated to the queue with the shortest total time consumption; when all tasks have been allocated, the total time consumptions of the build queues are closest.
For example, as shown in fig. 3, when a task needs to be added to queue A or queue B: if adding the task to queue A would increase the total time consumption by 4 times the task's processing time, while adding it to queue B would increase it by only 3 times, the task is added to queue B.
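The first policy (descending allocation to the queue whose total time grows the least) can be sketched as follows. The $(\mathrm{len}+1)\times t$ marginal cost is an assumption consistent with the worked example above, in which a newly added (smaller) task runs first and so delays every task already in the queue:

```python
def allocate(task_times, num_queues):
    """First policy sketch: largest tasks first, each assigned to the queue
    whose total completion time increases the least."""
    queues = [[] for _ in range(num_queues)]
    totals = [0.0] * num_queues
    for t in sorted(task_times, reverse=True):
        # assumed marginal cost: the new task finishes after t and delays
        # the len(q) tasks already in queue q, adding (len(q) + 1) * t
        best = min(range(num_queues),
                   key=lambda q: totals[q] + (len(queues[q]) + 1) * t)
        totals[best] += (len(queues[best]) + 1) * t
        queues[best].append(t)
    return queues, totals
```

On the toy instance `[5, 4, 3, 2, 1]` with two queues, the greedy pass yields queue totals of 12 and 10, illustrating how allocation keeps the queue totals close.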
3. The tasks in the queues are distributed to the task queues of the front-end processors for reverse-order creation.
As can be seen from the formula for the completion time $T$ of the entire queue of tasks, the smaller $i$ is, the larger the cumulative waiting coefficient $(n-i+1)$ is. To minimize the overall completion time, the tasks are therefore arranged in ascending order of data set size during execution, and the smallest tasks are processed first, reducing the waiting time of subsequent tasks.
That is, when assigning tasks, larger tasks are assigned first to keep the time consumption of the queues even, thereby maximizing utilization of the build queues; when executing tasks, the assigned tasks are executed from smallest to largest, so that the total waiting time is shortest.
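That smallest-first execution minimizes the total completion time can be checked by brute force on a small instance; this is a standalone demonstration of the ordering argument, not part of the patented method itself:

```python
from itertools import permutations

def total_completion(times):
    # sum of completion times: task at 0-based position i is waited on by
    # itself and the len(times) - 1 - i tasks behind it
    n = len(times)
    return sum((n - i) * t for i, t in enumerate(times))

# exhaustively search all execution orders of four distinct processing times:
# the ascending (shortest-processing-time-first) order is the unique optimum
best = min(permutations((3, 1, 4, 2)), key=total_completion)
```

Here `best` is `(1, 2, 3, 4)`, with total completion time 20; every other ordering is strictly worse.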
In this embodiment, tasks are first taken out in the order in which they appear in their respective queues. Next, each extracted task is distributed: during distribution, the task queues of all front-end processors are evaluated, and the total time added to each front-end processor's task queue by adding the task is compared. Finally, the task is assigned to the front-end processor task queue with the smallest increase in total time.
Specifically, for a queue of length $n$, the total waiting time of the queue tasks is: $W = \sum_{i=1}^{n}(n-i)\,t_i$.
For the above formula, let $i = n-k$; this corresponds to reversing the sequence, and the reversed sequence is called the k-sequence. For the total waiting time of the k-sequence: $W = \sum_{k=0}^{n-1} k\,t_{n-k}$.
Therefore, inserting a task at the front of the original queue corresponds to appending a task at the end of the k-sequence, and the total waiting time of the queue after the addition is: $W' = W + n\left(\alpha + \frac{s_i}{k_1} + o_i\frac{s_i}{k_2}\right)$.
This accumulated value, plus the total completion time of the tasks already in the queue, gives the new total completion time after allocation.
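The incremental cost of the front insertion described above can be sketched and checked numerically. `front_insert_delta` is a hypothetical helper using the closed form $(n+1)\,t_{\text{new}}$ for the increase in total completion time (the inserted task finishes once and delays each of the $n$ existing tasks once):

```python
def completion_total(times):
    # total completion time: each task's processing time counts once for
    # itself and once for every task queued behind it
    n = len(times)
    return sum((n - i) * t for i, t in enumerate(times))

def front_insert_delta(times, t_new):
    # assumed closed form: the inserted task completes after t_new and delays
    # all len(times) existing tasks by t_new, adding (n + 1) * t_new in total
    return (len(times) + 1) * t_new

queue = [2.0, 3.0, 5.0]
assert completion_total([1.5] + queue) - completion_total(queue) == front_insert_delta(queue, 1.5)
```

This is why the same task costs different amounts in different queues: the delta grows with the number of tasks already queued.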
It should be noted that, due to the institution coefficient $o_i$ and the differing numbers of existing tasks in each queue, the same task does not consume the same time in different queues. After all tasks have been added, each front-end processor executes its queue in reverse order, and the overall completion time of all tasks is shortest.
S104, using the base sandbox image, sandboxes are constructed in layers for the tasks in the task queue of each front-end processor.
In this embodiment, as shown in fig. 4, a local image repository is first constructed for the tasks in each front-end processor's task queue, and the common dependencies are packaged into a base image stored in the local image repository, avoiding downloading the base dependencies from the Internet and greatly increasing build speed.
Next, the dependencies required by the corresponding service are acquired based on the service tag of the sandbox in step S101. Then the automatically acquired service dependencies and the dependencies manually filled in by the user are merged and checked, and after duplicate dependencies are removed, a simplified sandbox build file is generated.
It is then checked whether each required dependency exists in the local repository; if not, it is pulled from the Internet the first time and obtained locally thereafter.
Finally, the sandbox is constructed according to the sandbox build file. After construction, the user can still enter the sandbox and manually import other dependencies, realizing personalized sandbox assembly.
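A minimal sketch of the build-file generation step is shown below, assuming a Dockerfile-style output and pip-installable dependencies (both assumptions for illustration; the patent does not name a container or package format):

```python
def build_file(base_image: str, service_deps, user_deps) -> str:
    """Merge auto-detected service dependencies with user-supplied ones,
    drop duplicates, and emit a minimal build file on top of the base image
    held in the local repository."""
    deps = sorted(set(service_deps) | set(user_deps))  # dedupe, stable order
    lines = [f"FROM {base_image}"]
    if deps:
        lines.append("RUN pip install " + " ".join(deps))
    return "\n".join(lines)
```

For instance, `build_file("local-registry/py-base:1.0", ["numpy", "pandas"], ["pandas", "scikit-learn"])` lists `pandas` only once, mirroring the duplicate-removal step described above.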
In this embodiment, sandbox creation data from 2021.03 to 2022.03 is used. As shown in fig. 5, the closer the difference between the predicted and actual numbers of created sandboxes is to 0, the more accurate the prediction.
It can be seen that the bi-periodic prediction results are significantly better than the single-periodic and non-periodic predictions, with prediction variances of 798.22, 2631.44 and 51374.50, respectively.
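A minimal additive double-seasonal Holt-Winters sketch is shown below. The additive form, the zero initialization, and all smoothing coefficients are assumptions for illustration; the patent's model uses hour-granularity data with month and week periods, while this sketch accepts arbitrary small periods:

```python
def double_seasonal_forecast(y, s1, s2, alpha=0.3, beta=0.05,
                             gamma1=0.2, gamma2=0.2):
    """One-step-ahead forecast with level, trend and two additive seasonal
    components of periods s1 and s2, smoothed by gamma1 and gamma2."""
    level, trend = y[0], 0.0
    seas1, seas2 = [0.0] * s1, [0.0] * s2
    for t, obs in enumerate(y):
        prev = level
        # update level against both deseasonalized signals
        level = alpha * (obs - seas1[t % s1] - seas2[t % s2]) \
            + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        # each seasonal component absorbs the residual the other leaves
        seas1[t % s1] = gamma1 * (obs - level - seas2[t % s2]) \
            + (1 - gamma1) * seas1[t % s1]
        seas2[t % s2] = gamma2 * (obs - level - seas1[t % s1]) \
            + (1 - gamma2) * seas2[t % s2]
    t = len(y)
    return level + trend + seas1[t % s1] + seas2[t % s2]
```

On a constant series the forecast reproduces the constant, a quick sanity check that the components do not drift.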
Compared with the original mode of creating sandboxes in task-arrival order, the method of first allocating by duration and then executing saves about 30% of the overall completion time according to calculation.
Based on the same inventive concept, an embodiment further provides a queue generating apparatus 600 supporting rapid sandbox construction, as shown in fig. 6, the apparatus including:
the sandbox monitoring module 610 is configured to obtain sandbox creation data and classify the sandbox creation data according to a time period.
The quantity prediction module 620 is configured to predict, based on the sandbox creation data and using time-series prediction, the number of sandboxes each front-end processor needs to create in the next time period.
The queue allocation module 630 is configured to create queues with a queue length equal to the number of the predicted sandboxes created for each front-end processor, allocate tasks to each queue according to a first policy, and allocate tasks in the queues to each front-end processor task queue according to a second policy for reverse order creation.
The task allocation to each queue according to the first strategy comprises the following steps: arranging all tasks in descending order according to the size of the data set to wait for distribution; and comparing the task completion time after adding one task into each queue according to the sequence, and distributing the task to the queue with the shortest task completion time.
The task in the queue is distributed to each front-end processor task queue for reverse order creation according to a second strategy, which comprises the following steps: taking out tasks according to the sequence of each task in each queue; comparing the total time added by executing all the front-end processor task queues after adding a task in each front-end processor, and distributing the task to the front-end processor task queue with the shortest total time increase; and creating tasks according to the reverse order of the task queues of each front-end processor.
The sandbox construction module 640 is configured to utilize the base sandbox image to hierarchically construct sandboxes for the tasks in the task queue of each front-end processor.
It should be noted that, when the queue generating device supporting rapid sandbox construction provided in the foregoing embodiment performs queue generation, the above division into functional modules is merely illustrative; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the terminal or server may be divided into different functional modules to complete all or part of the functions described above. In addition, the queue generating device and the queue generating method supporting rapid sandbox construction provided in the foregoing embodiments belong to the same concept; the detailed implementation process of the device is described in the method embodiments and is not repeated here.
Based on the same inventive concept, the embodiments also provide a computing device, as shown in fig. 7, which at the hardware level includes a processor, an internal bus, a network interface, memory, and storage, and may also include hardware required by other services. The processor reads the corresponding computer program from storage into memory and runs it to implement the queue generating method supporting rapid sandbox construction, the method comprising:
s101, acquiring sandbox creation data and classifying the sandbox creation data according to time periods.
S102, based on the sandbox creation data, predicting, using time-series prediction, the number of sandboxes each front-end processor needs to create in the next time period.
S103, creating queues with the same queue length as the number of the predicted sandboxes for each front-end processor, distributing tasks to each queue according to a first strategy, and distributing the tasks in the queues to each front-end processor task queue according to a second strategy for reverse order creation.
S104, constructing sandboxes for task layering in the task queues of each front-end processor by utilizing the basic sandbox mirror image.
The memory may be volatile memory at the near end, such as RAM, or nonvolatile memory such as ROM, FLASH, a floppy disk, or a mechanical hard disk, or remote cloud storage. The processor may be a central processing unit (CPU), microprocessor (MPU), digital signal processor (DSP), or field programmable gate array (FPGA); that is, the steps of the queue generating method supporting rapid sandbox construction may be implemented by these processors.
Based on the same inventive concept, an embodiment further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above queue generating method supporting rapid sandbox construction, including:
s101, acquiring sandbox creation data and classifying the sandbox creation data according to time periods.
S102, based on the sandbox creation data, predicting, using time-series prediction, the number of sandboxes each front-end processor needs to create in the next time period.
S103, creating queues with the same queue length as the number of the predicted sandboxes for each front-end processor, distributing tasks to each queue according to a first strategy, and distributing the tasks in the queues to each front-end processor task queue according to a second strategy for reverse order creation.
S104, constructing sandboxes for task layering in the task queues of each front-end processor by utilizing the basic sandbox mirror image.
The foregoing is merely a detailed description of preferred embodiments of the invention and is not intended to limit it; any changes, additions, substitutions and equivalents made within the scope of the invention are intended to be included within its scope.

Claims (8)

1. A method for generating a queue supporting rapid sandbox construction, said method comprising the steps of:
acquiring sandbox creation data and classifying the sandbox creation data according to time periods;
based on the sandbox creation data, predicting, using time-series prediction, the number of sandboxes required by each front-end processor in the next time period, including using the sandbox creation data to drive a bi-periodic Holt-Winters model in units of hours to obtain the number of sandboxes required by each front-end processor in the next hour, wherein the bi-periodic Holt-Winters model splits the seasonal component of the original Holt-Winters model into a first periodic component $p_t$ and a second periodic component $q_t$, with $\gamma_1$ as the smoothing coefficient of the first periodic component and $\gamma_2$ as the smoothing coefficient of the second periodic component, wherein the first periodic component takes a month as its period and the second periodic component takes a week as its period;
creating queues with the same queue length as the number of the predicted sandboxes for each front-end processor, distributing tasks to each queue according to a first strategy, distributing the tasks in the queues to each front-end processor task queue according to a second strategy for reverse order creation, wherein,
the task allocation to each queue according to the first strategy comprises the following steps: arranging all tasks in descending order according to the size of the data set to wait for distribution; comparing task completion time after adding a task in each queue according to the sequence, and distributing the task to the queue with the shortest task completion time;
the task in the queue is distributed to each front-end processor task queue for reverse order creation according to a second strategy, which comprises the following steps: taking out tasks according to the sequence of each task in each queue; comparing the total time added by executing all the front-end processor task queues after adding a task in each front-end processor, and distributing the task to the front-end processor task queue with the shortest total time increase; creating tasks according to the reverse order of the task queues of each front-end processor;
and constructing sandboxes for the task layers in the task queues of each front-end processor by using the basic sandbox mirror image.
2. The queue generating method supporting rapid sandbox construction of claim 1, wherein, if within a period of time the ratio of the actual number of sandboxes created, $R$, to the predicted number of sandboxes, $P$, exceeds a specific range, the values of the parameters in the bi-periodic Holt-Winters model are readjusted, wherein the specific range is: $1-k \le \frac{R}{P} \le 1+k$, where $k$ denotes a preset error parameter.
3. The queue generating method supporting rapid sandbox construction of claim 1, wherein the task completion time $T_i$ after adding a task to each queue equals the queue task processing time $t_i$ plus the completion time $T_{i-1}$ of the preceding tasks in the queue, and the completion time of the entire queue of tasks is: $T = \sum_{i=1}^{n}(n-i+1)\left(\alpha + \frac{s_i}{k_1} + o_i\frac{s_i}{k_2}\right)$,
wherein $n$ denotes the current queue length, $i$ denotes the $i$-th task currently to be added, $s_i$ denotes the data set size of the current task, $k_1$ denotes the bandwidth for transmitting the data set once when the front-end processor and the data set belong to the same institution, $k_2$ denotes the bandwidth of the internal transmission required when the front-end processor and the data set do not belong to the same institution, $\alpha$ is the environment deployment time of the sandbox, and $o_i = 0$ if the current data set and the front-end processor belong to the same institution, otherwise $o_i = 1$.
4. The queue generating method supporting rapid sandbox construction of claim 3, wherein the queue task processing time is $t_i = \alpha + \frac{s_i}{k_1} + o_i\frac{s_i}{k_2}$, and the completion time of the preceding tasks in the queue is $T_{i-1} = \sum_{j=1}^{i-1}\left(\alpha + \frac{s_j}{k_1} + o_j\frac{s_j}{k_2}\right)$.
5. The queue generating method supporting rapid sandbox construction of claim 1, wherein the total time added to all front-end processor task queues by adding a task is: $\Delta T = (n+1)\left(\alpha + \frac{s_i}{k_1} + o_i\frac{s_i}{k_2}\right)$,
wherein $n$ denotes the current queue length, $i$ denotes the $i$-th task currently to be added, $s_i$ denotes the data set size of the current task, $k_1$ denotes the bandwidth for transmitting the data set once when the front-end processor and the data set belong to the same institution, $k_2$ denotes the bandwidth of the internal transmission required when the front-end processor and the data set do not belong to the same institution, $\alpha$ is the environment deployment time of the sandbox, and $o_i = 0$ if the current data set and the front-end processor belong to the same institution, otherwise $o_i = 1$.
6. A queue generating apparatus supporting rapid sandboxes construction, the apparatus comprising: the sandbox monitoring module is used for acquiring sandbox creation data and classifying the sandbox creation data according to time periods;
the quantity prediction module is configured to predict, based on the sandbox creation data and using time-series prediction, the number of sandboxes required by each front-end processor in the next time period, including using the sandbox creation data to drive a bi-periodic Holt-Winters model in units of hours to obtain the number of sandboxes required by each front-end processor in the next hour, wherein the bi-periodic Holt-Winters model splits the seasonal component of the original Holt-Winters model into a first periodic component $p_t$ and a second periodic component $q_t$, with $\gamma_1$ as the smoothing coefficient of the first periodic component and $\gamma_2$ as the smoothing coefficient of the second periodic component, wherein the first periodic component takes a month as its period and the second periodic component takes a week as its period;
a queue allocation module for creating queues with the length equal to the number of the predicted sandboxes for each front-end processor, allocating tasks to each queue according to a first strategy, allocating tasks in the queues to each front-end processor task queue according to a second strategy for reverse order creation, wherein,
the task allocation to each queue according to the first strategy comprises the following steps: arranging all tasks in descending order according to the size of the data set to wait for distribution; comparing task completion time after adding a task in each queue according to the sequence, and distributing the task to the queue with the shortest task completion time;
the task in the queue is distributed to each front-end processor task queue for reverse order creation according to a second strategy, which comprises the following steps: taking out tasks according to the sequence of each task in each queue; comparing the total time added by executing all the front-end processor task queues after adding a task in each front-end processor, and distributing the task to the front-end processor task queue with the shortest total time increase; creating tasks according to the reverse order of the task queues of each front-end processor;
and the sandbox construction module is used for constructing sandboxes for task layering in the task queue of each front-end processor by utilizing the basic sandbox mirror image.
7. A computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1-5 for queue generation supporting rapid sandboxes construction.
8. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the queue generating method supporting rapid sandbox construction according to any one of claims 1 to 5.
CN202311269953.3A 2023-09-28 2023-09-28 Queue generating method and device supporting rapid sandbox construction Active CN116991563B (en)


Publications (2)

Publication Number Publication Date
CN116991563A CN116991563A (en) 2023-11-03
CN116991563B true CN116991563B (en) 2023-12-22




