CN115827250A - Data storage method, device and equipment - Google Patents

Data storage method, device and equipment

Publication number: CN115827250A
Authority: CN (China)
Prior art keywords: task, resource, pointer, memory, node
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202211706903.2A
Other languages: Chinese (zh)
Inventors: 张海玉, 孙伶君, 陈波扬, 程淼, 符哲蔚, 丁乃英, 刘�东, 刘明, 邓志吉
Current assignee: Zhejiang Dahua Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority application: CN202211706903.2A

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a data storage method, apparatus, and device in the field of data processing. The method comprises the following steps: after receiving a task processing instruction, a task node determines that the states of the pointers in its target pointer queue are all non-empty; a pointer in the non-empty state indicates that the memory resource block it points to in the data queue is in use, and the data queue comprises at least one memory resource block. The task node then sends a resource acquisition request to the memory node pool so that, when the memory node pool determines that its current resources can satisfy the request, it feeds back a resource allocation pointer to the target pointer queue, the resource allocation pointer pointing to the target resource allocated to the task node. The task node stores the task data corresponding to the task processing instruction using the target resource pointed to by the resource allocation pointer. By combining resource pre-allocation with dynamic allocation, the throughput of the overall pipeline is improved and the method is applicable to more scenarios.

Description

Data storage method, device and equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data storage method, apparatus, and device.
Background
With the continued development of deep learning, network model compression techniques have become more mature, so that the complexity of a single model is reduced, the memory a model occupies keeps shrinking, and inference keeps getting faster. This also makes it possible to solve complex-scenario problems through multi-model collaboration.
At present, a complex scenario is often split into a plurality of mutually dependent sub-task nodes (i.e. models), and data flows through these task nodes until the final calculation result is obtained.
In the traditional pipeline operation mode, a fixed amount of memory resources is generally allocated to each task node in advance. When a task node needs to run, it first determines whether idle resources exist among its allocated memory resources; if so, the idle resources are used to store output data and run the associated task. If no idle resources exist, the task node stops running and waits for the data in its memory resources to be taken away by the next node.
Although this data storage mode in pipeline operation decouples the relationships between different task nodes, data throughput during operation is limited, and the applicable scenarios are limited.
Disclosure of Invention
The present application provides a data storage method, apparatus, and device to solve the problems of limited data throughput and limited applicable scenarios in the traditional pipeline operation mode. By combining resource pre-allocation with dynamic allocation, the throughput of the overall pipeline is improved and the range of applicable scenarios is expanded.
In a first aspect, an embodiment of the present application provides a data storage method, which is applied to any task node in a task processing system, where the task processing system includes a memory node pool, a plurality of task nodes, a pointer queue corresponding to each task node, and a data queue corresponding to each task node, and the method includes:
after receiving a task processing instruction, the task node determines that the states of the pointers in the target pointer queue are all non-empty; the pointer in a non-empty state represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
the task node sends a resource acquisition request to the memory node pool, so that when the memory node pool determines that the current resource can meet the resource acquisition request, a resource allocation pointer is fed back to a target pointer queue, and the resource allocation pointer points to a target resource allocated to the task node;
and the task node stores the task data corresponding to the task processing instruction by using the target resource pointed by the resource allocation pointer.
As an optional implementation, the method further comprises:
when the task node determines that a pointer with an empty state exists in the target pointer queue, storing task data by using a memory resource block in a data queue pointed by the pointer with the empty state;
and the pointer in the empty state represents that the memory resource block in the data queue pointed by the pointer is not used.
As an optional implementation, the method further comprises:
the task node receives the indication information fed back by the memory node pool, and re-sends the resource acquisition request to the memory node pool after a preset interval;
the indication information is fed back by the memory node pool when the current resource is determined not to meet the resource acquisition request.
In a second aspect, an embodiment of the present application provides another data storage method, which is applied to a memory node pool in a task processing system, where the task processing system includes the memory node pool, a plurality of task nodes, a pointer queue corresponding to each task node, and a data queue corresponding to each task node, and the method includes:
the method comprises the steps that a memory node pool receives a resource obtaining request sent by a task node, wherein the resource obtaining request is sent by the task node when the state of a pointer in a target pointer queue is determined to be non-empty, the pointer in the state of non-empty represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
when determining that the current resources in the memory node pool can meet the resource acquisition request, the memory node pool feeds back a resource allocation pointer to the target pointer queue corresponding to the task node, wherein the resource allocation pointer points to the target resource allocated to the task node, so that the task node stores task data corresponding to the received task processing instruction by using the target resource.
As an optional implementation, the method further comprises:
and when determining that the current resources in the memory node pool cannot meet the resource acquisition request, the memory node pool feeds back indication information to the task node, so that the task node sends the resource acquisition request to the memory node pool again at a preset time interval after receiving the indication information.
As an optional implementation manner, after the memory node pool feeds back the resource allocation pointer to the target pointer queue corresponding to the task node, so that the task node stores task data corresponding to the received task processing instruction by using the target resource, the method further includes:
and when determining that the data stored in the target resource is invalid, deleting the resource allocation pointer pointing to the target resource from the target pointer queue by the memory node pool.
As an optional implementation manner, the memory node pool includes a priority queue, and the priority queue stores priorities of a plurality of task nodes;
when the memory node pool receives resource acquisition requests sent by a plurality of task nodes, the memory node pool determines whether current resources in the memory node pool can meet the resource acquisition requests, and the method comprises the following steps:
the memory node pool determines the priorities of a plurality of task nodes according to the priority queues;
in order of priority from highest to lowest, the memory node pool sequentially determines whether the amount of currently unallocated resources in the memory node pool can meet the resource acquisition request of each task node;
the priority of each task node in the priority queue is determined according to the running time of the task node and the dependency relationship between the task node and other task nodes.
In a third aspect, an embodiment of the present application provides a data storage device, including:
the first receiving module is used for determining that the states of the pointers in the target pointer queue are all non-null after receiving the task processing instruction; the pointer in a non-empty state represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
the sending module is used for sending a resource obtaining request to the memory node pool so that the memory node pool feeds back a resource allocation pointer to the target pointer queue when determining that the current resource can meet the resource obtaining request, and the resource allocation pointer points to a target resource allocated to the task node;
and the storage module is used for storing the task data corresponding to the task processing instruction by using the target resource pointed by the resource allocation pointer.
In a fourth aspect, an embodiment of the present application provides another data storage device, including:
the second receiving module is used for receiving a resource obtaining request sent by the task node, wherein the resource obtaining request is sent by the task node when the state of the pointer in the target pointer queue is determined to be non-empty, the pointer in the state of non-empty represents that a memory resource block in the data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
and the feedback module is used for feeding back a resource allocation pointer to the target pointer queue corresponding to the task node when the current resources in the memory node pool are determined to meet the resource acquisition request, wherein the resource allocation pointer points to the target resource allocated to the task node, so that the task node stores task data corresponding to the received task processing instruction by using the target resource.
In a fifth aspect, an embodiment of the present application provides a data storage device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement any step of the data storage method in the first aspect.
In a sixth aspect, an embodiment of the present application provides a data storage device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement any step of the data storage method in the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, on which computer program instructions are stored, and the computer program instructions, when executed by a processor, implement any step of the data storage method in the first aspect and the second aspect.
In an eighth aspect, an embodiment of the present application provides a computer program product, including a computer program, where the computer program is stored in a computer-readable storage medium; when the processor of the memory access device reads the computer program from the computer-readable storage medium, the processor executes the computer program, so that the memory access device performs any one of the steps of the data storage method in the first aspect and the second aspect.
According to the method, resource allocation is autonomously optimized by combining resource pre-allocation (i.e. the memory resource blocks in the data queue) with dynamic allocation (the resources in the memory node pool). This improves the throughput of the overall pipeline and, at the same time, expands the applicable scenarios, making the method better suited to scenarios in which the resource occupation of task nodes changes rapidly.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a diagram illustrating a related art pipeline operation;
fig. 2 is a schematic structural diagram of a task processing system according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data storage method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another data storage method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a data storage device according to an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating another data storage device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a data storage device according to an embodiment of the present application;
fig. 8 is a schematic diagram of another data storage device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
The application scenario described in the embodiment of the present application is for more clearly illustrating the technical solution of the embodiment of the present application, and does not form a limitation on the technical solution provided in the embodiment of the present application, and it can be known by a person skilled in the art that with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems. In the description of the present application, the term "plurality" means two or more unless otherwise specified.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the continued development of deep learning, network model compression techniques have become more mature, so that the complexity of a single model is reduced, the memory a model occupies keeps shrinking, and inference keeps getting faster. This also makes it possible to solve complex-scenario problems through multi-model collaboration.
In terms of computing resources, multi-model collaboration can make full use of them and greatly improve their utilization. In terms of memory resources, however, different memory allocation schemes have a great influence on the throughput of a solution, and how to allocate memory resources becomes a problem to be solved urgently.
At present, a complex scenario is often split into a plurality of mutually dependent sub-task nodes (i.e. models), and data flows through these task nodes until the final calculation result is obtained.
Fig. 1 is a schematic diagram of a related-art pipeline operation. As shown in fig. 1, in the conventional pipeline operation mode, a fixed amount of memory resources is usually allocated to each task node in advance. These memory resources exist in the form of data queues: each task node corresponds to one data queue, and each data queue comprises at least one slot (a pre-allocated memory resource of fixed size). Adjacent task nodes interact through the data queues.
When a task node needs to run, it first determines whether an unused slot exists in its output data queue. If one exists, the node takes one slot of data out of its input data queue (i.e. the data queue corresponding to the preceding task node) and stores its result in the output queue; at this point the input queue gains one unused slot and the output queue gains one used slot. If no unused slot exists, the task node stops running and waits for the data in its data queue to be taken away by the next task node.
The above is a typical pipeline operation mode. Although the data queues decouple the relationships between task nodes, so that different task nodes can run in parallel, data throughput during operation is limited, and the applicable scenarios are limited.
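The traditional slot-based data queue described above can be sketched as follows. This is an illustrative Python model with assumed names (`SlotQueue`, `acquire`, `release`), not code from the patent:

```python
from collections import deque

class SlotQueue:
    """Illustrative fixed-size data queue: each slot is a pre-allocated
    memory block shared between adjacent task nodes."""
    def __init__(self, num_slots):
        self.free = deque(range(num_slots))   # unused slot indices
        self.used = deque()                   # slots holding output data

    def acquire(self):
        """Return an unused slot index, or None if the queue is full."""
        return self.free.popleft() if self.free else None

    def store(self, slot):
        self.used.append(slot)                # slot now holds output data

    def release(self):
        """Downstream node takes the data away, freeing one slot."""
        if self.used:
            self.free.append(self.used.popleft())

q = SlotQueue(2)
s = q.acquire()              # node runs only when an unused slot exists
q.store(s)
q.store(q.acquire())
assert q.acquire() is None   # queue full -> node must stall
q.release()                  # downstream node consumes one item
assert q.acquire() == 0      # slot reusable again
```

The stall on a full queue is exactly the throughput limitation the method of this application removes by falling back to a dynamically allocated memory node pool.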
In order to solve the above problems, an embodiment of the present application provides a data storage method, which is applied to a task processing system, and improves the throughput of an overall pipeline and expands an adaptive scenario by a combination of resource pre-allocation and dynamic allocation.
Fig. 2 is a schematic diagram of a task processing system according to an embodiment of the present application, and as shown in fig. 2, the task processing system includes a memory node pool, a plurality of task nodes, a pointer queue corresponding to each task node, and a data queue corresponding to each task node.
The following description takes a task processing system containing one task node as an example and specifically introduces the memory node pool, the pointer queue corresponding to the task node, and the data queue corresponding to the task node:
the data queue stores pre-allocated memory resources, and the memory resources are artificially allocated to resources with fixed sizes corresponding to the task nodes when the task processing system is initialized. The data queue comprises at least one slot position, the resource amount corresponding to each slot position is the same, namely each slot position corresponds to one memory resource block, and the number of the slot positions can be manually set and changed.
It should be noted that, because the task processing system provided in the embodiment of the present application combines resource pre-allocation with dynamic allocation, the data queue exists only to absorb the large number of requests generated when pipeline operation starts up and to act as a buffer for the data stream. The size of the data queue corresponding to each task node therefore only needs to be roughly estimated; there is no need for the fine-grained calculation required in the related art.
The pointer queue stores pointers corresponding to the output data and/or to memory resources in the memory node pool. The length of the pointer queue is variable; its minimum length equals the length of the data queue, and within this minimum length the pointers correspond one-to-one with the slots in the data queue.
Pointers within the minimum length range have two states: empty and non-empty. A pointer in the non-empty state points to the memory address of its corresponding slot in the data queue and indicates that the slot is in use; a pointer in the empty state indicates that the slot it points to is unused.
When all slots in the data queue are used, i.e. all pointers within the minimum length range are non-empty, a task node that needs to process a task applies to the memory node pool for resources. If the memory node pool allocates resources to the task node, a non-empty pointer (hereinafter called a resource allocation pointer, for ease of distinction) is added to the pointer queue; this pointer points to the resources allocated by the memory node pool, and the length of the pointer queue increases. In other words, any pointer beyond the minimum queue length is necessarily non-empty and points to a memory address in the memory node pool.
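The growth of the pointer queue past its minimum length can be sketched as below. This is a minimal Python illustration with assumed names (`PointerQueue`, `add_pool_pointer`); memory addresses are stood in for by strings:

```python
class PointerQueue:
    """Illustrative pointer queue: the first min_len entries map 1:1 to
    data-queue slots; any entry beyond min_len is a resource allocation
    pointer into the memory node pool, so the queue can grow."""
    def __init__(self, min_len):
        self.min_len = min_len
        self.slots = [None] * min_len   # None = empty state = slot unused
        self.extra = []                 # pool-backed resource allocation pointers

    def all_non_empty(self):
        """True when every slot-backed pointer is in use (triggers a pool request)."""
        return all(p is not None for p in self.slots)

    def add_pool_pointer(self, addr):
        """Pool allocated a resource: the queue grows past its minimum length."""
        self.extra.append(addr)

    def length(self):
        return len(self.slots) + len(self.extra)

pq = PointerQueue(2)
assert not pq.all_non_empty()           # empty pointers exist: use the slots
pq.slots[0] = "slot0_addr"
pq.slots[1] = "slot1_addr"
assert pq.all_non_empty()               # now the node must ask the pool
pq.add_pool_pointer("pool_addr_7")
assert pq.length() == 3                 # queue grew beyond its minimum length
```

Any entry in `extra` is by construction non-empty and pool-backed, matching the statement that pointers beyond the minimum length always point into the memory node pool.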
The memory node pool contains the requested memory resources (i.e. memory resources of fixed size requested in advance), a memory management tool, and a priority queue.
The memory management tool is mainly responsible for packaging, allocating, and recycling the memory resources requested by task nodes. Because all requests come from task nodes and the memory size requested by each task node is relatively fixed (that is, the basic unit of resources requested by each node is the same, for example one memory resource block), memory allocation and fragmentation management are relatively easy.
The priority queue stores the priorities of all task nodes in the task processing system. In some embodiments, the priority queue may store the priorities of the pointer queues corresponding to the task nodes as a record of the task-node priorities; the priorities in the priority queue are variable.
It should be noted that, when only one task node is included in the task processing system, the priority queue does not need to be included in the memory node pool.
Fig. 3 is a schematic flowchart of a data storage method according to an embodiment of the present application; as shown in fig. 3, an embodiment of the present application provides a data storage method, which is applied to any task node in a task processing system, and specifically includes the following steps:
step 301, after receiving a task processing instruction, determining that the states of the pointers in the target pointer queue are all non-null;
the pointer in a non-empty state represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
After receiving the task processing instruction, the task node first determines whether an empty pointer exists in its target pointer queue, i.e. whether an unused memory resource block (slot) exists in its data queue. If it determines that no empty pointer exists in the target pointer queue, i.e. all memory resource blocks in its data queue are in use, the subsequent step of requesting resources from the memory node pool is performed.
If the task node determines that a pointer in an empty state exists in the target pointer queue, namely an unused memory resource block exists in a corresponding data queue, storing task data by using the memory resource block in the data queue pointed by the pointer in the empty state; and the pointer in the empty state represents that the memory resource block in the data queue pointed by the pointer is not used.
Step 302, sending a resource acquisition request to a memory node pool;
When the task node determines that the states of the pointers in the target pointer queue are all non-empty, i.e. that all memory resource blocks in its data queue are in use, it sends a resource acquisition request to the memory node pool, i.e. it requests the memory node pool to allocate resources. When the memory node pool determines that its current resources can satisfy the request, it feeds back a resource allocation pointer to the target pointer queue, the resource allocation pointer pointing to the target resource allocated to the task node.
In implementation, the resource acquisition request carries the identifier of the task node and the amount of resources the task node requests (for example, a number of memory resource blocks), so that the memory node pool can determine which task node sent the request according to the identifier and can determine, according to the requested amount, whether a target resource can be allocated to that task node.
Step 303, storing the task data corresponding to the task processing instruction by using the target resource pointed by the resource allocation pointer.
After the memory node pool feeds back the resource allocation pointer to the target pointer queue corresponding to the task node, the task node stores the task data corresponding to the task processing instruction using the resource pointed to by the resource allocation pointer and executes the task corresponding to the instruction.
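Steps 301 to 303 can be sketched together as below. This is an illustrative Python model under assumed names (`store_task_data`); the pointer queue is a plain list in which `None` marks an empty pointer, and pool resources are counted in blocks:

```python
def store_task_data(pointer_queue, pool_free, data, n_blocks=1):
    """Steps 301-303 in miniature. pointer_queue: list where None marks an
    empty pointer (unused pre-allocated block). pool_free: number of blocks
    left in the memory node pool. Returns (location, remaining pool blocks);
    location is None when the pool cannot satisfy the request."""
    # Step 301: look for an empty pointer (unused memory resource block)
    for i, p in enumerate(pointer_queue):
        if p is None:
            pointer_queue[i] = data           # use the pre-allocated resource
            return ("slot", i), pool_free
    # Step 302: all pointers non-empty -> request resources from the pool
    if pool_free >= n_blocks:
        pointer_queue.append(data)            # resource allocation pointer added
        return ("pool", len(pointer_queue) - 1), pool_free - n_blocks
    # Pool cannot satisfy the request: indication information, retry later
    return None, pool_free

pq = ["out0", "out1"]                         # both slots already in use
loc, left = store_task_data(pq, 4, "out2")
assert loc == ("pool", 2) and left == 3       # dynamically allocated
loc, left = store_task_data([None, "x"], 0, "out3")
assert loc == ("slot", 0)                     # empty pointer -> reuse a slot
```

Pre-allocated slots absorb the common case, and the pool is consulted only when the queue is saturated, which is the pre-allocation-plus-dynamic-allocation combination the method claims.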
According to the method, resource allocation is autonomously optimized by combining resource pre-allocation (i.e. the memory resource blocks in the data queue) with dynamic allocation (the resources in the memory node pool). This improves the throughput of the overall pipeline and, at the same time, expands the applicable scenarios, making the method better suited to scenarios in which the resource occupation of task nodes changes rapidly.
In implementation, after a task node sends a resource acquisition request to the memory node pool, the memory node pool feeds back indication information to the task node when it determines that its current resources cannot satisfy the request. After receiving the indication information, the task node re-sends the resource acquisition request to the memory node pool after a preset interval. The indication information is fed back by the memory node pool when it determines that its current resources cannot satisfy the request, and indicates that resources cannot currently be allocated to the task node.
In some embodiments, the indication information may take the form of a null pointer. It should be noted that, when the indication information is a null pointer, the memory node pool merely sends the null pointer to the task node to indicate that resources cannot currently be allocated; it does not add the null pointer to the task node's pointer queue.
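The retry behaviour can be sketched as below. This is a hedged Python illustration in which the pool's indication information is modelled as `None` and the preset duration as a `time.sleep` interval; the names, the interval, and the retry cap are all assumptions:

```python
import time

def request_with_retry(request_pool, node_id, n_blocks, interval=0.05, max_tries=3):
    """Re-send the resource acquisition request after a preset interval
    whenever the pool feeds back a null pointer (None) as indication info."""
    for _ in range(max_tries):
        ptr = request_pool(node_id, n_blocks)
        if ptr is not None:          # pool fed back a resource allocation pointer
            return ptr
        time.sleep(interval)         # wait the preset duration, then retry
    return None                      # still unsatisfied after max_tries

# A pool that fails once, then succeeds (as if resources were freed meanwhile)
replies = iter([None, "blocks[5:7]"])
ptr = request_with_retry(lambda node, n: next(replies), "nodeA", 2, interval=0.0)
assert ptr == "blocks[5:7]"
```

Note the null pointer never enters the pointer queue here; it is only the reply value, consistent with the embodiment above.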
FIG. 4 is a schematic flow chart illustrating another data storage method according to an embodiment of the present application; as shown in fig. 4, an embodiment of the present application provides another data storage method, which is applied to a memory node pool in a task processing system, and specifically includes the following steps:
step 401, a memory node pool receives a resource acquisition request sent by a task node;
the resource obtaining request is sent when the task node determines that the states of the pointers in the target pointer queue are all non-empty, namely the task node determines that all memory resource blocks in the corresponding data queue are used. The pointer in the non-empty state represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block.
In some embodiments, the resource acquisition request carries the identifier of the task node and the amount of resources the task node requests, for example a number of memory resource blocks. When the memory node pool receives the resource acquisition request, it determines which task node sent the request according to the carried identifier, and determines the resources to be allocated according to the carried amount of requested resources.
Step 402, when determining that the current resources in the memory node pool can meet the resource acquisition request, the memory node pool feeds back a resource allocation pointer to the target pointer queue corresponding to the task node;
After the memory node pool receives a resource acquisition request from a task node, it evaluates, according to the current resource situation in the pool, the task node that sent the request, the resources that task node requests, and so on, whether resources can be allocated to that task node.
If the memory node pool determines that its current resources can satisfy the resource acquisition request, it determines the target resource to allocate to the task node and feeds back a resource allocation pointer to the target pointer queue corresponding to the task node, so that the task node stores the task data corresponding to the received task processing instruction using the target resource. The resource allocation pointer points to the target resource allocated to the task node.
If the memory node pool determines that the current resources in the memory node pool cannot meet the resource acquisition request, it feeds back indication information to the task node, so that the task node, after receiving the indication information, sends the resource acquisition request to the memory node pool again at a preset time interval.
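The retry behaviour described above can be sketched as follows; the interval value, the function names, and modelling the indication information as a `None` return are illustrative assumptions:

```python
import time

PRESET_INTERVAL_S = 0.01  # the "preset time interval"; the value is illustrative

def acquire_with_retry(send_request, max_attempts=10):
    """Send a resource acquisition request to the memory node pool.
    send_request returns an allocation pointer on success, or None when
    the pool feeds back indication information that resources are
    insufficient; in that case, wait the preset interval and send again."""
    for _ in range(max_attempts):
        allocation = send_request()
        if allocation is not None:
            return allocation
        time.sleep(PRESET_INTERVAL_S)  # wait, then send the request again
    return None  # still unsatisfied after max_attempts
```

Capping the number of attempts is a defensive choice of this sketch; the application itself only specifies retrying at the preset interval.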
In this method, the dynamic allocation of resources realizes autonomous optimization of resource allocation, thereby improving the throughput of the whole pipeline. Dynamic allocation of resources to task nodes is also more flexible, making the method better suited to scenarios in which the resource occupation of task nodes changes rapidly.
In some embodiments, after the memory node pool in step 402 feeds back the resource allocation pointer to the target pointer queue corresponding to the task node, so that the task node stores task data corresponding to the received task processing instruction using the target resource, the following steps are further performed in this embodiment of the present application:
When determining that the data stored in the target resource is invalid, the memory node pool deletes the resource allocation pointer pointing to the target resource from the target pointer queue.
That is, after the task node stores the task data corresponding to the task processing instruction in the target resource, and the memory node pool monitors that the data stored in the target resource has become invalid, the memory node pool uses a memory management tool to reclaim the resource and, at the same time, deletes the resource allocation pointer pointing to the target resource from the target pointer queue of the task node. The data stored in the target resource becomes invalid once the next task node downstream has acquired that data.
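A minimal sketch of this recovery step, assuming a list-backed target pointer queue and free list (the class and function names are hypothetical):

```python
class AllocationPointer:
    """A resource allocation pointer; attribute names are illustrative."""
    def __init__(self, block):
        self.block = block  # the target resource (memory resource block)

def reclaim_on_invalid(free_blocks, target_pointer_queue, ptr):
    """Invoked when the memory node pool determines that the data stored
    in the target resource is invalid (the downstream task node has
    acquired it): delete the allocation pointer from the target pointer
    queue and return the block to the pool's free list, modelling the
    memory-management-tool recovery."""
    target_pointer_queue.remove(ptr)  # delete the pointer from the queue
    free_blocks.append(ptr.block)     # resource recovery into the pool
```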
In some embodiments, the memory node pool includes a priority queue, where the priority queue stores priorities of all task nodes in the task processing system;
When, in step 401, the memory node pool receives resource acquisition requests sent by a plurality of task nodes, the memory node pool determining whether the current resources in the memory node pool can meet the resource acquisition requests includes:
the memory node pool determines the priorities of the plurality of task nodes according to the priority queue;
in descending order of priority, the memory node pool sequentially determines whether the amount of currently unallocated resources in the memory node pool can meet the resource acquisition request of each task node;
the priority of each task node in the priority queue is determined according to the running time of the task node and the dependency relationship between the task node and other task nodes.
The running time refers to the time the task node takes to complete its task; the shorter the running time, the less likely the data stream is to be blocked at that task node, so its priority can be appropriately lowered. That is, the shorter the running time of a task node, the lower its corresponding priority.
The dependency relationship refers to the upstream-downstream relationship between the task node and other task nodes; the priority of a downstream node can be appropriately raised relative to that of an upstream node. That is, the more upstream nodes a task node has, the higher its priority.
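The priority rule and the priority-ordered allocation described above can be sketched as follows. The application states only the direction of each factor (longer running time and more upstream nodes both raise priority), so the linear formula and its weights are assumptions of this sketch:

```python
# Weights are assumptions; the text specifies only the direction of each factor.
RUNTIME_WEIGHT = 1.0
UPSTREAM_WEIGHT = 1.0

def node_priority(running_time, num_upstream_nodes):
    """Longer running time -> more likely to block the pipeline -> higher
    priority; more upstream nodes (node sits further downstream) -> higher
    priority."""
    return RUNTIME_WEIGHT * running_time + UPSTREAM_WEIGHT * num_upstream_nodes

def allocate_in_priority_order(unallocated, requests, priorities):
    """Serve resource acquisition requests in descending priority, granting
    each request only if the currently unallocated amount can satisfy it.
    requests and priorities are {node_id: value} mappings."""
    granted = {}
    for node in sorted(requests, key=lambda n: priorities[n], reverse=True):
        if requests[node] <= unallocated:
            unallocated -= requests[node]
            granted[node] = requests[node]
    return granted
```

Note that a high-priority request that is too large is skipped rather than blocking lower-priority requests; whether the pool should instead stall is not specified in the text, so this is a design choice of the sketch.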
By setting priorities for dynamic allocation, this method introduces human guidance into dynamic resource allocation, making the allocation of resources more reasonable.
The data storage method provided by the embodiments of the application combines pre-allocation and dynamic allocation of resources; the two are complementary, so the method is suitable for more scenarios. Because no assumption is made that the resource occupation of task nodes is relatively fixed, there is no need to precisely calculate the pre-allocated resources of each task node, which makes the method better suited to scenarios in which the resource occupation of task nodes changes rapidly. At the same time, resource allocation is prioritized and can be managed flexibly and reasonably, so that the overall throughput of the pipeline is improved.
Based on the same inventive concept, the embodiments of the present application further provide a data storage device. Since the device is the device used in the method of the embodiments of the present application, and the principle by which the device solves the problem is similar to that of the method, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Fig. 5 is a schematic diagram of a data storage device according to an embodiment of the present application, and referring to fig. 5, a data storage device according to an embodiment of the present application includes:
a first receiving module 501, configured to determine that the states of the pointers in the target pointer queue are all non-empty after receiving the task processing instruction; the pointer in a non-empty state represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
a sending module 502, configured to send a resource obtaining request to a memory node pool, so that when it is determined that the current resource can meet the resource obtaining request, the memory node pool feeds back a resource allocation pointer to the target pointer queue, where the resource allocation pointer points to a target resource allocated to the task node;
a storage module 503, configured to store task data corresponding to the task processing instruction by using the target resource pointed by the resource allocation pointer.
Optionally, the storage module 503 is further configured to: when the pointer in the empty state exists in the target pointer queue, storing the task data by using a memory resource block in a data queue pointed by the pointer in the empty state; and the pointer in the empty state represents that the memory resource block in the data queue pointed by the pointer is not used.
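The two storage paths described for the storage module (using the pre-allocated memory resource block of an empty-state pointer when one exists, otherwise falling back to dynamic allocation from the memory node pool) can be sketched as follows; the dict-based pointer model and the function names are assumptions:

```python
def store_task_data(target_pointer_queue, task_data, request_from_pool):
    """Storage path of a task node: if a pointer in the empty state exists,
    store the task data in its pre-allocated memory resource block; if all
    pointers are non-empty, obtain a resource allocation pointer from the
    memory node pool via request_from_pool (modelling the pool's feedback
    into the target pointer queue)."""
    for ptr in target_pointer_queue:
        if ptr["state"] == "empty":        # its memory resource block is unused
            ptr["block"].append(task_data)  # store using the pre-allocated block
            ptr["state"] = "non-empty"
            return ptr
    ptr = request_from_pool()               # dynamic allocation from the pool
    ptr["block"].append(task_data)
    ptr["state"] = "non-empty"
    target_pointer_queue.append(ptr)        # pointer fed back into the queue
    return ptr
```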
Optionally, the storage module 503 is further configured to receive indication information fed back by the memory node pool, and send a resource acquisition request to the memory node pool again at a preset time interval; the indication information is fed back by the memory node pool when the current resource is determined not to meet the resource acquisition request.
An embodiment of the present application further provides another data storage device. Fig. 6 is a schematic diagram of another data storage device provided in an embodiment of the present application. Referring to fig. 6, the data storage device includes:
a second receiving module 601, configured to receive a resource acquisition request sent by a task node, where the resource acquisition request is sent by the task node when it is determined that all the pointers in a target pointer queue are in a non-empty state, a pointer in the non-empty state represents that a memory resource block in a data queue to which the pointer points is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue includes at least one memory resource block;
a feedback module 602, configured to, when it is determined that the current resources in the memory node pool can meet the resource acquisition request, feed back a resource allocation pointer to the target pointer queue corresponding to the task node, where the resource allocation pointer points to the target resource allocated to the task node, so that the task node stores task data corresponding to the received task processing instruction by using the target resource.
Optionally, the feedback module 602 is further configured to: and when determining that the current resources in the memory node pool cannot meet the resource acquisition request, feeding back indication information to the task node so that the task node sends the resource acquisition request to the memory node pool again at a preset time interval after receiving the indication information.
Optionally, after the feedback module 602 feeds back the resource allocation pointer to the target pointer queue corresponding to the task node so that the task node stores the task data corresponding to the received task processing instruction by using the target resource, the feedback module 602 is further configured to: when it is determined that the data stored in the target resource is invalid, delete the resource allocation pointer pointing to the target resource from the target pointer queue.
Optionally, the memory node pool includes a priority queue, where the priority queue stores priorities of a plurality of task nodes;
Optionally, when resource acquisition requests sent by a plurality of task nodes are received, the feedback module 602 being configured to determine whether the current resources in the memory node pool can meet the resource acquisition requests includes: determining the priorities of the plurality of task nodes according to the priority queue; in descending order of priority, sequentially determining whether the amount of currently unallocated resources in the memory node pool can meet the resource acquisition request of each task node; where the priority of each task node in the priority queue is determined according to the running time of the task node and the dependency relationship between the task node and other task nodes.
Based on the same inventive concept, the embodiments of the present application further provide a data storage device. Since the device is the device used in the method of the embodiments of the present application, and the principle by which the device solves the problem is similar to that of the method, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
In some possible embodiments, an apparatus according to the present application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps of the data storage method according to various exemplary embodiments of the present application described above in the present specification.
A device 700 according to this embodiment of the present application is described below with reference to fig. 7. The device 700 shown in fig. 7 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present application.
As shown in fig. 7, the device 700 is embodied in the form of a general purpose device. The components of device 700 may include, but are not limited to: the at least one processor 701, the at least one memory 702, the bus 703 connecting the different system components (including the memory 702 and the processor 701), wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps of:
after receiving a task processing instruction, determining that the states of the pointers in the target pointer queue are all non-null; the pointer in a non-empty state represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
sending a resource acquisition request to a memory node pool, so that when the memory node pool determines that the current resource can meet the resource acquisition request, feeding back a resource allocation pointer to a target pointer queue, wherein the resource allocation pointer points to a target resource allocated to a task node;
and storing the task data corresponding to the task processing instruction by using the target resource pointed by the resource allocation pointer.
Bus 703 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 702 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 7021 and/or cache memory 7022, and may further include Read Only Memory (ROM) 7023.
Memory 702 may also include a program/utility 7025 having a set (at least one) of program modules 7024, such program modules 7024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Device 700 can also communicate with one or more external devices 704 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with device 700, and/or with any devices (e.g., router, modem, etc.) that enable device 700 to communicate with one or more other devices. Such communication may occur via input/output (I/O) interfaces 705. Also, the device 700 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 706. As shown, the network adapter 706 communicates with the other modules of the device 700 over the bus 703. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the device 700, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Optionally, the processor is further configured to: when a pointer in an empty state exists in the target pointer queue, storing task data by using a memory resource block in a data queue pointed by the pointer in the empty state; and the pointer in the empty state represents that the memory resource block in the data queue pointed by the pointer is not used.
Optionally, the processor is further configured to: receiving indication information fed back by the memory node pool, and sending a resource acquisition request to the memory node pool again at intervals of preset duration; the indication information is fed back by the memory node pool when the current resource is determined not to meet the resource acquisition request.
A data storage device is also provided in an embodiment of the present application, and a device 800 according to this embodiment of the present application is described below with reference to fig. 8. The device 800 shown in fig. 8 is only an example and should not impose any limitation on the functionality and scope of use of embodiments of the present application.
As shown in fig. 8, the device 800 is in the form of a general purpose device. The components of device 800 may include, but are not limited to: the at least one processor 801, the at least one memory 802, and the bus 803 connecting the various system components (including the memory 802 and the processor 801), wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps of:
receiving a resource acquisition request sent by a task node, wherein the resource acquisition request is sent by the task node when the state of a pointer in a target pointer queue is determined to be non-empty, the pointer in the state of non-empty represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
when it is determined that the current resources in the memory node pool can meet the resource acquisition request, feeding back a resource allocation pointer to the target pointer queue corresponding to the task node, wherein the resource allocation pointer points to the target resource allocated to the task node, so that the task node stores task data corresponding to the received task processing instruction by using the target resource.
Bus 803 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 802 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 8021 and/or cache memory 8022, and may further include Read Only Memory (ROM) 8023.
Memory 802 may also include a program/utility 8025 having a set (at least one) of program modules 8024, such program modules 8024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Device 800 can also communicate with one or more external devices 804 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with device 800, and/or with any devices (e.g., router, modem, etc.) that enable device 800 to communicate with one or more other devices. Such communication may occur via input/output (I/O) interfaces 805. Also, the device 800 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 806. As shown, the network adapter 806 communicates with the other modules of the device 800 over the bus 803. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the device 800, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Optionally, the processor is further configured to:
when determining that the current resources in the memory node pool cannot meet the resource acquisition request, feeding back indication information to the task node, so that the task node sends the resource acquisition request to the memory node pool again at a preset time interval after receiving the indication information.
Optionally, after the processor feeds back the resource allocation pointer to the target pointer queue corresponding to the task node so that the task node stores the task data corresponding to the received task processing instruction by using the target resource, the processor is further configured to:
and when the data stored in the target resource is determined to be invalid, deleting the resource allocation pointer pointing to the target resource from the target pointer queue.
Optionally, the memory node pool includes a priority queue, and the priority queue stores priorities of a plurality of task nodes;
When resource acquisition requests sent by a plurality of task nodes are received, the processor being configured to determine whether the current resources in the memory node pool can meet the resource acquisition requests includes:
the memory node pool determines the priorities of the plurality of task nodes according to the priority queue;
in descending order of priority, the memory node pool sequentially determines whether the amount of currently unallocated resources in the memory node pool can meet the resource acquisition request of each task node;
the priority of each task node in the priority queue is determined according to the running time of the task node and the dependency relationship between the task node and other task nodes.
In some possible embodiments, various aspects of a data storage method provided by the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of a data storage method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device or server. In the case of a remote device, the remote device may be connected to the user device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and block diagrams, and combinations of flows and blocks in the flowchart illustrations and block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A data storage method is characterized in that the method is applied to any task node in a task processing system, the task processing system comprises a memory node pool, a plurality of task nodes, a pointer queue corresponding to each task node and a data queue corresponding to each task node, and the method comprises the following steps:
after receiving a task processing instruction, the task node determines that the states of the pointers in the target pointer queue are all non-empty; the pointer in a non-empty state represents that a memory resource block in a data queue pointed by the pointer is used, the target pointer queue is a pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
the task node sends a resource acquisition request to the memory node pool, so that the memory node pool feeds back a resource allocation pointer to the target pointer queue when determining that the current resource can meet the resource acquisition request, and the resource allocation pointer points to a target resource allocated to the task node;
and the task node stores task data corresponding to the task processing instruction by using the target resource pointed by the resource allocation pointer.
2. The method of claim 1, further comprising:
when the task node determines that a pointer with an empty state exists in the target pointer queue, the task node stores the task data by using a memory resource block in a data queue pointed by the pointer with the empty state;
and the pointer in the empty state represents that the memory resource block in the data queue pointed by the pointer is not used.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the task node receives the indication information fed back by the memory node pool, and sends a resource acquisition request to the memory node pool again at intervals of preset duration;
and the indication information is fed back by the memory node pool when the current resource is determined not to meet the resource acquisition request.
4. A data storage method is characterized in that the method is applied to a memory node pool in a task processing system, the task processing system comprises the memory node pool, a plurality of task nodes, a pointer queue corresponding to each task node and a data queue corresponding to each task node, and the method comprises the following steps:
the memory node pool receives a resource acquisition request sent by a task node, wherein the resource acquisition request is sent by the task node when it determines that the states of the pointers in a target pointer queue are all non-empty, a pointer in the non-empty state represents that a memory resource block in a data queue pointed to by the pointer is used, the target pointer queue is the pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
and when determining that the current resources in the memory node pool can meet the resource acquisition request, the memory node pool feeds back a resource allocation pointer to the target pointer queue corresponding to the task node, wherein the resource allocation pointer points to the target resource allocated to the task node, so that the task node stores task data corresponding to a received task processing instruction by using the target resource.
5. The method of claim 4, further comprising:
and when determining that the current resources in the memory node pool cannot meet the resource acquisition request, the memory node pool feeds back indication information to the task node, so that after receiving the indication information, the task node sends the resource acquisition request to the memory node pool again at a preset time interval.
6. The method according to claim 4, wherein after the memory node pool feeds back the resource allocation pointer to the target pointer queue corresponding to the task node, so that the task node stores task data corresponding to the received task processing instruction by using the target resource, the method further comprises:
when the memory node pool determines that the data stored in the target resource is invalid, the memory node pool deletes the resource allocation pointer pointing to the target resource from the target pointer queue.
7. The method according to any one of claims 4 to 6, wherein the memory node pool comprises a priority queue, and the priority queue stores the priorities of the plurality of task nodes;
when the memory node pool receives resource acquisition requests sent by a plurality of task nodes, determining whether the current resources in the memory node pool can satisfy the resource acquisition requests comprises:
the memory node pool determines the priority of each task node according to the priority queue; and
in descending order of priority, the memory node pool sequentially determines whether the amount of currently unallocated resources in the memory node pool can satisfy the resource acquisition request of each task node;
the priority of each task node in the priority queue is determined according to the running time of the task node and the dependency relationship between the task node and other task nodes.
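The descending-priority ordering rule of claim 7 can be illustrated with a short sketch. The function name and the flat-dictionary representation of requests and priorities are assumptions; how priorities are derived from running time and dependencies is left to the implementation.

```python
def allocate_by_priority(free_amount, requests, priority):
    """Serve pending resource acquisition requests in descending task-node
    priority; a request is granted only while the currently unallocated
    amount still covers it (the ordering rule of claim 7)."""
    granted = []
    for node in sorted(requests, key=lambda n: priority[n], reverse=True):
        if requests[node] <= free_amount:
            free_amount -= requests[node]   # reserve the resource for this node
            granted.append(node)
    return granted, free_amount
```

Note that a high-priority node's large request can leave lower-priority requests unserved even when they individually fit within the original free amount.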
8. A data storage device, the device comprising:
a first receiving module, configured to determine, after a task processing instruction is received, that the states of the pointers in a target pointer queue are all non-empty; wherein a pointer in the non-empty state indicates that the memory resource block in the data queue pointed to by the pointer is in use, the target pointer queue is the pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
a sending module, configured to send a resource acquisition request to a memory node pool, so that when determining that a current resource can meet the resource acquisition request, the memory node pool feeds back a resource allocation pointer to the target pointer queue, where the resource allocation pointer points to a target resource allocated to the task node;
and a storage module, configured to store the task data corresponding to the task processing instruction by using the target resource pointed to by the resource allocation pointer.
9. A data storage device, the device comprising:
a second receiving module, configured to receive a resource acquisition request sent by a task node, wherein the resource acquisition request is sent by the task node upon determining that the states of all pointers in a target pointer queue are non-empty; a pointer in the non-empty state indicates that the memory resource block in the data queue pointed to by the pointer is in use, the target pointer queue is the pointer queue corresponding to the task node, and the data queue comprises at least one memory resource block;
and a feedback module, configured to, when it is determined that the current resources in the memory node pool can satisfy the resource acquisition request, feed back a resource allocation pointer to the target pointer queue corresponding to the task node, wherein the resource allocation pointer points to the target resource allocated to the task node, so that the task node stores task data corresponding to the received task processing instruction by using the target resource.
10. A data storage device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
CN202211706903.2A 2022-12-29 2022-12-29 Data storage method, device and equipment Pending CN115827250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211706903.2A CN115827250A (en) 2022-12-29 2022-12-29 Data storage method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211706903.2A CN115827250A (en) 2022-12-29 2022-12-29 Data storage method, device and equipment

Publications (1)

Publication Number Publication Date
CN115827250A true CN115827250A (en) 2023-03-21

Family

ID=85519292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211706903.2A Pending CN115827250A (en) 2022-12-29 2022-12-29 Data storage method, device and equipment

Country Status (1)

Country Link
CN (1) CN115827250A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116795322A * 2023-06-21 2023-09-22 广州市玄武无线科技股份有限公司 Multi-label queue implementation method and device, electronic equipment and storage medium
CN117235167A * 2023-11-14 2023-12-15 戎行技术有限公司 Task flow dynamic configuration method and system applied to ETL system
CN117235167B * 2023-11-14 2024-01-30 戎行技术有限公司 Task flow dynamic configuration method and system applied to ETL system

Similar Documents

Publication Publication Date Title
US9916183B2 (en) Scheduling mapreduce jobs in a cluster of dynamically available servers
US10223166B2 (en) Scheduling homogeneous and heterogeneous workloads with runtime elasticity in a parallel processing environment
US9569262B2 (en) Backfill scheduling for embarrassingly parallel jobs
CN109034396B (en) Method and apparatus for processing deep learning jobs in a distributed cluster
CN115827250A (en) Data storage method, device and equipment
CN109992407B (en) YARN cluster GPU resource scheduling method, device and medium
CN112148455B (en) Task processing method, device and medium
US8572614B2 (en) Processing workloads using a processor hierarchy system
KR20140080434A (en) Device and method for optimization of data processing in a mapreduce framework
CN109117252B (en) Method and system for task processing based on container and container cluster management system
US9471387B2 (en) Scheduling in job execution
US11347546B2 (en) Task scheduling method and device, and computer storage medium
CN111198754B (en) Task scheduling method and device
CN112905342A (en) Resource scheduling method, device, equipment and computer readable storage medium
Liu et al. Optimizing shuffle in wide-area data analytics
EP4123449A1 (en) Resource scheduling method and related device
US20220300322A1 (en) Cascading of Graph Streaming Processors
CN107632890B (en) Dynamic node distribution method and system in data stream architecture
CN117093335A (en) Task scheduling method and device for distributed storage system
CN110120959A (en) Big data method for pushing, device, system, equipment and readable storage medium storing program for executing
CN114090201A (en) Resource scheduling method, device, equipment and storage medium
CN114116790A (en) Data processing method and device
CN116737088B (en) Object migration method and device, electronic equipment and storage medium
CN111367875B (en) Ticket file processing method, system, equipment and medium
CN112416539B (en) Multi-task parallel scheduling method for heterogeneous many-core processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination