CN110659127A - Method, device and system for processing task - Google Patents
- Publication number: CN110659127A
- Application number: CN201810701224.3A
- Authority
- CN
- China
- Prior art keywords
- processing
- task
- resource
- processed
- container
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
Abstract
The application relates to a method, an apparatus, and a system for processing tasks. The method includes: receiving a processing request message, where the processing request message includes resource demand information, a task to be processed, a processing program for processing the task, and indication information indicating that a container is to be used; allocating a container according to the indication information, and allocating resources for processing the task according to the resource demand information; and processing the task in the container through the resources and the processing program. The method and apparatus avoid mutual interference among multiple tasks.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a system for processing a task.
Background
A server cluster framework is made up of multiple servers and can provide a large amount of computing resources for processing tasks. For example, server clusters are now used to process deep learning tasks.
The server cluster framework includes a management server and multiple processing servers. The management server collects the resource situation of each processing server, allocates a task to be processed to a particular processing server according to those resource situations, and the processing server then processes the task with its own resources.
In the process of implementing the present application, the inventors found that the above approach has at least the following defect:
a processing server is often allocated multiple tasks, which may be processed locally and simultaneously, and the tasks may interfere with one another while they are being processed.
Disclosure of Invention
In order to avoid mutual interference among multiple tasks, embodiments of the present application provide a method, an apparatus, and a system for processing a task. The technical solutions are as follows:
in a first aspect, the present application provides a method of processing a task, the method comprising:
receiving a processing request message, where the processing request message includes resource demand information, a task to be processed, a processing program for processing the task, and indication information indicating that a container is to be used;
allocating a container according to the indication information, and allocating resources for processing the task according to the resource demand information;
and processing the task in the container through the resources and the processing program.
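As a minimal, illustrative sketch of these three steps (all class and function names here are hypothetical, and the container and resource allocation are simulated stubs rather than calls into a real container runtime):

```python
from dataclasses import dataclass

@dataclass
class ProcessingRequest:
    resource_demand: dict   # e.g. {"cpus": 2, "gpus": 1}
    task: str               # the task to be processed (placeholder payload)
    program: str            # name of the processing program
    use_container: bool     # indication information for using a container

def allocate_container() -> dict:
    # Stand-in for a real container runtime call (e.g. creating a
    # container); returns a handle describing the isolated environment.
    return {"container_id": "c-0001"}

def allocate_resources(demand: dict) -> dict:
    # Stand-in for reserving CPUs/GPUs/memory on the processing device.
    return dict(demand)

def handle_request(req: ProcessingRequest) -> str:
    # Step 1: `req` has been received.
    # Step 2: allocate a container per the indication information,
    # and resources per the resource demand information.
    container = allocate_container() if req.use_container else None
    resources = allocate_resources(req.resource_demand)
    # Step 3: process the task in the container via the resources
    # and the processing program (simulated here as a status string).
    where = container["container_id"] if container else "host"
    return f"{req.program} processed {req.task!r} in {where} with {resources}"
```

The point of the sketch is the ordering: the container exists before the processing program runs, so the program's execution is confined to it from the start.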
Optionally, the container includes a storage location of a graphics processing unit (GPU) driver, and the resources include a GPU;
the processing the task to be processed in the container through the resource and the processing program comprises:
calling the GPU driver according to the storage position of the GPU driver;
and driving the GPU to run the processing program in the container through the GPU driver, and processing the task to be processed by using the processing program.
Optionally, after the task to be processed is processed in the container through the resources and the processing program, the method further includes:
sending resource situation information to the management server, where the resource situation information includes at least the current number of idle resources.
Optionally, the container further includes an identifier of a GPU, and before the GPU driver drives the GPU to run the processing program in the container, the method further includes:
mapping the GPU corresponding to the identifier of the GPU into the container.
Optionally, the task to be processed is a deep learning task.
In a second aspect, the present application provides a method of processing a task, the method comprising:
receiving a resource unit sent by a management device, where the resource unit includes the number of idle resources in a processing device and a device identifier of the processing device, the resource unit being generated by the management device according to resource situation information sent by the processing device, and the resource situation information including the number of idle resources;
acquiring a task to be processed, where the number of resources included in the resource demand information corresponding to the task is less than or equal to the number of idle resources;
and sending a processing request message to the processing device, where the processing request message includes the resource demand information, the task to be processed, a processing program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
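The second-aspect flow on the task device can be sketched as follows; the `ResourceUnit` and `PendingTask` structures and the first-fit selection are illustrative assumptions, since the application fixes neither a message format nor a matching policy:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResourceUnit:
    free_resources: int   # number of idle resources on the processing device
    device_id: str        # device identifier of the processing device

@dataclass
class PendingTask:
    task: str
    required: int         # resource quantity from the resource demand information
    program: str

def build_processing_request(unit: ResourceUnit,
                             pending: list) -> Optional[dict]:
    # Step 1 (receiving the resource unit) is the call itself.
    # Step 2: acquire a task whose demand fits the idle resources.
    for t in pending:
        if t.required <= unit.free_resources:
            # Step 3: assemble the processing request message; the device
            # identifier lets the management device forward it correctly.
            return {
                "device_id": unit.device_id,
                "resource_demand": t.required,
                "task": t.task,
                "program": t.program,
                "use_container": True,  # indication information
            }
    return None  # no pending task fits this resource unit
```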
In a third aspect, the present application provides an apparatus for processing tasks, comprising:
a receiving module, configured to receive a processing request message, where the processing request message includes resource demand information, a task to be processed, a processing program for processing the task, and indication information indicating that a container is to be used;
an allocation module, configured to allocate a container according to the indication information and to allocate resources for processing the task according to the resource demand information;
and a processing module, configured to process the task in the container through the resources and the processing program.
Optionally, the container includes a storage location of a graphics processing unit (GPU) driver, and the resources include a GPU;
the processing module comprises:
a calling unit, configured to call the GPU driver according to the storage location of the GPU driver;
and a processing unit, configured to drive, through the GPU driver, the GPU to run the processing program in the container, and to process the task using the processing program.
Optionally, the apparatus further comprises:
a sending module, configured to send resource situation information to the management server, where the resource situation information includes at least the current number of idle resources.
Optionally, the container further includes an identifier of the GPU, and the apparatus further includes:
a mapping module, configured to map the GPU corresponding to the identifier of the GPU into the container.
Optionally, the task to be processed is a deep learning task.
In a fourth aspect, the present application provides an apparatus for processing a task, the apparatus comprising:
a receiving module, configured to receive a resource unit sent by a management device, where the resource unit includes the number of idle resources in a processing device and a device identifier of the processing device, the resource unit being generated by the management device according to resource situation information sent by the processing device, and the resource situation information including the number of idle resources;
an acquisition module, configured to acquire a task to be processed, where the number of resources included in the resource demand information corresponding to the task is less than or equal to the number of idle resources;
and a sending module, configured to send a processing request message to the processing device, where the processing request message includes the resource demand information, the task to be processed, a processing program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
In a fifth aspect, the present application provides a system for processing tasks, the system comprising the apparatus of the third aspect and the apparatus of the fourth aspect.
In a sixth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium for storing a computer program, the computer program being loaded by a processor to execute the method of the first aspect, any optional implementation of the first aspect, or the second aspect.
The technical solutions provided in the embodiments of the present application can have the following beneficial effects:
the task to be processed is processed in a container through the resources and the processing program, so that when multiple tasks are processed simultaneously on the same processing device, each task is processed in its own container; the containers isolate the tasks from one another, avoiding mutual interference among the tasks.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1-1 is a schematic diagram of an apparatus cluster framework provided in an embodiment of the present application;
fig. 1-2 are schematic diagrams of a deep learning platform architecture provided in an embodiment of the present application;
FIG. 2 is a flowchart of a method for processing tasks according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for processing tasks provided by embodiments of the present application;
FIG. 4 is a flow chart of another method for processing tasks provided by embodiments of the present application;
FIG. 5 is a schematic diagram of an apparatus for processing tasks according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an apparatus for processing tasks according to an embodiment of the present disclosure;
FIG. 7 is a system diagram for processing tasks according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
The above drawings show specific embodiments of the present application, which are described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concept in any way, but rather to illustrate it to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Referring to fig. 1-1, an embodiment of the present application provides an apparatus cluster framework, where the framework includes:
the task device comprises a management device, a task device and a plurality of processing devices, wherein the management device can be connected with each processing device through a network, the management device can also be connected with the task device through the network, and the network connection can be wired connection or wireless connection.
Each processing device includes computing resources, which may be at least one of a central processing unit (CPU), a graphics processing unit (GPU), memory, and the like.
Each processing device is configured to obtain its current resource situation information, where the resource situation information includes at least the current number of idle resources in the processing device and may also include the current number of used resources, and to send the resource situation information to the management device.
Optionally, the number of idle resources may include at least one of the number of idle CPUs, the number of idle GPUs, the idle memory capacity, and the like. The number of used resources may include at least one of the number of used CPUs, the number of used GPUs, and the used memory capacity.
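A sketch of how a processing device might assemble such resource situation information; deriving idle CPUs from the host CPU count and passing GPU totals in as parameters are assumptions of this sketch (probing GPUs would require a runtime-specific library such as NVML, which is not assumed here):

```python
import os

def collect_resource_situation(used_cpus: int, used_gpus: int,
                               total_gpus: int) -> dict:
    # Idle CPUs are derived from the host CPU count; GPU totals are
    # passed in because probing GPUs is runtime-specific.
    total_cpus = os.cpu_count() or 1
    return {
        "free_cpus": max(total_cpus - used_cpus, 0),
        "used_cpus": used_cpus,
        "free_gpus": max(total_gpus - used_gpus, 0),
        "used_gpus": used_gpus,
    }
```

The resulting dictionary is the payload the processing device would report to the management device whenever its resource usage changes.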
And the management device is used for receiving the resource condition information sent by the processing device, generating a resource unit according to the resource condition information, wherein the resource unit comprises the number of idle resources in the processing device and the device identifier of the processing device, and sending the resource unit to the task device.
The task equipment comprises at least one task to be processed, at least one processing program for processing the task to be processed and resource demand information corresponding to each task to be processed. The resource requirement information corresponding to the to-be-processed task may include the number of resources required for processing the to-be-processed task.
Optionally, the amount of resources required for processing the task to be processed includes at least one of the number of CPUs, the number of GPUs, and the memory capacity.
The task device is configured to receive the resource unit and match it against the resource demand information corresponding to each task to be processed, obtaining the resource demand information corresponding to one task that the resource unit can satisfy; to select, from the at least one processing program, a processing program for processing that task; and to send a processing request message to the management device, where the processing request message includes the device identifier in the resource unit, the resource demand information corresponding to the task, the selected processing program, and indication information indicating that a container is to be used.
That the resource unit can satisfy the resource demand information corresponding to the task means that the number of idle resources included in the resource unit is greater than or equal to the number of resources, included in the resource demand information, required for processing the task.
And the management device is used for receiving the processing request message and forwarding the processing request message to the processing device corresponding to the device identifier according to the device identifier included in the processing request message.
The processing device is configured to receive the processing request message, which includes the resource demand information, the task to be processed, the processing program for processing the task, and the indication information indicating that a container is to be used; to allocate a container according to the indication information and allocate computing resources for processing the task according to the resource demand information; and to process the task in the container through the computing resources and the processing program.
Optionally, the task to be processed may be a deep learning task or the like. Multiple tasks can be processed on the same processing device at the same time, but each task is processed in its own container, so the containers isolate the tasks from one another and mutual interference among the tasks is avoided.
Optionally, the computing resources may include GPU resources, so that the GPU resources may be used to process the deep learning task when processing the deep learning task.
Optionally, referring to fig. 1-2, the device cluster framework may be a deep learning platform, where the deep learning platform includes a kube-tasks node, a Master node, and a plurality of Slave nodes. The task device may be a kube-tasks node, the management device may be a Master node, and the processing device may be a Slave node.
A network connection is established between each Slave node and the Master node, and a network connection is established between the Master node and the kube-tasks node. The kube-tasks node is a container orchestration framework that includes resource demand information corresponding to multiple deep learning tasks.
For each Slave node: the Slave node sends its current resource situation information to the Master node, where the resource situation information includes at least the current number of idle resources on the Slave node and may also include the current number of used resources. The Master node receives the resource situation information, generates a resource unit that includes the number of idle resources on the Slave node and the device identifier of the Slave node, and sends the resource unit to the kube-tasks node. The kube-tasks node receives the resource unit, matches it against the resource demand information corresponding to each deep learning task to obtain the resource demand information corresponding to one deep learning task, selects an Executor for processing that task from the at least one processing program (Executor), and sends a processing request message to the Master node, where the message includes the device identifier in the resource unit, the resource demand information corresponding to the deep learning task, the selected Executor, and indication information indicating that a container is to be used. The Master node receives the processing request message and forwards it to the Slave node. The Slave node receives the processing request message, allocates a container according to the indication information, allocates computing resources for processing the deep learning task according to the resource demand information, and processes the deep learning task in the container through the computing resources and the Executor.
Referring to fig. 2, the present application provides a method of processing a task, the method comprising:
step 201: and receiving a processing request message, wherein the processing request message comprises resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task and indication information for indicating the use of the container.
Step 202: and allocating a container according to the indication information, and allocating resources for processing the tasks to be processed according to the resource demand information.
Step 203: the task to be processed is processed in the container by the resource and the handler.
In the embodiment of the application, the to-be-processed tasks are processed in the containers through the resources and the processing programs, so that when a plurality of tasks are simultaneously processed in the same processing device, each task is processed in the corresponding container by the processing device, the tasks are isolated from each other through the containers, and the mutual influence among the tasks is avoided.
In an optional embodiment of the present application, the resource requirement information may indicate whether the number of idle resources meets the resource requirement for processing the task, for example by carrying a comparison result between the number of idle resources and the number of resources required for processing the task.
Optionally, the task to be processed may be a deep learning task or the like, and the computing resources may include GPU resources, so that GPU resources can be used when processing a deep learning task. Referring to fig. 3, an embodiment of the present application provides a method for processing a task. The method may be applied to the device cluster framework shown in fig. 1-1 or the deep learning platform shown in fig. 1-2, and the task processed by the method may be a deep learning task. The method includes:
step 301: the processing device obtains its current resource status information and sends the resource status information to the management device.
The resource situation information includes at least the current number of idle resources in the processing device and may also include the current number of used resources in the processing device.
Optionally, the amount of idle resources may include at least one of an idle CPU number, an idle GPU number, an idle memory capacity, and the like. The used resource amount includes at least one of the number of used CPUs, the number of used GPUs, and the used memory capacity.
The processing device is any one of the processing devices in the device cluster framework. Whenever its resource usage changes, the processing device can obtain its current resource situation information and send it to the management device.
Step 302: the management device receives the resource situation information sent by the processing device, generates a resource unit according to the resource situation information, where the resource unit includes the number of idle resources in the processing device and the device identifier of the processing device, and sends the resource unit to the task device.
The task equipment comprises at least one task to be processed, at least one processing program for processing the task to be processed and resource demand information corresponding to each task to be processed. The resource requirement information corresponding to the to-be-processed task may include the number of resources required for processing the to-be-processed task.
Optionally, the task to be processed may be a deep learning task, and the task in the task device may be set in the task device by a technician. The amount of resources required for processing the task to be processed includes at least one of the number of CPUs, the number of GPUs, and the memory capacity.
Step 303: the task device receives the resource unit and matches it against the resource demand information corresponding to each task to be processed, obtaining the resource demand information corresponding to one task that the resource unit can satisfy.
Specifically, the task device compares the number of idle resources included in the resource unit with the number of resources, included in each piece of resource demand information, required for processing the corresponding task, keeps the resource demand information whose required number is less than or equal to the number of idle resources, and selects one piece of resource demand information from those kept. The task device then sends a processing request message to the management device, where the processing request message includes the device identifier in the resource unit, the selected resource demand information, the selected processing program, and indication information indicating that a container is to be used.
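The comparison and selection described above can be sketched as follows; representing resources as per-kind dictionaries and choosing the first fit are illustrative assumptions, since the application does not prescribe how to choose among several matching requirements:

```python
def unit_satisfies(free: dict, demand: dict) -> bool:
    # A resource unit satisfies a piece of resource demand information when
    # every demanded quantity (CPUs, GPUs, memory) is covered by the
    # corresponding idle quantity.
    return all(free.get(kind, 0) >= amount for kind, amount in demand.items())

def select_demand_index(free: dict, demands: list) -> int:
    # Return the index of the first demand the idle resources can cover,
    # or -1 when none fits. First-fit is an illustrative policy; a real
    # scheduler might prefer best-fit or priority ordering.
    for i, demand in enumerate(demands):
        if unit_satisfies(free, demand):
            return i
    return -1
```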
Step 304: the management device receives the processing request message and forwards it, according to the device identifier included in the message, to the processing device corresponding to that identifier.
Step 305: the processing device receives the processing request message, where the processing request message includes the resource demand information, the task to be processed, the processing program for processing the task, and the indication information indicating that a container is to be used.
Optionally, the container may include a storage location of the GPU driver, and may further include an identifier of the GPU. The identification of the GPU may be the number of the GPU.
Step 306: the processing device allocates a container according to the indication information, allocates computing resources for processing the task according to the resource demand information, and processes the task in the container through the computing resources and the processing program.
Optionally, when the computing resources include a GPU, the processing device obtains the identifier of the GPU from the container, maps the GPU corresponding to that identifier into the container, and can call the GPU driver according to the storage location of the GPU driver in the container; the GPU driver then drives the GPU to run the processing program in the container, and the processing program is used to process the task.
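One way to realize the GPU mapping and driver exposure described above is to construct the container launch command accordingly. The Docker flags and the `/dev/nvidiaN` device naming below are assumptions of this sketch: the application names neither a container runtime nor a GPU vendor.

```python
def gpu_container_command(image: str, gpu_ids: list,
                          driver_dir: str, program: str) -> list:
    # Build a `docker run` invocation that maps the identified GPUs into
    # the container and bind-mounts the driver's storage location
    # read-only, so the processing program can call the GPU driver from
    # inside the container.
    cmd = ["docker", "run", "--rm"]
    for gid in gpu_ids:
        cmd += ["--device", f"/dev/nvidia{gid}"]      # map GPU into container
    cmd += ["-v", f"{driver_dir}:{driver_dir}:ro",    # expose driver location
            image, program]
    return cmd
```

The function only assembles the argument list; a processing device would pass it to a process launcher, which keeps the sketch testable without a container runtime installed.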
Optionally, while processing the task, the processing device may also obtain its current resource situation information and send it to the management device, where the resource situation information includes at least the current number of idle resources.
In the embodiments of the present application, the processing request message sent by the management device to the processing device includes indication information indicating that a container is to be used, so the processing device can allocate a container for the task according to the indication information and process the task in the container through the resources and the processing program. Therefore, when multiple tasks are processed on the same processing device, each task is processed in its own container; the containers isolate the tasks from one another, avoiding mutual interference among the tasks. In addition, because the container includes the storage location of the GPU driver, the GPU driver can be called through that storage location, the GPU can be mapped into the container through the GPU driver, and the task can be processed by the GPU in the container. The task may be a deep learning task, so the deep learning task can be processed using the GPU.
Referring to fig. 4, the present application provides a method of processing a task, the method comprising:
step 401: receiving a resource unit sent by a management device, where the resource unit includes the number of idle resources in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to resource situation information sent by the processing device, where the resource situation information includes the number of the idle resources.
Step 402: and acquiring the task to be processed, wherein the resource quantity included in the resource demand information corresponding to the task to be processed is less than or equal to the idle resource quantity.
Step 403: and sending a processing request message to the processing device, wherein the processing request message comprises the resource requirement information, the tasks to be processed, the processing programs for processing the tasks to be processed and the indication information for indicating the use of the container, so that the processing device processes the tasks to be processed.
In the embodiment of the application, the processing request message sent to the processing device includes the resource demand information, the to-be-processed task, the processing program for processing the to-be-processed task, and the indication information for indicating to use the container. The processing device can therefore allocate a container to the to-be-processed task according to the indication information, process the to-be-processed task in the container, and isolate the tasks in the processing device from each other through the containers, thereby avoiding mutual influence among the tasks.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 5, the present application provides an apparatus 500 for processing a task, the apparatus 500 comprising:
a receiving module 501, configured to receive a processing request message, where the processing request message includes resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and indication information for indicating to use a container;
an allocating module 502, configured to allocate a container according to the indication information, and allocate resources for processing the to-be-processed task according to the resource demand information;
a processing module 503, configured to process the to-be-processed task in the container through the resource and the handler.
Optionally, the container includes a storage location of a graphics processing unit (GPU) driver, and the resource includes a GPU;
the processing module 503 includes:
a calling unit, configured to call the GPU driver according to the storage location of the GPU driver; and
a processing unit, configured to drive, through the GPU driver, the GPU to run the processing program in the container, and process the to-be-processed task using the processing program.
Optionally, the apparatus 500 further includes:
and the sending module is used for sending resource condition information to the management server, wherein the resource condition information at least comprises the number of the current idle resources.
Optionally, the container further includes an identifier of the GPU, and the apparatus 500 further includes:
a mapping module, configured to map the GPU corresponding to the identifier of the GPU into the container.
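In container runtimes such as Docker, mapping an identified GPU and the driver's storage location into a container is commonly done with device and volume flags. The sketch below only assembles such a command line; the device paths and flags reflect common NVIDIA practice predating the container toolkit and are assumptions, not the patent's mechanism:

```python
def docker_run_with_gpu(image, gpu_id, driver_dir, handler):
    """Assemble a `docker run` command that maps the GPU identified by
    `gpu_id` and the GPU driver's storage location into the container.
    Paths are illustrative; real deployments vary by driver version."""
    return ["docker", "run", "--rm",
            # map the GPU corresponding to its identifier into the container
            "--device", f"/dev/nvidia{gpu_id}",
            "--device", "/dev/nvidiactl",
            "--device", "/dev/nvidia-uvm",
            # expose the driver's storage location so it can be called inside
            "-v", f"{driver_dir}:{driver_dir}:ro",
            image, "python", handler]

cmd = docker_run_with_gpu("dl-image:latest", 0, "/usr/lib/nvidia-driver",
                          "train.py")
print(" ".join(cmd))
```

Modern setups would more likely pass `--gpus` via the NVIDIA Container Toolkit; the explicit device/volume form above simply makes the "map GPU and driver location into the container" step visible.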
Optionally, the task to be processed is a deep learning task.
In the embodiment of the application, the to-be-processed task is processed in the container through the resource and the processing program. Therefore, when a plurality of tasks are processed simultaneously in the same processing device, each task is processed by the processing device in its own container, and the tasks are isolated from each other through the containers, thereby avoiding mutual influence among the tasks.
Referring to fig. 6, an embodiment of the present application provides an apparatus 600 for processing a task, where the apparatus 600 includes:
a receiving module 601, configured to receive a resource unit sent by a management device, where the resource unit includes an idle resource number in a processing device and a device identifier of the processing device, the resource unit is generated by the management device according to resource situation information sent by the processing device, and the resource situation information includes the idle resource number;
an obtaining module 602, configured to obtain a task to be processed, where resource requirement information corresponding to the task to be processed includes a resource quantity that is less than or equal to the idle resource quantity;
a sending module 603, configured to send a processing request message to the processing device, where the processing request message includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and indication information indicating to use a container, so that the processing device processes the to-be-processed task.
In the embodiment of the application, the processing request message sent to the processing device includes the resource demand information, the to-be-processed task, the processing program for processing the to-be-processed task, and the indication information for indicating to use the container. The processing device can therefore allocate a container to the to-be-processed task according to the indication information, process the to-be-processed task in the container, and isolate the tasks in the processing device from each other through the containers, thereby avoiding mutual influence among the tasks.
Referring to fig. 7, an embodiment of the present application provides a system 700 for processing tasks. The system 700 includes the apparatus described in fig. 5 and the apparatus described in fig. 6, where the apparatus described in fig. 5 may be a processing device 701 and the apparatus described in fig. 6 may be a task device 702.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating a terminal 800 according to an exemplary embodiment of the present application. The terminal 800 may be the processing device, the management device, or the task device in any of the above embodiments. In practice, the terminal may be a mobile terminal, a notebook computer, a desktop computer, or the like, and the mobile terminal may be a mobile phone, a tablet computer, or the like. The terminal 800 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
In some embodiments, the terminal 800 may further include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a touch screen display 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The radio frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 805, disposed on the front panel of the terminal 800; in other embodiments, there may be at least two displays 805, respectively disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 800. The display 805 may even be arranged in an irregular, non-rectangular pattern, i.e., a shaped screen. The display 805 can be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 801 for processing or to the radio frequency circuit 804 to realize voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 800. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 807 may also include a headphone jack.
The positioning component 808 is used to determine the current geographic position of the terminal 800 for navigation or LBS (Location Based Service). The positioning component 808 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815 and proximity sensor 816.
The acceleration sensor 811 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 801 may control the touch screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 may cooperate with the acceleration sensor 811 to acquire a 3D motion of the user with respect to the terminal 800. From the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 813 may be disposed on the side bezel of terminal 800 and/or underneath touch display 805. When the pressure sensor 813 is disposed on the side frame of the terminal 800, the holding signal of the user to the terminal 800 can be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the touch display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 814 is used to collect a user's fingerprint, and the processor 801 identifies the user according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 itself identifies the user according to the collected fingerprint. Upon identifying that the user's identity is trusted, the processor 801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 814 may be disposed on the front, back, or side of the terminal 800. When a physical button or a vendor logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the vendor logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch screen 805 based on the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
The proximity sensor 816, also known as a distance sensor, is typically provided on the front panel of the terminal 800 and is used to collect the distance between the user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually decreases, the processor 801 controls the touch display 805 to switch from the screen-on state to the screen-off state; when the proximity sensor 816 detects that the distance gradually increases, the processor 801 controls the touch display 805 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting of terminal 800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (12)
1. A method of processing a task, the method comprising:
receiving a processing request message, wherein the processing request message comprises resource demand information, a task to be processed, a processing program for processing the task to be processed and indication information for indicating to use a container;
allocating a container according to the indication information, and allocating resources for processing the task to be processed according to the resource demand information;
and processing the task to be processed in the container through the resources and the processing program.
2. The method of claim 1, wherein the container includes a storage location for a Graphics Processor (GPU) driver, and wherein the resource includes a GPU;
the processing the task to be processed in the container through the resource and the processing program comprises:
calling the GPU driver according to the storage position of the GPU driver;
and driving the GPU to run the processing program in the container through the GPU driver, and processing the task to be processed by using the processing program.
3. The method of claim 1 or 2, wherein after the processing the task to be processed in the container through the resources and the processing program, the method further comprises:
sending resource condition information to a management server, wherein the resource condition information at least comprises the number of currently idle resources.
4. The method of claim 2, wherein the container further includes an identification of the GPU, and before the driving the GPU to run the processing program in the container through the GPU driver, the method further comprises:
mapping the GPU corresponding to the identification of the GPU to the container.
5. The method of claim 1, 2 or 4, wherein the task to be processed is a deep learning task.
6. A method of processing a task, the method comprising:
receiving a resource unit sent by a management device, where the resource unit includes an idle resource number in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to resource situation information sent by the processing device, where the resource situation information includes the idle resource number;
acquiring a task to be processed, wherein the resource quantity included in the resource demand information corresponding to the task to be processed is less than or equal to the idle resource quantity;
sending a processing request message to the processing device, where the processing request message includes the resource demand information, the to-be-processed task, a processing program for processing the to-be-processed task, and indication information for indicating to use a container, so that the processing device processes the to-be-processed task.
7. An apparatus for processing a task, the apparatus comprising:
the system comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving a processing request message, and the processing request message comprises resource demand information, a task to be processed, a processing program for processing the task to be processed and indication information for indicating a container to be used;
an allocation module, configured to allocate a container according to the indication information, and allocate resources for processing the task to be processed according to the resource demand information; and
a processing module, configured to process the task to be processed in the container through the resources and the processing program.
8. The apparatus of claim 7, wherein the container includes a storage location for a Graphics Processor (GPU) driver, and wherein the resources include a GPU;
the processing module comprises:
a calling unit, configured to call the GPU driver according to the storage location of the GPU driver; and
a processing unit, configured to drive, through the GPU driver, the GPU to run the processing program in the container, and process the task to be processed using the processing program.
9. The apparatus of claim 7 or 8, wherein the apparatus further comprises:
and the sending module is used for sending resource condition information to the management server, wherein the resource condition information at least comprises the number of the current idle resources.
10. The apparatus of claim 8, wherein the container further includes an identification of the GPU, and the apparatus further comprises:
a mapping module, configured to map the GPU corresponding to the identification of the GPU to the container.
11. The apparatus of claim 7, 8 or 10, wherein the task to be processed is a deep learning task.
12. An apparatus for processing a task, the apparatus comprising:
a receiving module, configured to receive a resource unit sent by a management device, where the resource unit includes an idle resource number in a processing device and a device identifier of the processing device, the resource unit is generated by the management device according to resource situation information sent by the processing device, and the resource situation information includes the idle resource number;
an acquisition module, configured to acquire a task to be processed, wherein the quantity of resources included in the resource demand information corresponding to the task to be processed is less than or equal to the quantity of idle resources;
a sending module, configured to send a processing request message to the processing device, where the processing request message includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and indication information indicating to use a container, so that the processing device processes the to-be-processed task.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810701224.3A CN110659127A (en) | 2018-06-29 | 2018-06-29 | Method, device and system for processing task |
PCT/CN2019/093391 WO2020001564A1 (en) | 2018-06-29 | 2019-06-27 | Method, apparatus, and system for processing tasks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110659127A true CN110659127A (en) | 2020-01-07 |
Family
ID=68985837
Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810701224.3A (CN110659127A, pending) | Method, device and system for processing task | 2018-06-29 | 2018-06-29 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110659127A (en) |
WO (1) | WO2020001564A1 (en) |
Families Citing this family (3)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114124405B * | 2020-07-29 | 2023-06-09 | Tencent Technology (Shenzhen) Co., Ltd. | Service processing method, system, computer equipment and computer readable storage medium |
CN113656143A * | 2021-08-16 | 2021-11-16 | Shenzhen Ruichi Information Technology Co., Ltd. | Method and system for realizing a pass-through graphics card in an Android container |
CN116755779B * | 2023-08-18 | 2023-12-05 | Tencent Technology (Shenzhen) Co., Ltd. | Method, device, equipment, storage medium and chip for determining cycle interval |
Citations (8)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170083380A1 * | 2015-09-18 | 2017-03-23 | Salesforce.Com, Inc. | Managing resource allocation in a stream processing framework |
CN106708622A * | 2016-07-18 | 2017-05-24 | Tencent Technology (Shenzhen) Co., Ltd. | Cluster resource processing method and system, and resource processing cluster |
CN106886455A * | 2017-02-23 | 2017-06-23 | Beijing TuSimple Future Technology Co., Ltd. | Method and system for realizing user isolation |
CN107247629A * | 2017-07-04 | 2017-10-13 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Cloud computing system and cloud computing method and device for controlling server |
CN107343000A * | 2017-07-04 | 2017-11-10 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and apparatus for handling task |
CN107682206A * | 2017-11-02 | 2018-02-09 | Beijing Zhongdian Puhua Information Technology Co., Ltd. | Deployment method and system of a business process management system based on microservices |
CN107783818A * | 2017-10-13 | 2018-03-09 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Deep learning task processing method, device, equipment and storage medium |
CN108062246A * | 2018-01-25 | 2018-05-22 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Resource scheduling method and device for a deep learning framework |
Family Cites Families (2)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107343045B * | 2017-07-04 | 2021-03-19 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Cloud computing system and cloud computing method and device for controlling server |
CN107450961B * | 2017-09-22 | 2020-10-16 | Jinan Junda Information Technology Co., Ltd. | Distributed deep learning system based on Docker containers, and construction and working methods thereof |
- 2018-06-29: Chinese application CN201810701224.3A filed; published as CN110659127A (status: pending)
- 2019-06-27: PCT application PCT/CN2019/093391 filed; published as WO2020001564A1
Non-Patent Citations (1)

Title |
---|
XIAO, Yi et al.: "Research on a Deep Learning Container Cloud for GPU Resources", Journal of Communication University of China (Natural Science Edition) * |
Cited By (9)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI783355B * | 2020-08-12 | 2022-11-11 | China UnionPay Co., Ltd. | Distributed training method and apparatus of deep learning model |
CN112559182A * | 2020-12-16 | 2021-03-26 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Resource allocation method, device, equipment and storage medium |
CN112559182B * | 2020-12-16 | 2024-04-09 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Resource allocation method, device, equipment and storage medium |
CN112866404A * | 2021-02-03 | 2021-05-28 | Shiruofei Information Technology (Shanghai) Co., Ltd. | Semi-cloud system and execution method |
CN112866404B * | 2021-02-03 | 2023-01-24 | Shiruofei Information Technology (Shanghai) Co., Ltd. | Semi-cloud system and execution method |
CN113867970A * | 2021-12-03 | 2021-12-31 | Suzhou Inspur Intelligent Technology Co., Ltd. | Container acceleration device, method and equipment and computer readable storage medium |
WO2023160629A1 * | 2022-02-25 | 2023-08-31 | Origin Quantum Computing Technology (Hefei) Co., Ltd. | Processing device and method for quantum control system, quantum computer, medium, and electronic device |
CN115470915A * | 2022-03-16 | 2022-12-13 | Hefei Origin Quantum Computing Technology Co., Ltd. | Server system of quantum computer and implementation method thereof |
CN115470915B * | 2022-03-16 | 2024-04-05 | Origin Quantum Computing Technology (Hefei) Co., Ltd. | Server system of quantum computer and implementation method thereof |
Also Published As
Publication number | Publication date |
---|---|
WO2020001564A1 (en) | 2020-01-02 |
Similar Documents
Publication | Title |
---|---|
CN111225042B (en) | Data transmission method and device, computer equipment and storage medium |
CN110659127A (en) | Method, device and system for processing task |
CN110288689B (en) | Method and device for rendering electronic map |
CN110784370B (en) | Method and device for testing equipment, electronic equipment and medium |
CN108848492B (en) | Method, device, terminal and storage medium for starting user identity identification card |
CN113076051A (en) | Slave control terminal synchronization method, device, terminal and storage medium |
CN110673944A (en) | Method and device for executing task |
CN108401194B (en) | Time stamp determination method, apparatus and computer-readable storage medium |
CN110086814B (en) | Data acquisition method and device and storage medium |
CN112612539A (en) | Data model unloading method and device, electronic equipment and storage medium |
CN111324293B (en) | Storage system, data storage method, data reading method and device |
CN111881423A (en) | Method, device and system for limiting function use authorization |
CN111914985A (en) | Configuration method and device of deep learning network model and storage medium |
CN112181915A (en) | Method, device, terminal and storage medium for executing service |
CN111694521B (en) | Method, device and system for storing file |
CN114594885A (en) | Application icon management method, device and equipment and computer readable storage medium |
CN110471613B (en) | Data storage method, data reading method, device and system |
CN108632459B (en) | Communication information notification method and device and computer readable storage medium |
CN112260845A (en) | Method and device for accelerating data transmission |
CN111222124B (en) | Method, device, equipment and storage medium for using authority distribution |
CN111193600B (en) | Method, device and system for taking over service |
CN111163262B (en) | Method, device and system for controlling mobile terminal |
CN109981310B (en) | Resource management method, device and storage medium |
CN110764808B (en) | Client upgrade detection method, device and computer readable storage medium |
CN111580892B (en) | Method, device, terminal and storage medium for calling service components |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200107 |