WO2020001564A1 - Method, apparatus, and system for processing tasks - Google Patents
Method, apparatus, and system for processing tasks
- Publication number
- WO2020001564A1 (PCT/CN2019/093391; CN2019093391W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- processing
- resource
- task
- container
- gpu
- Prior art date
Classifications
- G06F9/5083—Techniques for rebalancing the load in a distributed system (under G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]; G06F9/46—Multiprogramming arrangements; G06F—Electric digital data processing)
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines (under G06F9/44—Arrangements for executing specific programs)
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
Definitions
- the present application relates to the field of computer technology, and in particular, to a method, an apparatus, and a system for processing tasks.
- the server cluster framework consists of multiple servers.
- the server cluster framework can provide a large number of computing resources, which can be used for processing tasks. For example, server clusters can currently be used to handle deep learning tasks.
- the server cluster framework includes a management server and multiple processing servers for processing tasks.
- the management server can collect the resource situation of each processing server and, according to those resource conditions, assign a task to be processed to a certain processing server, which then processes the task using the resources it includes.
- multiple tasks are often assigned to the same processing server and may be processed locally at the same time, in which case the tasks may affect one another.
- the embodiments of the present application provide a method, a device, and a system for processing tasks, so as to avoid mutual interference between multiple tasks.
- the technical solution is as follows:
- the present application provides a method for processing a task, the method includes:
- the processing request message includes resource requirement information, a to-be-processed task and a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container;
- the container includes a storage location of a graphics processor GPU driver, and the resource includes a GPU;
- the processing the to-be-processed task through the resource and the processing program in the container includes:
- the GPU driver is used to drive the GPU to run the processing program in the container, and the processing program is used to process the to-be-processed task.
- the method further includes:
- the container further includes an identifier of the GPU, and before driving the GPU to run the processing program by the GPU driver in the container, the method further includes:
- the to-be-processed task is a deep learning task.
- the present application provides a method for processing a task, the method includes:
- the resource unit includes a quantity of idle resources in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to resource situation information sent by the processing device
- the resource situation information includes the number of the idle resources
- the resource requirement information corresponding to the to-be-processed task includes a quantity of resources less than or equal to the quantity of idle resources;
- the processing request message includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container, so that the processing device processes the pending task.
- the present application provides a device for processing tasks, the device including:
- a receiving module configured to receive a processing request message, where the processing request message includes resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing use of a container;
- An allocation module configured to allocate a container according to the instruction information, and allocate a resource for processing the to-be-processed task according to the resource demand information;
- a processing module configured to process the to-be-processed task in the container through the resource and the processing program.
- the container includes a storage location of a graphics processor GPU driver, and the resource includes a GPU;
- the processing module includes:
- a calling unit configured to call the GPU driver according to the storage location of the GPU driver
- a processing unit configured to drive the GPU to run the processing program through the GPU driver in the container, and use the processing program to process the to-be-processed task.
- the device further includes:
- a sending module is configured to send resource situation information to the management server, where the resource situation information includes at least a current amount of idle resources.
- the container further includes a GPU identifier
- the device further includes
- a mapping module configured to map a GPU corresponding to the identifier of the GPU into the container.
- the to-be-processed task is a deep learning task.
- the present application provides a device for processing tasks, the device including:
- a receiving module configured to receive a resource unit sent by a management device, where the resource unit includes a quantity of idle resources in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to resource condition information sent by the processing device, where the resource condition information includes the number of idle resources;
- An acquisition module configured to acquire a task to be processed, and the resource requirement information corresponding to the task to be processed includes a resource quantity less than or equal to the idle resource quantity;
- a sending module configured to send a processing request message to the processing device, where the processing request message includes the resource requirement information, the pending task, a processing program for processing the pending task, and instruction information for instructing the use of a container, so that the processing device processes the pending task.
- the present application provides a system for processing tasks, and the system includes the device described above.
- the embodiments of the present application provide a non-volatile computer-readable storage medium for storing a computer program, and the computer program is loaded by a processor to execute instructions of any one of the foregoing methods.
- this embodiment provides an electronic device, where the electronic device includes a processor and a memory,
- the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement instructions of any one of the foregoing methods.
- each task is processed by the processing device in its own container, so that multiple tasks are isolated from each other by the container, avoiding mutual influence between multiple tasks.
- FIG. 1-1 is a schematic diagram of a device cluster framework provided by an embodiment of the present application.
- FIG. 1-2 is a schematic diagram of a deep learning platform architecture provided by an embodiment of the present application.
- FIG. 2 is a flowchart of a method for processing a task according to an embodiment of the present application
- FIG. 3 is a flowchart of another method for processing a task according to an embodiment of the present application.
- FIG. 4 is a flowchart of another method for processing a task according to an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of an apparatus for processing tasks according to an embodiment of the present application.
- FIG. 6 is a schematic structural diagram of another apparatus for processing tasks according to an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of a system for processing tasks according to an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
- an embodiment of the present application provides a device cluster framework.
- the framework includes:
- the management device can establish a network connection with each processing device.
- the management device can also establish a network connection with the task device.
- the network connection can be a wired connection, a wireless connection, or the like.
- Each processing device includes a computing resource, and the computing resource may be at least one of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and a memory.
- for each processing device, the processing device is used to obtain its current resource status information.
- the resource status information includes at least the current number of idle resources in the processing device, and may also include the current number of used resources in the processing device.
- the resource condition information may also be sent to the management device.
- the number of idle resources may include at least one of the number of idle CPUs, the number of idle GPUs, and the amount of idle memory.
- the number of used resources includes at least one of the number of used CPUs, the number of GPUs used, and the amount of memory used.
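The resource situation information described above can be sketched as a simple record. The field names below are assumptions for illustration only, since the patent lists the quantities involved but does not prescribe a data format:

```python
from dataclasses import dataclass

# Hypothetical shape of the resource situation information a processing
# device reports; the patent requires at least the idle quantities and
# optionally the used quantities.
@dataclass
class ResourceSituation:
    device_id: str        # identifier of the reporting processing device
    idle_cpus: int        # number of idle CPUs
    idle_gpus: int        # number of idle GPUs
    idle_memory_mb: int   # amount of idle memory
    used_cpus: int = 0    # optionally, the used quantities as well
    used_gpus: int = 0
    used_memory_mb: int = 0

info = ResourceSituation(device_id="slave-1", idle_cpus=8, idle_gpus=2, idle_memory_mb=16384)
print(info.idle_gpus)  # prints 2
```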
- a management device configured to receive the resource condition information sent by a processing device and generate a resource unit according to that information.
- the resource unit includes the number of idle resources in the processing device and the device identifier of the processing device, and is sent to a task device.
- the task device includes at least one pending task, at least one processing program for processing the pending task, and resource requirement information corresponding to each pending task.
- the resource requirement information corresponding to the to-be-processed task may include the amount of resources required for processing the to-be-processed task.
- the amount of resources required to process the to-be-processed task includes at least one of the number of CPUs, the number of GPUs, and the memory capacity.
- the task device is configured to receive the resource unit and match it against the resource requirement information corresponding to each pending task; when the resource unit can satisfy the resource requirement information corresponding to a pending task,
- the task device selects a processing program for processing that to-be-processed task from the at least one processing program and sends a processing request message to the management device, where the processing request message includes the device identifier in the resource unit, the resource requirement information corresponding to the to-be-processed task, the to-be-processed task, the selected processing program, and instruction information for instructing the use of a container.
- That the resource unit can satisfy the resource requirement information corresponding to the to-be-processed task means that the number of idle resources included in the resource unit is greater than or equal to the amount of resources required to process the to-be-processed task included in the resource demand information.
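The satisfaction condition above reduces to a per-resource-type comparison. A minimal sketch, in which the dictionary keys are illustrative assumptions rather than a format the patent defines:

```python
def can_satisfy(idle: dict, required: dict) -> bool:
    # A resource unit satisfies the requirement information when every
    # required quantity is covered by the corresponding idle quantity.
    return all(idle.get(kind, 0) >= amount for kind, amount in required.items())

unit = {"cpu": 8, "gpu": 2, "memory_mb": 16384}
print(can_satisfy(unit, {"cpu": 4, "gpu": 1, "memory_mb": 8192}))  # True: enough of everything
print(can_satisfy(unit, {"gpu": 4}))                               # False: too few idle GPUs
```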
- the management device is configured to receive the processing request message, and forward the processing request message to a processing device corresponding to the device identifier according to a device identifier included in the processing request message.
- a processing device configured to receive the processing request message, where the processing request message includes task resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container; allocate a container according to the instruction information and allocate a computing resource for processing the to-be-processed task according to the task resource requirement information; and process the to-be-processed task in the container through the computing resource and the processing program.
- the to-be-processed task may be a deep learning task and the like. Multiple tasks may be processed in the same processing device at the same time, but each task is processed by the processing device in its own container, so that multiple tasks are isolated from each other by the container, avoiding the mutual influence between multiple tasks.
- the above computing resources may include GPU resources, so that GPU resources may be used to process deep learning tasks when processing deep learning tasks.
- the above device cluster framework can be a deep learning platform.
- the deep learning platform includes a kube-Mesos node, a master node, and multiple slave nodes.
- the task device may be a kube-Mesos node
- the management device may be a master node
- the processing device may be a slave node.
- a network connection is established between each Slave node and the Master node, and a network connection is established between the Master node and the kube-Mesos node.
- the kube-Mesos node is a container orchestration framework that includes resource requirement information corresponding to multiple deep learning tasks.
- for each slave node, the slave node sends its current resource status information to the master node.
- the resource status information includes at least the current idle resource quantity of the slave node and may also include the current used resource quantity of the slave node.
- the master node receives the resource situation information and generates a resource unit according to it; the resource unit includes the idle resource quantity of the slave node and the device identifier of the slave node, and is sent to the kube-Mesos node.
- the kube-Mesos node receives the resource unit and matches it against the resource requirement information corresponding to each deep learning task; when the resource unit matches the resource requirement information corresponding to a deep learning task, the kube-Mesos node selects, from the at least one processing program (Executor), an Executor for processing that deep learning task and sends a processing request message to the master node.
- the processing request message includes the device identifier in the resource unit, the resource requirement information corresponding to the deep learning task, the deep learning task, the selected Executor, and instruction information for indicating the use of a container.
- the master node receives the processing request message and forwards it to the slave node; the slave node receives the processing request message, allocates a container according to the instruction information, and allocates a processing resource for processing the deep learning task according to the task resource requirement information.
- this application provides a method for processing a task.
- the method includes:
- Step 201 Receive a processing request message, where the processing request message includes resource requirement information, a to-be-processed task and a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container.
- Step 202 Allocate a container according to the instruction information, and allocate resources for processing pending tasks according to the resource demand information.
- Step 203 Process the to-be-processed tasks in the container through the resource and the processing program.
- each task is processed by a processing device in its own container.
- the container isolates multiple tasks from each other and prevents multiple tasks from affecting each other.
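Steps 201 to 203 can be sketched as follows, using an in-process stand-in for a real container runtime. The class and field names are assumptions for illustration, not the patent's implementation; the point is that each task runs inside its own container object and shares no state with other tasks:

```python
# Minimal sketch of the processing-device side of steps 201-203.
class Container:
    def __init__(self, resources):
        self.resources = resources  # resources allocated per the requirement info

    def run(self, handler, task):
        # Step 203: run the processing program on the task inside the container.
        return handler(task)

def handle_request(message):
    # Step 201: the message carries the resource requirement information,
    # the task, its processing program, and the instruction to use a container.
    # Step 202: allocate a container and the requested resources.
    container = Container(message["resource_requirements"])
    return container.run(message["handler"], message["task"])

result = handle_request({
    "resource_requirements": {"gpu": 1},
    "task": [3, 1, 2],
    "handler": sorted,  # stand-in for a deep learning processing program
})
print(result)  # [1, 2, 3]
```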
- the resource requirement information may indicate whether the amount of idle resources meets the resource requirements of the pending task, for example, through a comparison between the amount of idle resources and the amount of resources required to process the pending task.
- the to-be-processed task may be a deep learning task and the like.
- the above computing resources may include GPU resources, so that GPU resources may be used to process deep learning tasks when processing deep learning tasks.
- referring to FIG. 3, an embodiment of the present application provides a method for processing a task. The method can be applied to the device cluster framework shown in FIG. 1-1 or the deep learning platform shown in FIG. 1-2.
- the task can be a deep learning task, including:
- Step 301 The processing device acquires its current resource situation information and sends the resource situation information to the management device.
- the resource condition information includes at least the current amount of idle resources in the processing device, and may also include the current number of resources used in the processing device.
- the number of idle resources may include at least one of the number of idle CPUs, the number of idle GPUs, and the amount of idle memory.
- the number of used resources includes at least one of the number of used CPUs, the number of GPUs used, and the amount of memory used.
- the processing device is any processing device in a device cluster framework.
- the processing device can obtain current resource condition information when its resource usage changes, and send the resource condition information to a management device.
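The change-triggered reporting described above can be sketched as follows; the `send` callback is a stand-in for the real network call to the management device, and the single-field situation record is an illustrative assumption:

```python
# Sketch of step 301 with change-triggered reporting: the processing device
# re-sends its resource situation information only when usage has changed.
class Reporter:
    def __init__(self, send):
        self.send = send   # callback standing in for the network transmission
        self.last = None   # last situation that was reported

    def on_usage(self, idle_gpus: int):
        situation = {"idle_gpus": idle_gpus}
        if situation != self.last:   # report only when the situation changed
            self.send(situation)
            self.last = situation

sent = []
r = Reporter(sent.append)
r.on_usage(2)
r.on_usage(2)   # unchanged usage: no report
r.on_usage(1)
print(len(sent))  # 2 reports sent
```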
- Step 302 The management device receives the resource condition information sent by the processing device, and generates a resource unit according to the resource condition information.
- the resource unit includes the number of idle resources in the processing device and the device identifier of the processing device, and is sent to the task device.
- the task device includes at least one pending task, at least one processing program for processing the pending task, and resource requirement information corresponding to each pending task.
- the resource requirement information corresponding to the to-be-processed task may include the amount of resources required for processing the to-be-processed task.
- the to-be-processed task may be a deep learning task, and the task in the task device may be set by a technician in the task device.
- the number of resources required to process the to-be-processed task includes at least one of the number of CPUs, the number of GPUs, and the memory capacity.
- Step 303 The task device receives the resource unit and matches it against the resource requirement information corresponding to each pending task, so as to find a pending task whose resource requirement information the resource unit can satisfy.
- the task device compares the amount of idle resources included in the resource unit with the amount of resources required by each piece of resource requirement information, and selects a piece of resource requirement information whose required amount of resources is less than or equal to the amount of idle resources.
- the selected resource requirement information corresponds to a task to be processed, and the resource unit can satisfy that resource requirement information; the task device then sends a processing request message to the management device.
- Step 304 The management device receives the processing request message, and forwards the processing request message to the processing device corresponding to the device identifier according to the device identifier included in the processing request message.
- the processing request message includes task resource requirement information, a to-be-processed task and a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container.
- Step 305 The processing device receives the processing request message, and the processing request message includes task resource requirement information, a to-be-processed task and a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container.
- the container may include a storage location of a GPU driver, and may further include an identifier of the GPU.
- the identification of the GPU may be the number of the GPU.
- Step 306 The processing device allocates a container according to the instruction information, and allocates a computing resource for processing the to-be-processed task according to the task resource demand information; and processes the to-be-processed task in the container through the computing resource and the processing program.
- an identifier of the GPU is obtained from the container, and a GPU corresponding to the GPU identifier is mapped into the container.
- the processing device may call the GPU driver according to the storage location of the GPU driver in the container; the GPU driver is used to drive the GPU to run the processing program in the container, and the processing program is used to process the pending task.
- the processing device may further obtain its current resource condition information and send the resource condition information to the management server, where the resource condition information includes at least the number of currently idle resources.
- the processing device can allocate a container to the task to be processed and process the task in the container through the resource and the processing program.
- the container includes the storage location of the GPU driver, so that the GPU driver can be called through that storage location and the GPU can be brought into the container through the GPU driver, allowing the GPU to process pending tasks in the container.
- To-be-processed tasks can be deep learning tasks, so GPUs can be used to process deep learning tasks.
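One way a processing device might expose the identified GPU and the driver's storage location to a container is a docker-style command line. The flags below mirror real `docker run` options, but the device path, driver path, and image name are purely illustrative assumptions; the patent does not name a container runtime:

```python
# Hypothetical sketch: build a container launch command that maps the GPU
# identified in the container spec and mounts the GPU driver's storage
# location read-only, so the handler inside can call the driver.
def build_container_command(gpu_id: int, driver_path: str, image: str):
    return [
        "docker", "run",
        "--device", f"/dev/nvidia{gpu_id}",       # map the identified GPU into the container
        "-v", f"{driver_path}:{driver_path}:ro",  # expose the driver's storage location
        image,
    ]

cmd = build_container_command(0, "/usr/lib/nvidia", "trainer:latest")
print(" ".join(cmd))
```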
- the present application provides a method for processing a task, the method includes:
- Step 401 Receive a resource unit sent by a management device.
- the resource unit includes a quantity of idle resources in the processing device and a device identifier of the processing device.
- the resource unit is generated by the management device according to the resource condition information sent by the processing device, where the resource condition information includes the number of idle resources.
- Step 402 Obtain a to-be-processed task, and the resource requirement information corresponding to the to-be-processed task includes a resource quantity less than or equal to the idle resource quantity.
- Step 403 Send a processing request message to the processing device, where the processing request message includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of the container, so that the processing device processes the to-be-processed task.
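The processing request message of step 403 can be sketched as a record with the four listed fields. The names and types are assumptions for illustration; the patent enumerates the contents but not an encoding:

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical shape of the processing request message sent in step 403.
@dataclass
class ProcessingRequest:
    resource_requirements: dict   # resource requirement information
    task: Any                     # the to-be-processed task
    handler: str                  # processing program (e.g. an Executor name)
    use_container: bool           # instruction information to use a container

msg = ProcessingRequest({"gpu": 1, "cpu": 4}, "train-model", "executor-a", True)
print(msg.use_container)  # prints True
```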
- since the processing request message sent to the processing device includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of the container, the processing device may allocate a container for the to-be-processed task according to the instruction information and process the task in the container.
- the container isolates multiple tasks in the processing device from each other, avoiding the mutual influence between multiple tasks.
- the present application provides a device 500 for processing tasks.
- the device 500 includes:
- the receiving module 501 is configured to receive a processing request message, where the processing request message includes resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container;
- An allocation module 502 configured to allocate a container according to the instruction information, and allocate a resource for processing the to-be-processed task according to the resource demand information;
- the processing module 503 is configured to process the to-be-processed task in the container by using the resource and the processing program.
- the container includes a storage location of a graphics processor GPU driver, and the resource includes a GPU;
- the processing module 503 includes:
- a calling unit configured to call the GPU driver according to the storage location of the GPU driver
- a processing unit configured to drive the GPU to run the processing program through the GPU driver in the container, and use the processing program to process the to-be-processed task.
- the apparatus 500 further includes:
- a sending module is configured to send resource situation information to the management server, where the resource situation information includes at least a current amount of idle resources.
- the container further includes a GPU identifier
- the device 500 further includes
- a mapping module configured to map a GPU corresponding to the identifier of the GPU into the container.
- the to-be-processed task is a deep learning task.
- each task is processed by a processing device in its own container, and thus the container isolates multiple tasks from each other, avoiding mutual influence between multiple tasks.
- an embodiment of the present application provides an apparatus 600 for processing a task.
- the apparatus 600 includes:
- a receiving module 601 is configured to receive a resource unit sent by a management device, where the resource unit includes a quantity of idle resources in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to the resource situation information sent by the processing device, where the resource situation information includes the number of idle resources;
- An obtaining module 602 is configured to obtain a to-be-processed task, and the resource requirement information corresponding to the to-be-processed task includes a quantity of resources less than or equal to the quantity of idle resources;
- a sending module 603 configured to send a processing request message to the processing device, where the processing request message includes the resource requirement information, the pending task, a processing program for processing the pending task, and instruction information for instructing the use of a container, so that the processing device processes the to-be-processed task.
- since the processing request message sent to the processing device includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of the container, the processing device may allocate a container for the to-be-processed task according to the instruction information and process the task in the container.
- the container isolates multiple tasks in the processing device from each other, avoiding the mutual influence between multiple tasks.
- an embodiment of the present invention provides a system 700 for processing tasks.
- the system 700 includes the device described in FIG. 5 and the device described in FIG. 6.
- the device described in FIG. 5 may be a processing device 701.
- the apparatus described in FIG. 6 may be a task device 702.
- FIG. 8 shows a structural block diagram of a terminal 800 provided by an exemplary embodiment of the present invention.
- the terminal 800 may be a processing device, a management device, or a task device in any of the foregoing embodiments.
- the terminal may be a mobile terminal, a notebook computer or a desktop computer, and the mobile terminal may be a mobile phone, a tablet computer, or the like.
- the terminal 800 may also be called other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
- the terminal 800 includes a processor 801 and a memory 802.
- the processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
- the processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array).
- the processor 801 may also include a main processor and a coprocessor.
- the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state.
- the coprocessor is a low-power processor for processing data in the standby state.
- the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
- the processor 801 may further include an AI (Artificial Intelligence) processor, and the AI processor is configured to process computing operations related to machine learning.
- the memory 802 may include one or more computer-readable storage media, which may be non-transitory.
- the memory 802 may also include high-speed random access memory, and non-volatile memory, such as one or more disk storage devices, flash storage devices.
- a non-transitory computer-readable storage medium in the memory 802 is used to store at least one instruction, and the at least one instruction is executed by the processor 801 to implement the method for processing tasks provided by the method embodiments of this application.
- the terminal 800 may optionally include a peripheral device interface 803 and at least one peripheral device.
- the processor 801, the memory 802, and the peripheral device interface 803 may be connected through a bus or a signal line.
- Each peripheral device can be connected to the peripheral device interface 803 through a bus, a signal line, or a circuit board.
- the peripheral device includes at least one of a radio frequency circuit 804, a touch display screen 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
- the peripheral device interface 803 may be used to connect at least one peripheral device related to I / O (Input / Output) to the processor 801 and the memory 802.
- in some embodiments, the processor 801, the memory 802, and the peripheral device interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral device interface 803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the radio frequency circuit 804 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
- the radio frequency circuit 804 communicates with a communication network and other communication devices through electromagnetic signals.
- the radio frequency circuit 804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
- the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
- the radio frequency circuit 804 can communicate with other terminals through at least one wireless communication protocol.
- the wireless communication protocols include, but are not limited to, the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks.
- the radio frequency circuit 804 may further include NFC (Near Field Communication) circuits, which are not limited in this application.
- the display screen 805 is used to display a UI (User Interface).
- the UI may include graphics, text, icons, videos, and any combination thereof.
- the display screen 805 also has the ability to collect touch signals on or above the surface of the display screen 805.
- the touch signal can be input to the processor 801 as a control signal for processing.
- the display screen 805 may also be used to provide a virtual button and / or a virtual keyboard, which is also called a soft button and / or a soft keyboard.
- in some embodiments, there may be one display screen 805, provided on the front panel of the terminal 800.
- in other embodiments, there may be at least two display screens 805, respectively provided on different surfaces of the terminal 800 or in a folded design. In still other embodiments, the display screen 805 may be a flexible display screen disposed on a curved or folded surface of the terminal 800. Furthermore, the display screen 805 may even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
- the display screen 805 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
- the camera component 806 is used for capturing images or videos.
- the camera component 806 includes a front camera and a rear camera.
- the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal.
- the camera assembly 806 may further include a flash.
- the flash can be a monochrome temperature flash or a dual color temperature flash.
- a dual color temperature flash is a combination of a warm light flash and a cold light flash, which can be used for light compensation at different color temperatures.
- the audio circuit 807 may include a microphone and a speaker.
- the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 801 for processing, or input them to the radio frequency circuit 804 to implement voice communication.
- the microphone can also be an array microphone or an omnidirectional acquisition microphone.
- the speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves.
- the speaker can be a traditional film speaker or a piezoelectric ceramic speaker.
- when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging.
- the audio circuit 807 may further include a headphone jack.
- the positioning component 808 is used to locate the current geographic position of the terminal 800 to implement navigation or LBS (Location Based Service).
- the positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
- the power supply 809 is used to power various components in the terminal 800.
- the power source 809 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
- the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
- the wired rechargeable battery is a battery charged through a wired line.
- the wireless rechargeable battery is a battery charged through a wireless coil.
- the rechargeable battery can also be used to support fast charging technology.
- the terminal 800 further includes one or more sensors 810.
- the one or more sensors 810 include, but are not limited to, an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
- the acceleration sensor 811 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established by the terminal 800.
- the acceleration sensor 811 may be used to detect components of the acceleration of gravity on three coordinate axes.
- the processor 801 may control the touch display screen 805 to display a user interface in a horizontal view or a vertical view according to a gravity acceleration signal collected by the acceleration sensor 811.
- the acceleration sensor 811 may also be used for collecting motion data of a game or a user.
- the gyro sensor 812 can detect the body direction and rotation angle of the terminal 800, and the gyro sensor 812 can cooperate with the acceleration sensor 811 to collect a 3D motion of the user on the terminal 800. Based on the data collected by the gyro sensor 812, the processor 801 can implement the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
- the pressure sensor 813 may be disposed on a side frame of the terminal 800 and / or a lower layer of the touch display screen 805.
- the processor 801 can perform left-right hand recognition or quick operation according to the holding signal collected by the pressure sensor 813.
- the processor 801 controls the operability controls on the UI interface according to the pressure operation of the touch display screen 805 by the user.
- the operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
- the fingerprint sensor 814 is used to collect a user's fingerprint, and the processor 801, or the fingerprint sensor 814 itself, identifies the user based on the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 801 authorizes the user to perform related sensitive operations, such as unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
- the fingerprint sensor 814 may be provided on the front, back, or side of the terminal 800. When a physical button or a manufacturer's logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the manufacturer's logo.
- the optical sensor 815 is used to collect ambient light intensity.
- the processor 801 may control the display brightness of the touch display screen 805 according to the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 805 is decreased.
- the processor 801 may also dynamically adjust the shooting parameters of the camera component 806 according to the ambient light intensity collected by the optical sensor 815.
- the proximity sensor 816, also called a distance sensor, is usually disposed on the front panel of the terminal 800.
- the proximity sensor 816 is used to collect the distance between the user and the front of the terminal 800.
- when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the touch display screen 805 to switch from the bright-screen state to the off-screen state;
- when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually increases, the processor 801 controls the touch display screen 805 to switch from the off-screen state to the bright-screen state.
- the structure shown in FIG. 8 does not constitute a limitation on the terminal 800, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Stored Programmes (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
Description
Claims (14)
- A method for processing tasks, characterized in that the method comprises: receiving a processing request message, the processing request message including resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and indication information for instructing use of a container; allocating a container according to the indication information, and allocating, according to the resource requirement information, resources for processing the to-be-processed task; and processing the to-be-processed task in the container through the resources and the processing program.
- The method according to claim 1, characterized in that the container includes a storage location of a graphics processing unit (GPU) driver, and the resources include a GPU; and processing the to-be-processed task in the container through the resources and the processing program comprises: invoking the GPU driver according to the storage location of the GPU driver; and driving, in the container through the GPU driver, the GPU to run the processing program, and processing the to-be-processed task using the processing program.
- The method according to claim 1 or 2, characterized in that after processing the to-be-processed task in the container through the resources and the processing program, the method further comprises: sending resource status information to the management server, the resource status information including at least a current quantity of idle resources.
- The method according to claim 2, characterized in that the container further includes an identifier of the GPU, and before driving the GPU in the container through the GPU driver to run the processing program, the method further comprises: mapping the GPU corresponding to the identifier of the GPU into the container.
- The method according to claim 1, 2 or 4, characterized in that the to-be-processed task is a deep learning task.
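Claims 2 and 4 above describe mapping the GPU identified in the container into that container and invoking the GPU driver from its stored location. One common realization, sketched below, is to build a container launch command that maps a specific GPU device node and mounts the driver directory read-only. The use of Docker, the `--device`/`-v` flags, the image name, and the device paths are all assumptions about one possible implementation; the claims themselves are implementation-agnostic.

```python
# A minimal sketch of the GPU-to-container mapping of claims 2 and 4,
# expressed as a Docker command line. All paths and names below are
# illustrative assumptions, not taken from the publication.

def container_command(gpu_id: int, driver_dir: str, program: str) -> list:
    return [
        "docker", "run", "--rm",
        # map the GPU corresponding to the GPU identifier into the container
        "--device", f"/dev/nvidia{gpu_id}",
        "--device", "/dev/nvidiactl",
        # expose the GPU driver's storage location inside the container
        "-v", f"{driver_dir}:{driver_dir}:ro",
        "task-image",                 # hypothetical image holding the task runtime
        "python", program,            # run the processing program on the GPU
    ]

cmd = container_command(gpu_id=0, driver_dir="/usr/lib/nvidia", program="train.py")
assert "/dev/nvidia0" in cmd and "/usr/lib/nvidia:/usr/lib/nvidia:ro" in cmd
```

Keeping the driver on the host and mapping it in, rather than baking it into the image, matches the claim's idea that the container records only the driver's storage location.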
- A method for processing tasks, characterized in that the method comprises: receiving a resource unit sent by a management device, the resource unit including a quantity of idle resources in a processing device and a device identifier of the processing device, the resource unit being generated by the management device according to resource status information sent by the processing device, the resource status information including the quantity of idle resources; obtaining a to-be-processed task, where the quantity of resources included in the resource requirement information corresponding to the to-be-processed task is less than or equal to the quantity of idle resources; and sending a processing request message to the processing device, the processing request message including the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and indication information for instructing use of a container, so that the processing device processes the to-be-processed task.
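The management-device side of claim 6 — turning resource status reports from processing devices into resource units, and matching a task's demand against a device's idle resources — can be sketched as follows. The function names, the dict-based report format, and the first-fit choice are illustrative assumptions; the claim does not prescribe a matching policy.

```python
# Sketch of the scheduling step in claim 6. All names are illustrative.

def make_resource_units(status_reports: dict) -> list:
    """status_reports maps device identifier -> reported idle resources.
    Each resource unit pairs the device identifier with its idle quantity."""
    return [(device_id, idle) for device_id, idle in status_reports.items()]

def pick_device(units: list, required: int):
    """First-fit: choose any device whose idle resources cover the demand.
    Returns None when no device currently has enough idle resources."""
    for device_id, idle in units:
        if required <= idle:
            return device_id
    return None

units = make_resource_units({"node-a": 0, "node-b": 3})
assert pick_device(units, required=2) == "node-b"
assert pick_device(units, required=4) is None
```

The "less than or equal to" comparison in `pick_device` is exactly the admission condition the claim places on the obtained task.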
- An apparatus for processing tasks, characterized in that the apparatus comprises: a receiving module, configured to receive a processing request message, the processing request message including resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and indication information for instructing use of a container; an allocation module, configured to allocate a container according to the indication information, and to allocate, according to the resource requirement information, resources for processing the to-be-processed task; and a processing module, configured to process the to-be-processed task in the container through the resources and the processing program.
- The apparatus according to claim 7, characterized in that the container includes a storage location of a graphics processing unit (GPU) driver, and the resources include a GPU; and the processing module comprises: an invoking unit, configured to invoke the GPU driver according to the storage location of the GPU driver; and a processing unit, configured to drive, in the container through the GPU driver, the GPU to run the processing program, and to process the to-be-processed task using the processing program.
- The apparatus according to claim 7 or 8, characterized in that the apparatus further comprises: a sending module, configured to send resource status information to the management server, the resource status information including at least a current quantity of idle resources.
- The apparatus according to claim 8, characterized in that the container further includes an identifier of the GPU, and the apparatus further comprises a mapping module, configured to map the GPU corresponding to the identifier of the GPU into the container.
- The apparatus according to claim 7, 8 or 10, characterized in that the to-be-processed task is a deep learning task.
- An apparatus for processing tasks, characterized in that the apparatus comprises: a receiving module, configured to receive a resource unit sent by a management device, the resource unit including a quantity of idle resources in a processing device and a device identifier of the processing device, the resource unit being generated by the management device according to resource status information sent by the processing device, the resource status information including the quantity of idle resources; an obtaining module, configured to obtain a to-be-processed task, where the quantity of resources included in the resource requirement information corresponding to the to-be-processed task is less than or equal to the quantity of idle resources; and a sending module, configured to send a processing request message to the processing device, the processing request message including the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and indication information for instructing use of a container, so that the processing device processes the to-be-processed task.
- A computer-readable storage medium, characterized in that it is used to store a computer program, the computer program being loaded by a processor to execute the instructions of the method according to any one of claims 1 to 6.
- An electronic device, characterized in that the electronic device comprises a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the method according to any one of claims 1 to 6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810701224.3A CN110659127A (zh) | 2018-06-29 | 2018-06-29 | 一种处理任务的方法、装置及系统 |
CN201810701224.3 | 2018-06-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020001564A1 true WO2020001564A1 (zh) | 2020-01-02 |
Family
ID=68985837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/093391 WO2020001564A1 (zh) | 2018-06-29 | 2019-06-27 | 一种处理任务的方法、装置及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110659127A (zh) |
WO (1) | WO2020001564A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112130983A (zh) * | 2020-10-27 | 2020-12-25 | 上海商汤临港智能科技有限公司 | 任务处理方法、装置、设备、系统及存储介质 |
CN113656143A (zh) * | 2021-08-16 | 2021-11-16 | 深圳市瑞驰信息技术有限公司 | 一种实现安卓容器直通显卡的方法及系统 |
CN114124405A (zh) * | 2020-07-29 | 2022-03-01 | 腾讯科技(深圳)有限公司 | 业务处理方法、系统、计算机设备及计算机可读存储介质 |
CN114462938A (zh) * | 2022-01-20 | 2022-05-10 | 北京声智科技有限公司 | 资源异常的处理方法、装置、设备及存储介质 |
CN116755779A (zh) * | 2023-08-18 | 2023-09-15 | 腾讯科技(深圳)有限公司 | 循环间隔的确定方法、装置、设备、存储介质及芯片 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112000473A (zh) * | 2020-08-12 | 2020-11-27 | 中国银联股份有限公司 | 深度学习模型的分布式训练方法以及装置 |
CN112559182B (zh) * | 2020-12-16 | 2024-04-09 | 北京百度网讯科技有限公司 | 资源分配方法、装置、设备及存储介质 |
CN112866404B (zh) * | 2021-02-03 | 2023-01-24 | 视若飞信息科技(上海)有限公司 | 一种半云系统及执行方法 |
CN113867970A (zh) * | 2021-12-03 | 2021-12-31 | 苏州浪潮智能科技有限公司 | 一种容器加速装置、方法、设备及计算机可读存储介质 |
WO2023160629A1 (zh) * | 2022-02-25 | 2023-08-31 | 本源量子计算科技(合肥)股份有限公司 | 量子控制系统的处理装置、方法、量子计算机、介质和电子装置 |
CN115470915B (zh) * | 2022-03-16 | 2024-04-05 | 本源量子计算科技(合肥)股份有限公司 | 量子计算机的服务器系统及其实现方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106886455A (zh) * | 2017-02-23 | 2017-06-23 | 北京图森未来科技有限公司 | 一种实现用户隔离的方法及系统 |
CN107343045A (zh) * | 2017-07-04 | 2017-11-10 | 北京百度网讯科技有限公司 | 云计算系统及用于控制服务器的云计算方法和装置 |
CN107450961A (zh) * | 2017-09-22 | 2017-12-08 | 济南浚达信息技术有限公司 | 一种基于Docker容器的分布式深度学习系统及其搭建方法、工作方法 |
CN107783818A (zh) * | 2017-10-13 | 2018-03-09 | 北京百度网讯科技有限公司 | 深度学习任务处理方法、装置、设备及存储介质 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10146592B2 (en) * | 2015-09-18 | 2018-12-04 | Salesforce.Com, Inc. | Managing resource allocation in a stream processing framework |
CN106708622B (zh) * | 2016-07-18 | 2020-06-02 | 腾讯科技(深圳)有限公司 | 集群资源处理方法和系统、资源处理集群 |
CN107247629A (zh) * | 2017-07-04 | 2017-10-13 | 北京百度网讯科技有限公司 | 云计算系统及用于控制服务器的云计算方法和装置 |
CN107343000A (zh) * | 2017-07-04 | 2017-11-10 | 北京百度网讯科技有限公司 | 用于处理任务的方法和装置 |
CN107682206B (zh) * | 2017-11-02 | 2021-02-19 | 北京中电普华信息技术有限公司 | 基于微服务的业务流程管理系统的部署方法及系统 |
CN108062246B (zh) * | 2018-01-25 | 2019-06-14 | 北京百度网讯科技有限公司 | 用于深度学习框架的资源调度方法和装置 |
-
2018
- 2018-06-29 CN CN201810701224.3A patent/CN110659127A/zh active Pending
-
2019
- 2019-06-27 WO PCT/CN2019/093391 patent/WO2020001564A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106886455A (zh) * | 2017-02-23 | 2017-06-23 | 北京图森未来科技有限公司 | 一种实现用户隔离的方法及系统 |
CN107343045A (zh) * | 2017-07-04 | 2017-11-10 | 北京百度网讯科技有限公司 | 云计算系统及用于控制服务器的云计算方法和装置 |
CN107450961A (zh) * | 2017-09-22 | 2017-12-08 | 济南浚达信息技术有限公司 | 一种基于Docker容器的分布式深度学习系统及其搭建方法、工作方法 |
CN107783818A (zh) * | 2017-10-13 | 2018-03-09 | 北京百度网讯科技有限公司 | 深度学习任务处理方法、装置、设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
XIAO, YI ET AL.: "A Deep Learning Container Cloud Study for GPU Resources", Journal of Communication University of China (Science and Technology), vol. 24, no. 6, 25 December 2017, pages 16-20 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114124405A (zh) * | 2020-07-29 | 2022-03-01 | 腾讯科技(深圳)有限公司 | 业务处理方法、系统、计算机设备及计算机可读存储介质 |
CN114124405B (zh) * | 2020-07-29 | 2023-06-09 | 腾讯科技(深圳)有限公司 | 业务处理方法、系统、计算机设备及计算机可读存储介质 |
CN112130983A (zh) * | 2020-10-27 | 2020-12-25 | 上海商汤临港智能科技有限公司 | 任务处理方法、装置、设备、系统及存储介质 |
CN113656143A (zh) * | 2021-08-16 | 2021-11-16 | 深圳市瑞驰信息技术有限公司 | 一种实现安卓容器直通显卡的方法及系统 |
CN113656143B (zh) * | 2021-08-16 | 2024-05-31 | 深圳市瑞驰信息技术有限公司 | 一种实现安卓容器直通显卡的方法及系统 |
CN114462938A (zh) * | 2022-01-20 | 2022-05-10 | 北京声智科技有限公司 | 资源异常的处理方法、装置、设备及存储介质 |
CN116755779A (zh) * | 2023-08-18 | 2023-09-15 | 腾讯科技(深圳)有限公司 | 循环间隔的确定方法、装置、设备、存储介质及芯片 |
CN116755779B (zh) * | 2023-08-18 | 2023-12-05 | 腾讯科技(深圳)有限公司 | 循环间隔的确定方法、装置、设备、存储介质及芯片 |
Also Published As
Publication number | Publication date |
---|---|
CN110659127A (zh) | 2020-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020001564A1 (zh) | 一种处理任务的方法、装置及系统 | |
CN111225042B (zh) | 数据传输的方法、装置、计算机设备以及存储介质 | |
CN109976570B (zh) | 数据传输方法、装置及显示装置 | |
CN111614549B (zh) | 交互处理方法、装置、计算机设备及存储介质 | |
CN108762881B (zh) | 界面绘制方法、装置、终端及存储介质 | |
WO2021018297A1 (zh) | 一种基于p2p的服务通信方法、装置及系统 | |
WO2021120976A2 (zh) | 负载均衡控制方法及服务器 | |
CN109697113B (zh) | 请求重试的方法、装置、设备及可读存储介质 | |
CN109861966B (zh) | 处理状态事件的方法、装置、终端及存储介质 | |
WO2019205735A1 (zh) | 数据传输方法、装置、显示屏及显示装置 | |
CN110704324B (zh) | 应用调试方法、装置及存储介质 | |
CN110673944B (zh) | 执行任务的方法和装置 | |
CN111159604A (zh) | 图片资源加载方法及装置 | |
CN110290191B (zh) | 资源转移结果处理方法、装置、服务器、终端及存储介质 | |
CN111914985B (zh) | 深度学习网络模型的配置方法、装置及存储介质 | |
CN112181915B (zh) | 执行业务的方法、装置、终端和存储介质 | |
CN110086814B (zh) | 一种数据获取的方法、装置及存储介质 | |
CN111580892B (zh) | 一种业务组件调用的方法、装置、终端和存储介质 | |
CN113448692B (zh) | 分布式图计算的方法、装置、设备及存储介质 | |
CN113949692A (zh) | 地址分配方法、装置、电子设备及计算机可读存储介质 | |
WO2019214694A1 (zh) | 存储数据的方法、读取数据的方法、装置及系统 | |
CN112860365A (zh) | 内容显示方法、装置、电子设备和可读存储介质 | |
CN112260845A (zh) | 进行数据传输加速的方法和装置 | |
CN111222124B (zh) | 使用权限分配的方法、装置、设备以及存储介质 | |
CN115348262B (zh) | 基于跨链协议的跨链操作执行方法及网络系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19825225 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19825225 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19825225 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.07.2021) |