WO2020001564A1 - Method, apparatus and system for processing tasks (一种处理任务的方法、装置及系统) - Google Patents

Method, apparatus and system for processing tasks

Info

Publication number: WO2020001564A1
Authority: WIPO (PCT)
Prior art keywords: processing, resource, task, container, gpu
Application number: PCT/CN2019/093391
Other languages: English (en), French (fr)
Inventors: 何猛 (He Meng), 杨威 (Yang Wei), 叶挺群 (Ye Tingqun)
Original Assignee: 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Publication of WO2020001564A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Definitions

  • the present application relates to the field of computer technology, and in particular, to a method, an apparatus, and a system for processing tasks.
  • the server cluster framework consists of multiple servers.
  • the server cluster framework can provide a large number of computing resources, which can be used for processing tasks. For example, server clusters can currently be used to handle deep learning tasks.
  • the server cluster framework includes a management server and multiple processing servers for processing tasks.
  • the management server can collect the resource information of each processing server and, according to each processing server's resource conditions, assign a task to be processed to a particular processing server, which processes the task using the resources it includes.
  • Multiple tasks are often assigned to the same processing server and may be processed locally at the same time, and these tasks may then interfere with each other.
  • the embodiments of the present application provide a method, a device, and a system for processing tasks, so as to avoid interference between multiple tasks.
  • the technical solution is as follows:
  • the present application provides a method for processing a task, the method includes:
  • receiving a processing request message, where the processing request message includes resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container; allocating a container according to the instruction information, and allocating a resource for processing the to-be-processed task according to the resource requirement information; and processing the to-be-processed task in the container through the resource and the processing program;
  • the container includes a storage location of a graphics processor GPU driver, and the resource includes a GPU;
  • the processing the to-be-processed task through the resource and the processing program in the container includes:
  • the GPU driver is used, in the container, to drive the GPU to run the processing program, and the processing program is used to process the to-be-processed task.
  • the method further includes: sending resource situation information to the management server, where the resource situation information includes at least a current amount of idle resources.
  • the container further includes an identifier of the GPU, and before driving the GPU to run the processing program by the GPU driver in the container, the method further includes: mapping a GPU corresponding to the identifier of the GPU into the container.
  • the to-be-processed task is a deep learning task.
  • the present application provides a method for processing a task, the method includes:
  • receiving a resource unit sent by a management device, where the resource unit includes a quantity of idle resources in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to resource situation information sent by the processing device;
  • the resource situation information includes the number of the idle resources;
  • obtaining a to-be-processed task, where the resource requirement information corresponding to the to-be-processed task includes a quantity of resources less than or equal to the quantity of idle resources;
  • sending a processing request message to the processing device, where the processing request message includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container, so that the processing device processes the pending task.
  • the present application provides a device for processing tasks, the device including:
  • a receiving module configured to receive a processing request message, where the processing request message includes resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing use of a container;
  • An allocation module configured to allocate a container according to the instruction information, and allocate a resource for processing the to-be-processed task according to the resource requirement information;
  • a processing module configured to process the to-be-processed task in the container through the resource and the processing program.
  • the container includes a storage location of a graphics processor GPU driver, and the resource includes a GPU;
  • the processing module includes:
  • a calling unit configured to call the GPU driver according to the storage location of the GPU driver
  • a processing unit configured to drive the GPU to run the processing program through the GPU driver in the container, and use the processing program to process the to-be-processed task.
  • the device further includes:
  • a sending module is configured to send resource situation information to the management server, where the resource situation information includes at least a current amount of idle resources.
  • the container further includes a GPU identifier
  • the device further includes
  • a mapping module configured to map a GPU corresponding to the identifier of the GPU into the container.
  • the to-be-processed task is a deep learning task.
  • the present application provides a device for processing tasks, the device including:
  • a receiving module configured to receive a resource unit sent by a management device, where the resource unit includes a quantity of idle resources in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to resource condition information sent by the processing device, where the resource condition information includes the number of idle resources;
  • An acquisition module configured to acquire a task to be processed, and the resource requirement information corresponding to the task to be processed includes a resource quantity less than or equal to the idle resource quantity;
  • a sending module configured to send a processing request message to the processing device, where the processing request message includes the resource requirement information, the pending task, a processing program for processing the pending task, and instruction information for instructing the use of a container, so that the processing device processes the pending task.
  • the present application provides a system for processing tasks, and the system includes the device described above.
  • the embodiments of the present application provide a non-volatile computer-readable storage medium for storing a computer program, and the computer program is loaded by a processor to execute instructions of any one of the foregoing methods.
  • this embodiment provides an electronic device, where the electronic device includes a processor and a memory,
  • the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement instructions of any one of the foregoing methods.
  • each task is processed by the processing device in its own container, so that multiple tasks are isolated from each other by their containers, avoiding mutual influence between multiple tasks.
  • FIG. 1-1 is a schematic diagram of a device cluster framework provided by an embodiment of the present application.
  • FIG. 1-2 is a schematic diagram of a deep learning platform architecture provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for processing a task according to an embodiment of the present application
  • FIG. 3 is a flowchart of another method for processing a task according to an embodiment of the present application.
  • FIG. 4 is a flowchart of another method for processing a task according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an apparatus for processing tasks according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of another apparatus for processing tasks according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a system for processing tasks according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • an embodiment of the present application provides a device cluster framework.
  • the framework includes:
  • the management device can establish a network connection with each processing device.
  • the management device can also establish a network connection with the task device.
  • the network connection can be a wired connection, a wireless connection, or the like.
  • Each processing device includes a computing resource, and the computing resource may be at least one of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and a memory.
  • For each processing device, the processing device obtains its current resource status information.
  • the resource status information includes at least the current number of idle resources in the processing device, and may also include the current number of used resources in the processing device.
  • the resource condition information may also be sent to the management device.
  • the number of idle resources may include at least one of the number of idle CPUs, the number of idle GPUs, and the amount of idle memory.
  • the number of used resources includes at least one of the number of used CPUs, the number of GPUs used, and the amount of memory used.
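As a minimal sketch only (the field names such as 'cpu', 'gpu' and 'memory_gb' are assumptions, not the patent's data format), the resource situation information a processing device reports might be assembled like this:

```python
def resource_situation(idle, used=None):
    """Assemble a processing device's resource situation information:
    at least the current idle resource counts, and optionally the
    currently used counts. Key names are illustrative."""
    info = {"idle": dict(idle)}
    if used is not None:
        info["used"] = dict(used)
    return info

# A device with 8 CPUs, 2 GPUs and 64 GB of memory, half in use.
report = resource_situation({"cpu": 8, "gpu": 2, "memory_gb": 64},
                            used={"cpu": 4, "gpu": 0, "memory_gb": 16})
```

The management device would consume such a report when building the resource unit it forwards to the task device.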
  • a management device configured to receive the resource condition information sent by a processing device, generate a resource unit according to the resource condition information, where the resource unit includes the number of idle resources in the processing device and the device identifier of the processing device, and send the resource unit to a task device.
  • the task device includes at least one pending task, at least one processing program for processing the pending task, and resource requirement information corresponding to each pending task.
  • the resource requirement information corresponding to the to-be-processed task may include the amount of resources required for processing the to-be-processed task.
  • the amount of resources required to process the to-be-processed task includes at least one of the number of CPUs, the number of GPUs, and the memory capacity.
  • the task device is configured to receive the resource unit and match it against the resource requirement information corresponding to each pending task; when it matches the resource requirement information corresponding to a pending task, that is, when the resource unit can satisfy that resource requirement information, the task device selects a processing program for processing the to-be-processed task from the at least one processing program and sends a processing request message to the management device, where the processing request message includes the device identifier in the resource unit, the resource requirement information corresponding to the to-be-processed task, the to-be-processed task, the selected processing program, and instruction information for instructing the use of a container.
  • That the resource unit can satisfy the resource requirement information corresponding to the to-be-processed task means that the number of idle resources included in the resource unit is greater than or equal to the amount of resources required to process the to-be-processed task included in the resource demand information.
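The satisfaction condition above can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation; counting resources per type under keys like 'cpu', 'gpu' and 'memory' is an assumption:

```python
def satisfies(resource_unit, requirement):
    """Return True if, for every resource type the task requires,
    the resource unit's idle amount is greater than or equal to
    the required amount (the satisfaction condition above)."""
    idle = resource_unit["idle"]
    return all(idle.get(kind, 0) >= amount
               for kind, amount in requirement.items())

# A resource unit advertised by the management device, and one task's
# resource requirement information (illustrative values).
unit = {"device_id": "slave-1", "idle": {"cpu": 8, "gpu": 2, "memory": 32}}
task_req = {"cpu": 4, "gpu": 1, "memory": 16}
```

The task device would apply such a predicate to each pending task's resource requirement information and select a task for which it returns True.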
  • the management device is configured to receive the processing request message, and forward the processing request message to a processing device corresponding to the device identifier according to a device identifier included in the processing request message.
  • a processing device configured to receive the processing request message, where the processing request message includes task resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container; allocate a container according to the instruction information and allocate a computing resource for processing the to-be-processed task according to the task resource requirement information; and process the to-be-processed task in the container through the computing resource and the processing program.
  • the to-be-processed task may be a deep learning task and the like. Multiple tasks may be processed in the same processing device at the same time, but each task is processed by the processing device in its own container, so that multiple tasks are isolated from each other by the container, avoiding the mutual influence between multiple tasks.
  • the above computing resources may include GPU resources, so that GPU resources may be used to process deep learning tasks when processing deep learning tasks.
  • the above device cluster framework can be a deep learning platform.
  • the deep learning platform includes a kube-Mesos node, a master node, and multiple slave nodes.
  • the task device may be a kube-Mesos node
  • the management device may be a master node
  • the processing device may be a slave node.
  • a network connection is established between each Slave node and the Master node, and a network connection is established between the Master node and the kube-Mesos node.
  • the kube-Mesos node is a container orchestration framework that includes resource requirement information corresponding to multiple deep learning tasks.
  • For each slave node, the slave node sends its current resource status information to the master node.
  • the resource status information includes at least the current idle resource quantity of the slave node, and may also include the current used resource quantity of the slave node.
  • the master node receives the resource situation information and generates a resource unit according to the resource situation information; the resource unit includes the idle resource quantity of the slave node and the device identifier of the slave node, and the master node sends the resource unit to the kube-Mesos node.
  • the kube-Mesos node receives the resource unit, matches it against the resource requirement information corresponding to each deep learning task, and, upon matching the resource requirement information corresponding to a deep learning task, selects from the at least one processing program (Executor) an Executor for processing that deep learning task and sends a processing request message to the master node.
  • the processing request message includes the device identifier in the resource unit, the resource requirement information corresponding to the deep learning task, the deep learning task, the selected Executor, and instruction information for instructing the use of a container.
  • the master node receives the processing request message and forwards it to the slave node; the slave node receives the processing request message, allocates a container according to the instruction information, allocates a processing resource for processing the deep learning task according to the task resource requirement information, and processes the deep learning task in the container.
  • this application provides a method for processing a task.
  • the method includes:
  • Step 201 Receive a processing request message, where the processing request message includes resource requirement information, a to-be-processed task and a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container.
  • Step 202 Allocate a container according to the instruction information, and allocate resources for processing the to-be-processed task according to the resource requirement information.
  • Step 203 Process the to-be-processed tasks in the container through the resource and the processing program.
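Steps 201 to 203 can be illustrated with a minimal Python sketch. The container here is simulated with a plain dictionary rather than a real container runtime, and all field names are illustrative assumptions:

```python
class ProcessingDevice:
    """Minimal sketch of steps 201-203 on the processing-device side."""

    def __init__(self, idle_gpus):
        self.idle_gpus = idle_gpus          # identifiers of free GPUs

    def handle(self, request):
        # Step 201: the request carries the requirement info, the task,
        # the processing program, and the container instruction info.
        assert request["use_container"]
        needed = request["requirements"]["gpu"]
        # Step 202: allocate a container and the required resources.
        gpus = [self.idle_gpus.pop() for _ in range(needed)]
        container = {"gpus": gpus, "program": request["program"]}
        # Step 203: run the processing program on the task in the container.
        return container["program"](request["task"], container["gpus"])

device = ProcessingDevice(idle_gpus=[0, 1])
request = {
    "use_container": True,
    "requirements": {"gpu": 1},
    "task": [3, 4, 5],
    "program": lambda task, gpus: sum(task),
}
result = device.handle(request)   # returns 12
```

Because each request gets its own container dictionary and its own slice of the GPU list, two tasks handled by the same device never share state, which is the isolation property the text describes.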
  • each task is processed by a processing device in its own container.
  • the container isolates multiple tasks from each other and prevents multiple tasks from affecting each other.
  • the resource requirement information may indicate whether the amount of idle resources meets the resources required to process the pending task, for example, by a comparison result between the amount of idle resources and the amount of resources required to process the pending task.
  • the to-be-processed task may be a deep learning task and the like.
  • the above computing resources may include GPU resources, so that GPU resources may be used to process deep learning tasks when processing deep learning tasks.
  • FIG. 3 an embodiment of the present application provides a method for processing a task. The method can be applied to a device cluster framework shown in FIG. 1-1 or a deep learning platform shown in FIG. 1-2.
  • the task can be a deep learning task, including:
  • Step 301 The processing device acquires its current resource situation information and sends the resource situation information to the management device.
  • the resource condition information includes at least the current amount of idle resources in the processing device, and may also include the current number of used resources in the processing device.
  • the number of idle resources may include at least one of the number of idle CPUs, the number of idle GPUs, and the amount of idle memory.
  • the number of used resources includes at least one of the number of used CPUs, the number of GPUs used, and the amount of memory used.
  • the processing device is any processing device in a device cluster framework.
  • the processing device can obtain current resource condition information when its resource usage changes, and send the resource condition information to a management device.
  • Step 302 The management device receives the resource condition information sent by the processing device, generates a resource unit according to the resource condition information, where the resource unit includes the number of idle resources in the processing device and the device identifier of the processing device, and sends the resource unit to the task device.
  • the task device includes at least one pending task, at least one processing program for processing the pending task, and resource requirement information corresponding to each pending task.
  • the resource requirement information corresponding to the to-be-processed task may include the amount of resources required for processing the to-be-processed task.
  • the to-be-processed task may be a deep learning task, and the task in the task device may be set by a technician in the task device.
  • the number of resources required to process the to-be-processed task includes at least one of the number of CPUs, the number of GPUs, and the memory capacity.
  • Step 303 The task device receives the resource unit, matches the resource unit against the resource requirement information corresponding to each pending task, and finds a pending task whose resource requirement information the resource unit can satisfy.
  • specifically, the task device compares the amount of idle resources included in the resource unit with the amount of resources required to process each pending task as recorded in its resource requirement information, and selects a piece of resource requirement information whose required resource amount is less than or equal to the number of idle resources.
  • the selected resource requirement information is the resource requirement information corresponding to a task to be processed; the resource unit can therefore satisfy the resource requirement information corresponding to that task.
  • Step 304 The management device receives the processing request message, and forwards the processing request message to the processing device corresponding to the device identifier according to the device identifier included in the processing request message.
  • the processing request message includes task resource requirement information, a to-be-processed task and a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container.
  • Step 305 The processing device receives the processing request message, and the processing request message includes task resource requirement information, a to-be-processed task and a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container.
  • the container may include a storage location of a GPU driver, and may further include an identifier of the GPU.
  • the identification of the GPU may be the number of the GPU.
  • Step 306 The processing device allocates a container according to the instruction information, and allocates a computing resource for processing the to-be-processed task according to the task resource demand information; and processes the to-be-processed task in the container through the computing resource and the processing program.
  • an identifier of the GPU is obtained from the container, and the GPU corresponding to that identifier is mapped into the container.
  • the processing device may call the GPU driver according to the storage location of the GPU driver recorded in the container, use the GPU driver to drive the GPU to run the processing program in the container, and use the processing program to process the pending task.
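As a concrete illustration of mapping a GPU into a container by its identifier and exposing the driver by its storage location, the following sketch builds a hypothetical `docker run` command line. The patent names no container runtime; the `/dev/nvidia<N>` device naming and the driver path are assumptions for illustration:

```python
def container_run_args(image, gpu_ids, driver_dir):
    """Build an argument list that maps the numbered GPU devices and
    the GPU driver directory into the container (paths assumed)."""
    args = ["docker", "run", "--rm"]
    for gpu_id in gpu_ids:
        # GPU identifier (its number) -> device node mapped into the container
        args += ["--device", f"/dev/nvidia{gpu_id}"]
    # driver storage location mounted read-only so it can be called inside
    args += ["-v", f"{driver_dir}:{driver_dir}:ro"]
    return args + [image]

args = container_run_args("dl-task:latest", [0], "/usr/local/nvidia")
```

Inside such a container, the processing program can locate the driver at the mounted storage location and drive the mapped GPU, as the steps above describe.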
  • the processing device may further obtain its current resource condition information and send the resource condition information to the management server, where the resource condition information includes at least the number of currently idle resources.
  • the processing device can allocate a container to the task to be processed and process the task in that container, that is, process the pending task in the container through the allocated resources and the processing program.
  • the container includes a storage location of the GPU driver, so that the GPU driver can be called through that storage location and the GPU can be driven inside the container, enabling the GPU to process pending tasks in the container.
  • To-be-processed tasks can be deep learning tasks, so GPUs can be used to process deep learning tasks.
  • the present application provides a method for processing a task, the method includes:
  • Step 401 Receive a resource unit sent by a management device.
  • the resource unit includes a quantity of idle resources in the processing device and a device identifier of the processing device; the resource unit is generated by the management device according to the resource condition information sent by the processing device, which includes the number of idle resources.
  • Step 402 Obtain a to-be-processed task, and the resource requirement information corresponding to the to-be-processed task includes a resource quantity less than or equal to the idle resource quantity.
  • Step 403 Send a processing request message to the processing device, where the processing request message includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container, so that the processing device processes the pending task.
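Steps 401 to 403 on the task-device side can be sketched as follows; the message field names are illustrative assumptions, and the quantity check mirrors step 402's requirement that the needed resources not exceed the idle resources:

```python
def build_processing_request(resource_unit, task, program, requirements):
    """Sketch of step 403: assemble the processing request message
    from a received resource unit. Field names are illustrative."""
    idle = resource_unit["idle"]
    # Step 402's condition: required quantity <= idle quantity, per type.
    if any(idle.get(kind, 0) < amount for kind, amount in requirements.items()):
        raise ValueError("resource unit cannot satisfy the task's requirements")
    return {
        "device_id": resource_unit["device_id"],  # lets the management device forward it
        "requirements": requirements,
        "task": task,
        "program": program,
        "use_container": True,                    # instruction information
    }

msg = build_processing_request(
    {"device_id": "slave-2", "idle": {"gpu": 2}},
    task="train-model", program="executor-a", requirements={"gpu": 1},
)
```

The device identifier carried in the message is what allows the management device to forward the request to the correct processing device without inspecting the task itself.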
  • since the processing request message sent to the processing device includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container, the processing device can allocate a container for the to-be-processed task according to the instruction information and process the task in that container.
  • the container isolates multiple tasks in the processing device from each other, avoiding the mutual influence between multiple tasks.
  • the present application provides a device 500 for processing tasks.
  • the device 500 includes:
  • the receiving module 501 is configured to receive a processing request message, where the processing request message includes resource requirement information, a to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container;
  • An allocation module 502 configured to allocate a container according to the instruction information, and allocate a resource for processing the to-be-processed task according to the resource requirement information;
  • the processing module 503 is configured to process the to-be-processed task in the container by using the resource and the processing program.
  • the container includes a storage location of a graphics processor GPU driver, and the resource includes a GPU;
  • the processing module 503 includes:
  • a calling unit configured to call the GPU driver according to the storage location of the GPU driver
  • a processing unit configured to drive the GPU to run the processing program through the GPU driver in the container, and use the processing program to process the to-be-processed task.
  • the apparatus 500 further includes:
  • a sending module is configured to send resource situation information to the management server, where the resource situation information includes at least a current amount of idle resources.
  • the container further includes a GPU identifier
  • the device 500 further includes
  • a mapping module configured to map a GPU corresponding to the identifier of the GPU into the container.
  • the to-be-processed task is a deep learning task.
  • each task is processed by a processing device in its own container, and thus the container isolates multiple tasks from each other, avoiding mutual influence between multiple tasks.
  • an embodiment of the present application provides an apparatus 600 for processing a task.
  • the apparatus 600 includes:
  • a receiving module 601 is configured to receive a resource unit sent by a management device, where the resource unit includes a quantity of idle resources in a processing device and a device identifier of the processing device, and the resource unit is generated by the management device according to resource situation information sent by the processing device, the resource situation information including the number of idle resources;
  • An obtaining module 602 is configured to obtain a to-be-processed task, and the resource requirement information corresponding to the to-be-processed task includes a quantity of resources less than or equal to the quantity of idle resources;
  • a sending module 603 configured to send a processing request message to the processing device, where the processing request message includes the resource requirement information, the pending task, a processing program for processing the pending task, and instruction information for instructing the use of a container, so that the processing device processes the to-be-processed task.
  • since the processing request message sent to the processing device includes the resource requirement information, the to-be-processed task, a processing program for processing the to-be-processed task, and instruction information for instructing the use of a container, the processing device can allocate a container for the to-be-processed task according to the instruction information and process the task in that container.
  • the container isolates multiple tasks in the processing device from each other, avoiding the mutual influence between multiple tasks.
  • an embodiment of the present application provides a system 700 for processing tasks.
  • the system 700 includes the apparatus described in FIG. 5 and the apparatus described in FIG. 6.
  • the device described in FIG. 5 may be a processing device 701.
  • the apparatus described in FIG. 6 may be a task device 702.
  • FIG. 8 shows a structural block diagram of a terminal 800 provided by an exemplary embodiment of the present invention.
  • the terminal 800 may be a processing device, a management device, or a task device in any of the foregoing embodiments.
  • the terminal may be a mobile terminal, a notebook computer or a desktop computer, and the mobile terminal may be a mobile phone, a tablet computer, or the like.
  • the terminal 800 may also be called other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
  • the terminal 800 includes a processor 801 and a memory 802.
  • the processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
  • the processor 801 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the wake state, also called a CPU (Central Processing Unit).
  • the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 801 may be integrated with a GPU (Graphics Processing Unit), and the GPU is responsible for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 801 may further include an AI (Artificial Intelligence) processor, and the AI processor is configured to process computing operations related to machine learning.
  • the memory 802 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 802 may also include high-speed random access memory, and non-volatile memory, such as one or more disk storage devices, flash storage devices.
  • the non-transitory computer-readable storage medium in the memory 802 stores at least one instruction, which is executed by the processor 801 to implement the method for processing tasks provided by the method embodiments of this application.
  • the terminal 800 may optionally include a peripheral device interface 803 and at least one peripheral device.
  • the processor 801, the memory 802, and the peripheral device interface 803 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 803 through a bus, a signal line, or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 804, a touch display screen 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
  • the peripheral device interface 803 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 801 and the memory 802.
  • in some embodiments, the processor 801, the memory 802, and the peripheral device interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 804 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 804 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 804 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 804 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, the World Wide Web, metropolitan area networks, intranets, mobile communication networks (2G, 3G, 4G, and 5G) of various generations, wireless local area networks, and / or WiFi (Wireless Fidelity) networks.
  • the radio frequency circuit 804 may further include NFC (Near Field Communication) circuits, which are not limited in this application.
  • the display screen 805 is used to display a UI (User Interface).
  • the UI may include graphics, text, icons, videos, and any combination thereof.
  • when the display screen 805 is a touch display screen, it also has the ability to collect touch signals on or above its surface.
  • the touch signal can be input to the processor 801 as a control signal for processing.
  • the display screen 805 may also be used to provide a virtual button and / or a virtual keyboard, which is also called a soft button and / or a soft keyboard.
  • in some embodiments, there may be one display screen 805, disposed on the front panel of the terminal 800.
  • in other embodiments, there may be at least two display screens 805, disposed on different surfaces of the terminal 800 or in a folded design. In still other embodiments, the display screen 805 may be a flexible display screen disposed on a curved or folded surface of the terminal 800. The display screen 805 may even be set as a non-rectangular irregular shape, that is, a special-shaped screen.
  • the display screen 805 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera component 806 is used for capturing images or videos.
  • the camera component 806 includes a front camera and a rear camera.
  • the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal.
  • the camera assembly 806 may further include a flash.
  • the flash may be a single-color-temperature flash or a dual-color-temperature flash.
  • a dual color temperature flash is a combination of a warm light flash and a cold light flash, which can be used for light compensation at different color temperatures.
  • the audio circuit 807 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 801 for processing, or input them to the radio frequency circuit 804 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves.
  • the speaker can be a traditional film speaker or a piezoelectric ceramic speaker.
  • when the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging.
  • the audio circuit 807 may further include a headphone jack.
  • the positioning component 808 is used to locate the current geographic position of the terminal 800 to implement navigation or LBS (Location Based Service).
  • the positioning component 808 may be a positioning component based on a US-based GPS (Global Positioning System), a Beidou system in China, or a Galileo system in Russia.
  • the power supply 809 is used to power various components in the terminal 800.
  • the power source 809 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • the wired rechargeable battery is a battery charged through a wired line
  • the wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 800 further includes one or more sensors 810.
  • the one or more sensors 810 include, but are not limited to, an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
  • the acceleration sensor 811 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established by the terminal 800.
  • the acceleration sensor 811 may be used to detect components of the acceleration of gravity on three coordinate axes.
  • the processor 801 may control the touch display screen 805 to display a user interface in a horizontal view or a vertical view according to a gravity acceleration signal collected by the acceleration sensor 811.
  • the acceleration sensor 811 may also be used for collecting motion data of a game or a user.
  • the gyro sensor 812 can detect the body direction and rotation angle of the terminal 800, and the gyro sensor 812 can cooperate with the acceleration sensor 811 to collect a 3D motion of the user on the terminal 800. Based on the data collected by the gyro sensor 812, the processor 801 can implement the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 813 may be disposed on a side frame of the terminal 800 and / or a lower layer of the touch display screen 805.
  • the processor 801 can perform left-right hand recognition or quick operation according to the holding signal collected by the pressure sensor 813.
  • the processor 801 controls the operability controls on the UI interface according to the pressure operation of the touch display screen 805 by the user.
  • the operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 814 is used to collect a user's fingerprint, and the processor 801 recognizes the identity of the user based on the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 recognizes the identity of the user based on the collected fingerprint. When identifying the user's identity as a trusted identity, the processor 801 authorizes the user to perform related sensitive operations, such as unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 814 may be provided on the front, back, or side of the terminal 800. When a physical button or a manufacturer's logo is set on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the manufacturer's logo.
  • the optical sensor 815 is used to collect ambient light intensity.
  • the processor 801 may control the display brightness of the touch display screen 805 according to the ambient light intensity collected by the optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 805 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 805 is decreased.
  • the processor 801 may also dynamically adjust the shooting parameters of the camera component 806 according to the ambient light intensity collected by the optical sensor 815.
  • the proximity sensor 816, also called a distance sensor, is usually disposed on the front panel of the terminal 800.
  • the proximity sensor 816 is used to collect the distance between the user and the front of the terminal 800.
  • when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the touch display screen 805 to switch from the bright-screen state to the off-screen state;
  • when the proximity sensor 816 detects that the distance gradually increases, the processor 801 controls the touch display screen 805 to switch from the off-screen state to the bright-screen state.
  • the structure shown in FIG. 8 does not constitute a limitation on the terminal 800; the terminal may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

This application relates to a method, apparatus, and system for processing tasks. The method includes: receiving a processing request message, the message including resource requirement information, a task to be processed, a handler program for processing the task, and indication information indicating that a container is to be used; allocating a container according to the indication information, and allocating resources for processing the task according to the resource requirement information; and processing the task in the container using the resources and the handler program. This application avoids the problem of multiple tasks potentially affecting one another.

Description

Method, apparatus, and system for processing tasks
This application claims priority to Chinese patent application No. 201810701224.3, entitled "Method, apparatus, and system for processing tasks", filed on June 29, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a method, apparatus, and system for processing tasks.
Background
A server cluster framework consists of multiple servers and can provide a large amount of computing resources, which can be used to process tasks. For example, server clusters are currently used to process deep learning tasks.
A server cluster framework includes a management server and multiple processing servers for processing tasks. The management server collects the resource status of each processing server and, based on that status, assigns a task to be processed to a particular processing server, which then processes the task using the resources it includes.
In the course of implementing this application, the inventors found that the above approach has at least the following drawback:
A processing server is often assigned multiple tasks and may process them locally at the same time, and while doing so the tasks may affect one another.
Summary
Embodiments of this application provide a method, apparatus, and system for processing tasks, to avoid the problem of multiple tasks potentially affecting one another. The technical solution is as follows:
In one aspect, this application provides a method for processing tasks, the method including:
receiving a processing request message, the message including resource requirement information, a task to be processed, a handler program for processing the task, and indication information indicating that a container is to be used;
allocating a container according to the indication information, and allocating resources for processing the task according to the resource requirement information;
processing the task in the container using the resources and the handler program.
Optionally, the container includes the storage location of a graphics processing unit (GPU) driver, and the resources include a GPU;
processing the task in the container using the resources and the handler program includes:
invoking the GPU driver according to its storage location;
driving, in the container, the GPU through the GPU driver to run the handler program, and processing the task using the handler program.
Optionally, after processing the task in the container using the resources and the handler program, the method further includes:
sending resource status information to the management server, the resource status information including at least the current amount of free resources.
Optionally, the container further includes the identifier of the GPU, and before driving the GPU through the GPU driver in the container to run the handler program, the method further includes:
mapping the GPU corresponding to the GPU identifier into the container.
Optionally, the task to be processed is a deep learning task.
In another aspect, this application provides a method for processing tasks, the method including:
receiving a resource unit sent by a management device, the resource unit including the amount of free resources in a processing device and the device identifier of the processing device, the resource unit being generated by the management device from resource status information sent by the processing device, the resource status information including the amount of free resources;
obtaining a task to be processed, where the amount of resources included in the resource requirement information corresponding to the task is less than or equal to the amount of free resources;
sending a processing request message to the processing device, the message including the resource requirement information, the task, a handler program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
In another aspect, this application provides an apparatus for processing tasks, the apparatus including:
a receiving module, configured to receive a processing request message, the message including resource requirement information, a task to be processed, a handler program for processing the task, and indication information indicating that a container is to be used;
an allocation module, configured to allocate a container according to the indication information, and to allocate resources for processing the task according to the resource requirement information;
a processing module, configured to process the task in the container using the resources and the handler program.
Optionally, the container includes the storage location of a graphics processing unit (GPU) driver, and the resources include a GPU;
the processing module includes:
an invoking unit, configured to invoke the GPU driver according to its storage location;
a processing unit, configured to drive, in the container, the GPU through the GPU driver to run the handler program, and to process the task using the handler program.
Optionally, the apparatus further includes:
a sending module, configured to send resource status information to the management server, the resource status information including at least the current amount of free resources.
Optionally, the container further includes the identifier of the GPU, and the apparatus further includes:
a mapping module, configured to map the GPU corresponding to the GPU identifier into the container.
Optionally, the task to be processed is a deep learning task.
In another aspect, this application provides an apparatus for processing tasks, the apparatus including:
a receiving module, configured to receive a resource unit sent by a management device, the resource unit including the amount of free resources in a processing device and the device identifier of the processing device, the resource unit being generated by the management device from resource status information sent by the processing device, the resource status information including the amount of free resources;
an obtaining module, configured to obtain a task to be processed, where the amount of resources included in the resource requirement information corresponding to the task is less than or equal to the amount of free resources;
a sending module, configured to send a processing request message to the processing device, the message including the resource requirement information, the task, a handler program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
In another aspect, this application provides a system for processing tasks, the system including the apparatuses described above.
In another aspect, an embodiment of this application provides a non-volatile computer-readable storage medium for storing a computer program, the computer program being loaded by a processor to execute the instructions of any one of the above methods.
In another aspect, an embodiment of this application provides an electronic device, the electronic device including a processor and a memory,
where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement any one of the above methods.
The technical solutions provided by the embodiments of this application may include the following beneficial effects:
The task is processed in a container using the resources and the handler program, so when multiple tasks are processed simultaneously on the same processing device, each task is processed in its own container. The containers thus isolate the tasks from one another and prevent them from affecting each other.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this application.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with this application and, together with the description, serve to explain the principles of this application.
FIG. 1-1 is a schematic diagram of a device cluster framework provided by an embodiment of this application;
FIG. 1-2 is a schematic diagram of a deep learning platform architecture provided by an embodiment of this application;
FIG. 2 is a flowchart of a method for processing tasks provided by an embodiment of this application;
FIG. 3 is a flowchart of another method for processing tasks provided by an embodiment of this application;
FIG. 4 is a flowchart of another method for processing tasks provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of an apparatus for processing tasks provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of another apparatus for processing tasks provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a system for processing tasks provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of a terminal provided by an embodiment of this application.
The above drawings show specific embodiments of this application, which are described in more detail below. These drawings and descriptions are not intended to limit the scope of the concepts of this application in any way, but to illustrate the concepts of this application to those skilled in the art with reference to specific embodiments.
Detailed Description
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of this application as detailed in the appended claims.
Referring to FIG. 1-1, an embodiment of this application provides a device cluster framework, which includes:
a management device, a task device, and multiple processing devices. The management device may establish a network connection with each processing device and with the task device; these connections may be wired or wireless.
Each processing device includes computing resources, which may be at least one of a central processing unit (CPU), a graphics processing unit (GPU), and memory.
Each processing device is configured to obtain its current resource status information, which includes at least the current amount of free resources in the device and may also include the current amount of used resources, and to send the resource status information to the management device.
Optionally, the amount of free resources may include at least one of the number of free CPUs, the number of free GPUs, and the amount of free memory; the amount of used resources includes at least one of the number of used CPUs, the number of used GPUs, and the amount of used memory.
The management device is configured to receive the resource status information sent by a processing device, generate a resource unit from it, the resource unit including the amount of free resources in the processing device and the device identifier of the processing device, and send the resource unit to the task device.
The task device includes at least one task to be processed, at least one handler program for processing tasks, and resource requirement information corresponding to each task. The resource requirement information corresponding to a task may include the amount of resources required to process that task.
Optionally, the amount of resources required to process a task includes at least one of a number of CPUs, a number of GPUs, and an amount of memory.
The task device is configured to receive the resource unit and match it against the resource requirement information corresponding to each task, finding a task whose resource requirement information the resource unit can satisfy; to select, from the at least one handler program, a handler program for processing that task; and to send a processing request message to the management device, the message including the device identifier in the resource unit, the resource requirement information of the task, the task itself, the selected handler program, and indication information indicating that a container is to be used.
That the resource unit can satisfy the resource requirement information of a task means that the amount of free resources in the resource unit is greater than or equal to the amount of resources that the resource requirement information says is needed to process the task.
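This satisfaction condition reduces to a per-resource comparison. The sketch below is illustrative only; the resource kinds (`cpu`, `gpu`, `mem_gb`) and dictionary representation are assumptions, not prescribed by the application:

```python
def satisfies(free, demand):
    """A resource unit satisfies a task's resource requirement information
    when every free quantity is at least the demanded quantity."""
    return all(free.get(kind, 0) >= amount for kind, amount in demand.items())

# A unit with 8 CPUs, 2 GPUs and 64 GB free can host a 4-CPU / 1-GPU task:
print(satisfies({"cpu": 8, "gpu": 2, "mem_gb": 64}, {"cpu": 4, "gpu": 1}))  # True
print(satisfies({"cpu": 8, "gpu": 0}, {"gpu": 1}))                          # False
```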
The management device is configured to receive the processing request message and, according to the device identifier it contains, forward the message to the processing device corresponding to that identifier.
The processing device is configured to receive the processing request message, which includes the task resource requirement information, the task to be processed, the handler program for processing the task, and the indication information indicating that a container is to be used; to allocate a container according to the indication information and allocate computing resources for processing the task according to the task resource requirement information; and to process the task in the container using the computing resources and the handler program.
Optionally, the task may be a deep learning task or the like. Multiple tasks may be processed simultaneously on the same processing device, but each task is processed in its own container, so the containers isolate the tasks from one another and prevent them from affecting each other.
Optionally, the above computing resources may include GPU resources, so that GPU resources can be used when processing deep learning tasks.
Optionally, referring to FIG. 1-2, the above device cluster framework may serve as a deep learning platform, which includes a kube-Mesos node, a Master node, and multiple Slave nodes. The task device may be the kube-Mesos node, the management device may be the Master node, and the processing devices may be the Slave nodes.
Each Slave node establishes a network connection with the Master node, and the Master node establishes a network connection with the kube-Mesos node. The kube-Mesos node is a container orchestration framework that holds the resource requirement information corresponding to multiple deep learning tasks.
For each Slave node: the Slave node sends its current resource status information to the Master node; this information includes at least the node's current amount of free resources and may also include its current amount of used resources. The Master node receives the information, generates a resource unit containing the Slave node's amount of free resources and its device identifier, and sends the unit to the kube-Mesos node. The kube-Mesos node receives the unit, matches it against the resource requirement information corresponding to each deep learning task, finds a matching deep learning task, selects from the at least one handler program (Executor) an Executor for processing that task, and sends a processing request message to the Master node; the message includes the device identifier in the resource unit, the task's resource requirement information, the deep learning task, the selected Executor, and indication information indicating that a container is to be used. The Master node receives the message and forwards it to the Slave node; the Slave node receives the message, allocates a container according to the indication information, allocates computing resources for processing the deep learning task according to the task resource requirement information, and processes the deep learning task in the container using the computing resources and the Executor.
Referring to FIG. 2, this application provides a method for processing tasks, the method including:
Step 201: Receive a processing request message, the message including resource requirement information, a task to be processed, a handler program for processing the task, and indication information indicating that a container is to be used.
Step 202: Allocate a container according to the indication information, and allocate resources for processing the task according to the resource requirement information.
Step 203: Process the task in the container using the resources and the handler program.
In this embodiment of this application, because the task is processed in a container using the resources and the handler program, when multiple tasks are processed simultaneously on the same processing device, each task is processed in its own container; the containers thus isolate the tasks from one another and prevent them from affecting each other.
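The flow of steps 201-203 can be sketched as follows. This is a minimal illustrative model only — the names `ProcessingRequest`, `Container`, and the handler signature are assumptions for the sketch, not the implementation of this application:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ProcessingRequest:
    resource_demand: Dict[str, int]   # resource requirement information
    task: str                         # task to be processed
    handler: Callable                 # program that processes the task
    use_container: bool = True        # indication information: use a container

@dataclass
class Container:
    resources: Dict[str, int]
    def run(self, handler, task):
        # The task sees only this container's resources, which is what
        # isolates it from other tasks on the same device (step 203).
        return handler(task, self.resources)

def handle_request(req: ProcessingRequest):
    # Step 202: allocate a container per the indication information,
    # and resources per the resource requirement information.
    if not req.use_container:
        raise ValueError("this embodiment always processes tasks in a container")
    container = Container(resources=dict(req.resource_demand))
    # Step 203: process the task in the container.
    return container.run(req.handler, req.task)

print(handle_request(ProcessingRequest(
    resource_demand={"cpu": 2, "gpu": 1},
    task="train",
    handler=lambda task, res: f"{task}: used {res['gpu']} GPU(s)",
)))  # train: used 1 GPU(s)
```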
In an optional embodiment of this application, the resource requirement information may indicate whether the amount of free resources satisfies the resource requirement for processing the task, for example, as the result of comparing the amount of free resources with the amount of resources required to process the task.
Optionally, the task may be a deep learning task or the like, and the above computing resources may include GPU resources, so that GPU resources can be used when processing deep learning tasks. Referring to FIG. 3, an embodiment of this application provides a method for processing tasks that can be applied to the device cluster framework shown in FIG. 1-1 or the deep learning platform shown in FIG. 1-2; the task processed by the method may be a deep learning task. The method includes:
Step 301: The processing device obtains its current resource status information and sends it to the management device.
The resource status information includes at least the current amount of free resources in the processing device and may also include the current amount of used resources.
Optionally, the amount of free resources may include at least one of the number of free CPUs, the number of free GPUs, and the amount of free memory; the amount of used resources includes at least one of the number of used CPUs, the number of used GPUs, and the amount of used memory.
The processing device is any processing device in the device cluster framework; it may obtain its current resource status information whenever its resource usage changes and send the information to the management device.
Step 302: The management device receives the resource status information sent by the processing device, generates a resource unit from it, the resource unit containing the processing device's amount of free resources and its device identifier, and sends the resource unit to the task device.
The task device includes at least one task to be processed, at least one handler program for processing tasks, and resource requirement information corresponding to each task. The resource requirement information corresponding to a task may include the amount of resources required to process that task.
Optionally, the task may be a deep learning task, and the tasks in the task device may have been placed there by technicians. The amount of resources required to process a task includes at least one of a number of CPUs, a number of GPUs, and an amount of memory.
Step 303: The task device receives the resource unit and matches it against the resource requirement information corresponding to each task, finding a task whose resource requirement information the resource unit can satisfy.
Specifically, the task device compares the amount of free resources in the resource unit with the amount of resources required by each item of resource requirement information, identifies those whose required amount is less than or equal to the free amount, and selects one of them; the selected resource requirement information corresponds to a task whose requirements the resource unit can satisfy.
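A minimal sketch of this step-303 matching on the task device follows. The field names (`free`, `demand`, `device_id`) are illustrative assumptions; the application does not fix a data format:

```python
def match_task(resource_unit, pending_tasks):
    """Return the first pending task whose required resource amounts are all
    less than or equal to the resource unit's free amounts, else None."""
    free = resource_unit["free"]
    for task in pending_tasks:
        if all(free.get(kind, 0) >= amount
               for kind, amount in task["demand"].items()):
            return task
    return None

unit = {"device_id": "slave-1", "free": {"cpu": 4, "gpu": 1}}
tasks = [
    {"name": "big", "demand": {"cpu": 8, "gpu": 4}},    # does not fit
    {"name": "small", "demand": {"cpu": 2, "gpu": 1}},  # fits
]
print(match_task(unit, tasks)["name"])  # small
```

Selecting the first fitting task is one possible policy; the application only requires that some satisfiable task be selected.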
Step 304: The management device receives the processing request message and, according to the device identifier it contains, forwards the message to the processing device corresponding to that identifier.
The processing request message includes the task resource requirement information, the task to be processed, the handler program for processing the task, and the indication information indicating that a container is to be used.
Step 305: The processing device receives the processing request message, which includes the task resource requirement information, the task to be processed, the handler program for processing the task, and the indication information indicating that a container is to be used.
Optionally, the container may include the storage location of a GPU driver and may also include a GPU identifier. The GPU identifier may be the GPU's number.
Step 306: The processing device allocates a container according to the indication information, allocates computing resources for processing the task according to the task resource requirement information, and processes the task in the container using the computing resources and the handler program.
Optionally, when the computing resources include a GPU, the processing device obtains the GPU identifier from the container, maps the GPU corresponding to that identifier into the container, invokes the GPU driver according to the driver's storage location recorded in the container, drives the GPU through the driver in the container to run the handler program, and uses the handler program to process the task.
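As a concrete illustration of mapping a numbered GPU and the driver's storage location into a container, the sketch below assembles a Docker command line. The device paths and flags follow common Docker/NVIDIA conventions and are an assumption — this application does not prescribe a particular container runtime:

```python
def docker_gpu_run_args(gpu_ids, driver_dir="/usr/local/nvidia", image="task-image"):
    """Build `docker run` arguments that map GPUs (by number) and the
    host GPU driver directory into the container, read-only."""
    args = ["docker", "run", "--rm"]
    for gid in gpu_ids:
        args += ["--device", f"/dev/nvidia{gid}"]        # the GPU device node itself
    args += ["--device", "/dev/nvidiactl",               # control device
             "--device", "/dev/nvidia-uvm"]              # unified-memory device
    args += ["-v", f"{driver_dir}:{driver_dir}:ro"]      # driver storage location
    args.append(image)
    return args

print(" ".join(docker_gpu_run_args([0])))
```

Mounting the driver directory read-only is what lets the handler inside the container invoke the driver via its storage location, as described above.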
Optionally, while processing the task, the processing device may also obtain its current resource status information and send it to the management server; the information includes at least the current amount of free resources.
In this embodiment of this application, because the processing request message the management device sends to the processing device includes indication information indicating that a container is to be used, the processing device can allocate a container for the task according to that indication and process the task in the container, so the task is processed in the container using the resources and the handler program. When multiple tasks are processed simultaneously on the same processing device, each task is processed in its own container, and the containers isolate the tasks from one another and prevent them from affecting each other. In addition, the container includes the storage location of the GPU driver, so the driver can be invoked through that location and the GPU can be brought into the container through the driver, allowing the task to be processed by the GPU inside the container. The task may be a deep learning task, so the GPU can be used to process deep learning tasks.
Referring to FIG. 4, this application provides a method for processing tasks, the method including:
Step 401: Receive a resource unit sent by the management device, the resource unit including the amount of free resources in a processing device and the processing device's device identifier; the resource unit is generated by the management device from resource status information sent by the processing device, the resource status information including the amount of free resources.
Step 402: Obtain a task to be processed, where the amount of resources included in the task's resource requirement information is less than or equal to the amount of free resources.
Step 403: Send a processing request message to the processing device, the message including the resource requirement information, the task, a handler program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
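Steps 401-403 can be sketched on the task-device side as follows. The message fields mirror the elements the application lists; their names and the dictionary format are illustrative assumptions:

```python
def build_processing_request(resource_unit, task, handler):
    """Assemble the processing request message of step 403 after checking
    the step-402 condition (required amount <= free amount)."""
    free, demand = resource_unit["free"], task["demand"]
    if not all(free.get(kind, 0) >= amount for kind, amount in demand.items()):
        raise ValueError("resource unit cannot satisfy the task's demand")
    return {
        "device_id": resource_unit["device_id"],  # routes the message to the device
        "resource_demand": demand,                # resource requirement information
        "task": task["payload"],                  # task to be processed
        "handler": handler,                       # program that processes the task
        "use_container": True,                    # indication: use a container
    }

msg = build_processing_request(
    {"device_id": "slave-1", "free": {"cpu": 4, "gpu": 1}},
    {"payload": "train-job", "demand": {"gpu": 1}},
    "executor-v1",
)
print(msg["device_id"], msg["use_container"])  # slave-1 True
```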
In this embodiment of this application, because the processing request message sent to the processing device includes the resource requirement information, the task, the handler program for processing the task, and the indication information indicating that a container is to be used, the processing device can allocate a container for the task according to the indication information and process the task in the container; the containers isolate the processing device's tasks from one another and prevent them from affecting each other.
The following are apparatus embodiments of this application, which can be used to perform the method embodiments of this application. For details not disclosed in the apparatus embodiments, refer to the method embodiments of this application.
Referring to FIG. 5, this application provides an apparatus 500 for processing tasks, the apparatus 500 including:
a receiving module 501, configured to receive a processing request message, the message including resource requirement information, a task to be processed, a handler program for processing the task, and indication information indicating that a container is to be used;
an allocation module 502, configured to allocate a container according to the indication information, and to allocate resources for processing the task according to the resource requirement information;
a processing module 503, configured to process the task in the container using the resources and the handler program.
Optionally, the container includes the storage location of a graphics processing unit (GPU) driver, and the resources include a GPU;
the processing module 503 includes:
an invoking unit, configured to invoke the GPU driver according to its storage location;
a processing unit, configured to drive, in the container, the GPU through the GPU driver to run the handler program, and to process the task using the handler program.
Optionally, the apparatus 500 further includes:
a sending module, configured to send resource status information to the management server, the resource status information including at least the current amount of free resources.
Optionally, the container further includes the identifier of the GPU, and the apparatus 500 further includes:
a mapping module, configured to map the GPU corresponding to the GPU identifier into the container.
Optionally, the task to be processed is a deep learning task.
In this embodiment of this application, because the task is processed in a container using the resources and the handler program, when multiple tasks are processed simultaneously on the same apparatus, each task is processed in its own container; the containers thus isolate the tasks from one another and prevent them from affecting each other.
Referring to FIG. 6, an embodiment of this application provides an apparatus 600 for processing tasks, the apparatus 600 including:
a receiving module 601, configured to receive a resource unit sent by a management device, the resource unit including the amount of free resources in a processing device and the processing device's device identifier, the resource unit being generated by the management device from resource status information sent by the processing device, the resource status information including the amount of free resources;
an obtaining module 602, configured to obtain a task to be processed, where the amount of resources included in the task's resource requirement information is less than or equal to the amount of free resources;
a sending module 603, configured to send a processing request message to the processing device, the message including the resource requirement information, the task, a handler program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
In this embodiment of this application, because the processing request message sent to the processing device includes the resource requirement information, the task, the handler program for processing the task, and the indication information indicating that a container is to be used, the processing device can allocate a container for the task according to the indication information and process the task in the container; the containers isolate the processing device's tasks from one another and prevent them from affecting each other.
Referring to FIG. 7, an embodiment of the present invention provides a system 700 for processing tasks, the system 700 including the apparatus described in FIG. 5 and the apparatus described in FIG. 6; the apparatus of FIG. 5 may be a processing device 701, and the apparatus of FIG. 6 may be a task device 702.
With regard to the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods and will not be elaborated here.
FIG. 8 shows a structural block diagram of a terminal 800 provided by an exemplary embodiment of the present invention. The terminal 800 may be the processing device, the management device, or the task device in any of the above embodiments. In implementation, the terminal may be a mobile terminal, a notebook computer, a desktop computer, or the like, and the mobile terminal may be a mobile phone, a tablet computer, or the like. The terminal 800 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
Generally, the terminal 800 includes a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 801 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor. The main processor, also called a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 802 may include one or more computer-readable storage media, which may be non-transitory. The memory 802 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 802 stores at least one instruction, which is executed by the processor 801 to implement the method for processing tasks provided by the method embodiments of this application.
In some embodiments, the terminal 800 may optionally further include a peripheral device interface 803 and at least one peripheral device. The processor 801, the memory 802, and the peripheral device interface 803 may be connected by a bus or signal lines. Each peripheral device may be connected to the peripheral device interface 803 by a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 804, a touch display screen 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral device interface 803 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802, and the peripheral device interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 804 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 804 can communicate with other terminals through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The display screen 805 is used to display a UI (User Interface), which may include graphics, text, icons, videos, and any combination thereof. When the display screen 805 is a touch display screen, it also has the ability to collect touch signals on or above its surface; such a touch signal can be input to the processor 801 as a control signal for processing. In this case, the display screen 805 can also provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 805, disposed on the front panel of the terminal 800; in other embodiments, there may be at least two display screens 805, disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display screen 805 may be a flexible display screen disposed on a curved or folded surface of the terminal 800. The display screen 805 may even be set as a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 805 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera component 806 is used to capture images or videos. Optionally, the camera component 806 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera on the back. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be combined for a background-blurring function, and the main camera and the wide-angle camera can be combined for panoramic shooting, VR (Virtual Reality) shooting, or other combined shooting functions. In some embodiments, the camera component 806 may further include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment and converts them into electrical signals that are input to the processor 801 for processing or to the radio frequency circuit 804 for voice communication. For stereo capture or noise reduction, there may be multiple microphones disposed at different parts of the terminal 800. The microphone may also be an array microphone or an omnidirectional microphone. The speaker converts electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker; when it is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 807 may further include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 to implement navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the US GPS (Global Positioning System), China's BeiDou system, or Russia's Galileo system.
The power supply 809 supplies power to the components of the terminal 800. The power supply 809 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 809 includes a rechargeable battery, the battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the terminal 800 further includes one or more sensors 810, including but not limited to an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
The acceleration sensor 811 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established for the terminal 800; for example, it can detect the components of gravitational acceleration on the three axes. The processor 801 can control the touch display screen 805 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 can also be used to collect motion data for games or of the user.
The gyroscope sensor 812 can detect the body orientation and rotation angle of the terminal 800 and can cooperate with the acceleration sensor 811 to collect the user's 3D actions on the terminal 800. Based on the data collected by the gyroscope sensor 812, the processor 801 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 813 may be disposed on the side frame of the terminal 800 and/or under the touch display screen 805. When disposed on the side frame, it can detect the user's grip signal on the terminal 800, and the processor 801 can perform left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 813. When disposed under the touch display screen 805, the processor 801 controls the operable controls on the UI according to the user's pressure operations on the touch display screen 805; the operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 814 collects the user's fingerprint, and either the processor 801 identifies the user from the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the user from the collected fingerprint. When the user's identity is recognized as trusted, the processor 801 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings. The fingerprint sensor 814 may be disposed on the front, back, or side of the terminal 800; when a physical button or a manufacturer's logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the manufacturer's logo.
The optical sensor 815 collects the ambient light intensity. In one embodiment, the processor 801 can control the display brightness of the touch display screen 805 according to the ambient light intensity collected by the optical sensor 815: when the ambient light is strong, the display brightness is increased; when it is weak, the display brightness is decreased. In another embodiment, the processor 801 can also dynamically adjust the shooting parameters of the camera component 806 according to the ambient light intensity collected by the optical sensor 815.
The proximity sensor 816, also called a distance sensor, is usually disposed on the front panel of the terminal 800 and collects the distance between the user and the front of the terminal 800. In one embodiment, when the proximity sensor 816 detects that this distance gradually decreases, the processor 801 controls the touch display screen 805 to switch from the bright-screen state to the off-screen state; when the proximity sensor 816 detects that the distance gradually increases, the processor 801 controls the touch display screen 805 to switch from the off-screen state back to the bright-screen state.
Those skilled in the art will understand that the structure shown in FIG. 8 does not constitute a limitation on the terminal 800, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
Other embodiments of this application will readily occur to those skilled in the art upon consideration of the specification and practice of the application disclosed here. This application is intended to cover any variations, uses, or adaptations that follow its general principles and include common knowledge or customary technical means in the art not disclosed in this application. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of this application indicated by the following claims.
It should be understood that this application is not limited to the precise structures described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of this application is limited only by the appended claims.

Claims (14)

  1. A method for processing tasks, the method comprising:
    receiving a processing request message, the processing request message comprising resource requirement information, a task to be processed, a handler program for processing the task, and indication information indicating that a container is to be used;
    allocating a container according to the indication information, and allocating resources for processing the task according to the resource requirement information;
    processing the task in the container using the resources and the handler program.
  2. The method of claim 1, wherein the container comprises the storage location of a graphics processing unit (GPU) driver, and the resources comprise a GPU;
    the processing the task in the container using the resources and the handler program comprises:
    invoking the GPU driver according to the storage location of the GPU driver;
    driving, in the container, the GPU through the GPU driver to run the handler program, and processing the task using the handler program.
  3. The method of claim 1 or 2, wherein after the processing the task in the container using the resources and the handler program, the method further comprises:
    sending resource status information to the management server, the resource status information comprising at least the current amount of free resources.
  4. The method of claim 2, wherein the container further comprises the identifier of the GPU, and before the driving, in the container, the GPU through the GPU driver to run the handler program, the method further comprises:
    mapping the GPU corresponding to the GPU identifier into the container.
  5. The method of claim 1, 2, or 4, wherein the task to be processed is a deep learning task.
  6. A method for processing tasks, the method comprising:
    receiving a resource unit sent by a management device, the resource unit comprising the amount of free resources in a processing device and the device identifier of the processing device, the resource unit being generated by the management device from resource status information sent by the processing device, the resource status information comprising the amount of free resources;
    obtaining a task to be processed, wherein the amount of resources included in the resource requirement information corresponding to the task is less than or equal to the amount of free resources;
    sending a processing request message to the processing device, the processing request message comprising the resource requirement information, the task, a handler program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
  7. An apparatus for processing tasks, the apparatus comprising:
    a receiving module, configured to receive a processing request message, the processing request message comprising resource requirement information, a task to be processed, a handler program for processing the task, and indication information indicating that a container is to be used;
    an allocation module, configured to allocate a container according to the indication information, and to allocate resources for processing the task according to the resource requirement information;
    a processing module, configured to process the task in the container using the resources and the handler program.
  8. The apparatus of claim 7, wherein the container comprises the storage location of a graphics processing unit (GPU) driver, and the resources comprise a GPU;
    the processing module comprises:
    an invoking unit, configured to invoke the GPU driver according to the storage location of the GPU driver;
    a processing unit, configured to drive, in the container, the GPU through the GPU driver to run the handler program, and to process the task using the handler program.
  9. The apparatus of claim 7 or 8, wherein the apparatus further comprises:
    a sending module, configured to send resource status information to the management server, the resource status information comprising at least the current amount of free resources.
  10. The apparatus of claim 8, wherein the container further comprises the identifier of the GPU, and the apparatus further comprises:
    a mapping module, configured to map the GPU corresponding to the GPU identifier into the container.
  11. The apparatus of claim 7, 8, or 10, wherein the task to be processed is a deep learning task.
  12. An apparatus for processing tasks, the apparatus comprising:
    a receiving module, configured to receive a resource unit sent by a management device, the resource unit comprising the amount of free resources in a processing device and the device identifier of the processing device, the resource unit being generated by the management device from resource status information sent by the processing device, the resource status information comprising the amount of free resources;
    an obtaining module, configured to obtain a task to be processed, wherein the amount of resources included in the resource requirement information corresponding to the task is less than or equal to the amount of free resources;
    a sending module, configured to send a processing request message to the processing device, the processing request message comprising the resource requirement information, the task, a handler program for processing the task, and indication information indicating that a container is to be used, so that the processing device processes the task.
  13. A computer-readable storage medium for storing a computer program, the computer program being loaded by a processor to execute the instructions of the method of any one of claims 1 to 6.
  14. An electronic device, the electronic device comprising a processor and a memory,
    wherein the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method of any one of claims 1 to 6.
PCT/CN2019/093391 2018-06-29 2019-06-27 Method, apparatus, and system for processing tasks WO2020001564A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810701224.3A CN110659127A (zh) 2018-06-29 2018-06-29 Method, apparatus, and system for processing tasks
CN201810701224.3 2018-06-29

Publications (1)

Publication Number Publication Date
WO2020001564A1 true WO2020001564A1 (zh) 2020-01-02

Family

ID=68985837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093391 WO2020001564A1 (zh) 2018-06-29 2019-06-27 一种处理任务的方法、装置及系统

Country Status (2)

Country Link
CN (1) CN110659127A (zh)
WO (1) WO2020001564A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112130983A (zh) * 2020-10-27 2020-12-25 上海商汤临港智能科技有限公司 任务处理方法、装置、设备、系统及存储介质
CN113656143A (zh) * 2021-08-16 2021-11-16 深圳市瑞驰信息技术有限公司 一种实现安卓容器直通显卡的方法及系统
CN114124405A (zh) * 2020-07-29 2022-03-01 腾讯科技(深圳)有限公司 业务处理方法、系统、计算机设备及计算机可读存储介质
CN114462938A (zh) * 2022-01-20 2022-05-10 北京声智科技有限公司 资源异常的处理方法、装置、设备及存储介质
CN116755779A (zh) * 2023-08-18 2023-09-15 腾讯科技(深圳)有限公司 循环间隔的确定方法、装置、设备、存储介质及芯片

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000473A (zh) * 2020-08-12 2020-11-27 中国银联股份有限公司 深度学习模型的分布式训练方法以及装置
CN112559182B (zh) * 2020-12-16 2024-04-09 北京百度网讯科技有限公司 资源分配方法、装置、设备及存储介质
CN112866404B (zh) * 2021-02-03 2023-01-24 视若飞信息科技(上海)有限公司 一种半云系统及执行方法
CN113867970A (zh) * 2021-12-03 2021-12-31 苏州浪潮智能科技有限公司 一种容器加速装置、方法、设备及计算机可读存储介质
WO2023160629A1 (zh) * 2022-02-25 2023-08-31 本源量子计算科技(合肥)股份有限公司 量子控制系统的处理装置、方法、量子计算机、介质和电子装置
CN115470915B (zh) * 2022-03-16 2024-04-05 本源量子计算科技(合肥)股份有限公司 量子计算机的服务器系统及其实现方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886455A (zh) * 2017-02-23 2017-06-23 北京图森未来科技有限公司 一种实现用户隔离的方法及系统
CN107343045A (zh) * 2017-07-04 2017-11-10 北京百度网讯科技有限公司 云计算系统及用于控制服务器的云计算方法和装置
CN107450961A (zh) * 2017-09-22 2017-12-08 济南浚达信息技术有限公司 一种基于Docker容器的分布式深度学习系统及其搭建方法、工作方法
CN107783818A (zh) * 2017-10-13 2018-03-09 北京百度网讯科技有限公司 深度学习任务处理方法、装置、设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10146592B2 (en) * 2015-09-18 2018-12-04 Salesforce.Com, Inc. Managing resource allocation in a stream processing framework
CN106708622B (zh) * 2016-07-18 2020-06-02 腾讯科技(深圳)有限公司 集群资源处理方法和系统、资源处理集群
CN107247629A (zh) * 2017-07-04 2017-10-13 北京百度网讯科技有限公司 云计算系统及用于控制服务器的云计算方法和装置
CN107343000A (zh) * 2017-07-04 2017-11-10 北京百度网讯科技有限公司 用于处理任务的方法和装置
CN107682206B (zh) * 2017-11-02 2021-02-19 北京中电普华信息技术有限公司 基于微服务的业务流程管理系统的部署方法及系统
CN108062246B (zh) * 2018-01-25 2019-06-14 北京百度网讯科技有限公司 用于深度学习框架的资源调度方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886455A (zh) * 2017-02-23 2017-06-23 北京图森未来科技有限公司 一种实现用户隔离的方法及系统
CN107343045A (zh) * 2017-07-04 2017-11-10 北京百度网讯科技有限公司 云计算系统及用于控制服务器的云计算方法和装置
CN107450961A (zh) * 2017-09-22 2017-12-08 济南浚达信息技术有限公司 一种基于Docker容器的分布式深度学习系统及其搭建方法、工作方法
CN107783818A (zh) * 2017-10-13 2018-03-09 北京百度网讯科技有限公司 深度学习任务处理方法、装置、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAO, YI ET AL.: "A Deep Learning Container Cloud Study for GPU Resources", JOURNAL OF COMMUNICATION UNIVERSITY OF CHINA ( SCIENCE AND TECHNOLOGY, vol. 24, no. 6, 25 December 2017 (2017-12-25), pages 16 - 20 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114124405A (zh) * 2020-07-29 2022-03-01 腾讯科技(深圳)有限公司 业务处理方法、系统、计算机设备及计算机可读存储介质
CN114124405B (zh) * 2020-07-29 2023-06-09 腾讯科技(深圳)有限公司 业务处理方法、系统、计算机设备及计算机可读存储介质
CN112130983A (zh) * 2020-10-27 2020-12-25 上海商汤临港智能科技有限公司 任务处理方法、装置、设备、系统及存储介质
CN113656143A (zh) * 2021-08-16 2021-11-16 深圳市瑞驰信息技术有限公司 一种实现安卓容器直通显卡的方法及系统
CN113656143B (zh) * 2021-08-16 2024-05-31 深圳市瑞驰信息技术有限公司 一种实现安卓容器直通显卡的方法及系统
CN114462938A (zh) * 2022-01-20 2022-05-10 北京声智科技有限公司 资源异常的处理方法、装置、设备及存储介质
CN116755779A (zh) * 2023-08-18 2023-09-15 腾讯科技(深圳)有限公司 循环间隔的确定方法、装置、设备、存储介质及芯片
CN116755779B (zh) * 2023-08-18 2023-12-05 腾讯科技(深圳)有限公司 循环间隔的确定方法、装置、设备、存储介质及芯片

Also Published As

Publication number Publication date
CN110659127A (zh) 2020-01-07

Similar Documents

Publication Publication Date Title
WO2020001564A1 (zh) 一种处理任务的方法、装置及系统
CN111225042B (zh) 数据传输的方法、装置、计算机设备以及存储介质
CN109976570B (zh) 数据传输方法、装置及显示装置
CN111614549B (zh) 交互处理方法、装置、计算机设备及存储介质
CN108762881B (zh) 界面绘制方法、装置、终端及存储介质
WO2021018297A1 (zh) 一种基于p2p的服务通信方法、装置及系统
WO2021120976A2 (zh) 负载均衡控制方法及服务器
CN109697113B (zh) 请求重试的方法、装置、设备及可读存储介质
CN109861966B (zh) 处理状态事件的方法、装置、终端及存储介质
WO2019205735A1 (zh) 数据传输方法、装置、显示屏及显示装置
CN110704324B (zh) 应用调试方法、装置及存储介质
CN110673944B (zh) 执行任务的方法和装置
CN111159604A (zh) 图片资源加载方法及装置
CN110290191B (zh) 资源转移结果处理方法、装置、服务器、终端及存储介质
CN111914985B (zh) 深度学习网络模型的配置方法、装置及存储介质
CN112181915B (zh) 执行业务的方法、装置、终端和存储介质
CN110086814B (zh) 一种数据获取的方法、装置及存储介质
CN111580892B (zh) 一种业务组件调用的方法、装置、终端和存储介质
CN113448692B (zh) 分布式图计算的方法、装置、设备及存储介质
CN113949692A (zh) 地址分配方法、装置、电子设备及计算机可读存储介质
WO2019214694A1 (zh) 存储数据的方法、读取数据的方法、装置及系统
CN112860365A (zh) 内容显示方法、装置、电子设备和可读存储介质
CN112260845A (zh) 进行数据传输加速的方法和装置
CN111222124B (zh) 使用权限分配的方法、装置、设备以及存储介质
CN115348262B (zh) 基于跨链协议的跨链操作执行方法及网络系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19825225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19825225

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.07.2021)