WO2023201947A1 - Methods, systems, and storage media for task dispatch - Google Patents

Info

Publication number
WO2023201947A1
Authority
WO
WIPO (PCT)
Prior art keywords
task
processing modules
determining
priority
priorities
Application number
PCT/CN2022/114395
Other languages
French (fr)
Inventor
Jun Yin
Peng Huang
Xin CEN
Xiang Yu
Li Wu
Original Assignee
Zhejiang Dahua Technology Co., Ltd.
Application filed by Zhejiang Dahua Technology Co., Ltd. filed Critical Zhejiang Dahua Technology Co., Ltd.
Publication of WO2023201947A1 publication Critical patent/WO2023201947A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44505 Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/4451 User profiles; Roaming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/901 Indexing; Data structures therefor; Storage structures
    • G06F 16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5011 Pool
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5018 Thread allocation

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to methods, systems, and storage media for task dispatch.
  • task dispatch or task execution in various scenarios is becoming increasingly common. For example, a plurality of tasks are arranged in a queue and are dispatched according to an order of the queue.
  • a dispatch mode based only on the order of the queue may be relatively inefficient and may even cause jamming.
  • the tasks are generally dispatched or executed according to a preset fixed thread strategy, which may also result in relatively low efficiency. Therefore, it is desirable to provide improved systems and methods for task dispatch, thereby improving processing efficiency.
  • An aspect of the present disclosure relates to a system for task dispatch.
  • the system may include at least one storage medium including a set of instructions and at least one processor in communication with the at least one storage medium.
  • the at least one processor is directed to cause the system to perform operations including: obtaining configuration information of a plurality of processing modules in a pipeline; determining priorities of the plurality of processing modules based on the configuration information; and dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
  • the method may include obtaining configuration information of a plurality of processing modules in a pipeline; determining priorities of the plurality of processing modules based on the configuration information; and dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
  • yet another aspect of the present disclosure relates to a non-transitory computer readable medium.
  • the non-transitory computer readable medium may include executable instructions that, when executed by at least one processor, direct the at least one processor to perform a method.
  • the method may include obtaining configuration information of a plurality of processing modules in a pipeline; determining priorities of the plurality of processing modules based on the configuration information; and dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
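  • As a minimal illustration of the three operations recited above, the following Python sketch chains them together. The dict-based configuration format and all function names here are assumptions for illustration, not taken from the present disclosure.

    # Hypothetical sketch: obtain configuration -> determine priorities -> dispatch.
    def obtain_configuration(pipeline_config):
        # Configuration information here is reduced to the processing order of the modules.
        return pipeline_config["processing_order"]

    def determine_priorities(processing_order):
        # Later (downstream) modules receive larger priorities.
        return {module: rank for rank, module in enumerate(processing_order)}

    def dispatch(priorities):
        # Dispatch module tasks in descending priority order.
        return sorted(priorities, key=priorities.get, reverse=True)

    config = {"processing_order": ["A", "B", "C", "D"]}
    print(dispatch(determine_priorities(obtain_configuration(config))))
    # ['D', 'C', 'B', 'A']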
  • FIG. 1 is a schematic diagram illustrating an exemplary task dispatch system according to some embodiments of the present disclosure
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure
  • FIG. 3 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure
  • FIG. 4 is a flowchart illustrating an exemplary process for task dispatch according to some embodiments of the present disclosure
  • FIG. 5 is a schematic diagram illustrating exemplary configuration information according to some embodiments of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating an exemplary process for task dispatch according to a pipeline according to some embodiments of the present disclosure
  • FIGs. 7A-7D are schematic diagrams illustrating an exemplary process for updating a task queue according to some embodiments of the present disclosure
  • FIG. 8 is a flowchart illustrating an exemplary process for creating a thread pool strategy according to some embodiments of the present disclosure
  • FIG. 9 is a schematic diagram illustrating an exemplary first thread pool strategy according to some embodiments of the present disclosure.
  • FIGs. 10A and 10B are schematic diagrams illustrating an exemplary second thread pool strategy according to some embodiments of the present disclosure.
  • the flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts may not be implemented in order. Conversely, the operations may be implemented in an inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts. One or more operations may be removed from the flowcharts.
  • An aspect of the present disclosure relates to a method and a system for task dispatch.
  • the system may obtain configuration information of a plurality of processing modules in a pipeline and determine priorities of the plurality of processing modules based on the configuration information. Further, the system may dispatch tasks corresponding to the plurality of processing modules respectively based on the priorities. For example, the system may compare a priority of a current task with a priority of a previous task in a task queue and update the task queue accordingly. In addition, the system may create different thread pool strategies at least based on a system computing capacity and dispatch the tasks according to different thread pool strategies under different situations.
  • priorities of a plurality of processing modules are determined and corresponding tasks are added into a task queue based on the priorities. Accordingly, the tasks can be dispatched according to the task queue, which can improve the overall work efficiency and can ensure enough resources for a final processing module for outputting data results.
  • different thread pool strategies are created according to the system computing capacity of the devices to make reasonable dispatching of tasks, which can make full use of the hardware resources of different devices and can improve the processing efficiency.
  • FIG. 1 is a schematic diagram illustrating an exemplary task dispatch system according to some embodiments of the present disclosure.
  • the task dispatch system 100 may be applied in various scenarios, such as monitoring, image processing, data processing, etc.
  • the task dispatch system 100 may include a processing device 110, a network 120, an acquisition device 130, a terminal device 140, and a storage device 150.
  • the processing device 110, the acquisition device 130, the terminal device 140, and/or the storage device 150 may be connected to and/or communicate with each other via a wireless connection, a wired connection, or a combination thereof.
  • the processing device 110 may process data and/or information obtained from the acquisition device 130, the terminal device 140, and/or the storage device 150.
  • the processing device 110 may obtain image data from the acquisition device 130 and/or the storage device 150, and process the image data.
  • the processing device 110 may obtain configuration information of a plurality of processing modules in a pipeline, determine priorities of the plurality of processing modules based on the configuration information, and dispatch tasks corresponding to the processing modules based on the priorities.
  • the processing device 110 may update a task queue by comparing a priority of a current task with a priority of a previous task in the task queue.
  • the processing device 110 may determine a thread pool strategy based on system computing capacity, and dispatch the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy.
  • the processing device 110 may be a single server or a server group.
  • the server group may be centralized or distributed.
  • the processing device 110 may be local or remote.
  • the processing device 110 may access information and/or data from the acquisition device 130, the terminal device 140, and/or the storage device 150 via the network 120.
  • the processing device 110 may be directly connected to the acquisition device 130, the terminal device 140, and/or the storage device 150 to access information and/or data.
  • the processing device 110 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or a combination thereof.
  • the network 120 may include any suitable network that may facilitate the exchange of information and/or data for the task dispatch system 100.
  • one or more components (e.g., the processing device 110, the acquisition device 130, the terminal device 140, the storage device 150) of the task dispatch system 100 may communicate information and/or data with one or more other components of the task dispatch system 100 via the network 120.
  • the processing device 110 may obtain image data from the acquisition device 130 via the network 120.
  • the processing device 110 may obtain user instruction(s) from the terminal device 140 via the network 120.
  • the network 120 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN)), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
  • the network 120 may include one or more network access points.
  • the network 120 may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more components of the task dispatch system 100 may be connected to the network 120 to exchange data and/or information.
  • the acquisition device 130 may be used to acquire data and/or information to be processed.
  • the data and/or information to be processed may include images, videos, audios, or the like, or a combination thereof.
  • the acquisition device 130 may include an image acquisition device (e.g., a camera, a video camera, a drone) for acquiring images and/or videos.
  • the acquisition device 130 may include an audio acquisition device (e.g., a recorder, a microphone, a pickup) for acquiring audios.
  • the data and/or information acquired by the acquisition device 130 may be transmitted to the storage device 150 via the network 120, or be transmitted to the processing device 110 to be processed through the network 120.
  • the terminal device 140 may be connected to and/or communicate with the processing device 110, the acquisition device 130, and/or the storage device 150.
  • the terminal device 140 may obtain a processed image from the processing device 110.
  • the terminal device 140 may obtain image data acquired via the acquisition device 130 and transmit the image data to the processing device 110 to be processed.
  • the terminal device 140 may include a mobile device 140-1, a tablet computer 140-2, ..., a smart wearable device 140-3, or the like, or any combination thereof.
  • the terminal device 140 may include an input device, an output device, etc.
  • the input device may include a keyboard, a touch screen, a speech input device, an eye-tracking input device, a brain monitoring device, or the like, or a combination thereof.
  • the output device may include a display, a speaker, a printer, or the like, or a combination thereof.
  • the terminal device 140 may be part of the processing device 110.
  • the storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the processing device 110, the acquisition device 130, and/or the terminal device 140. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 110 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device 150 may include a mass storage, removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • Exemplary volatile read-and-write memory may include a random access memory (RAM).
  • Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc.
  • Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc.
  • the storage device 150 may be implemented on a cloud platform. In some embodiments, the storage device 150 may be integrated into the processing device 110 and/or the acquisition device 130.
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure.
  • one or more components of the task dispatch system 100 may be implemented on the computing device 200.
  • the processing device 110 may be implemented on the computing device 200 and configured to perform functions of the processing device 110 disclosed in this disclosure.
  • the computing device 200 may include COM ports 250 connected to and from a network connected thereto to facilitate data communications.
  • the computing device 200 may also include a processor 220, in the form of one or more processors, for executing program instructions.
  • the processor 220 may include interface circuits and processing circuits therein.
  • the interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
  • the processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
  • the computing device 200 may include a program storage and a data storage of different forms, for example, a disk 270, a read-only memory (ROM) 230, or a random access memory (RAM) 240, for storing various data files to be processed and/or transmitted by the computing device 200.
  • the computing device 200 may also include program instructions stored in the ROM 230, the RAM 240, and/or another type of non-transitory storage medium to be executed by the processor 220.
  • the methods and/or processes of the present disclosure may be implemented as the program instructions.
  • the computing device 200 may also include an I/O component 260, supporting input/output between the computing device 200 and other components.
  • the computing device 200 may also receive programming and data via network communications.
  • processor 220 is described in the computing device 200.
  • the computing device 200 in the present disclosure may also include a plurality of processors; thus, operations and/or method steps that are performed by one processor 220 as described in the present disclosure may also be jointly or separately performed by the plurality of CPUs/processors.
  • if the processor 220 of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two different processors jointly or separately in the computing device 200 (e.g., the first processor executes step A and the second processor executes step B, or the first and second processors jointly execute steps A and B).
  • FIG. 3 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
  • the processing device 110 may include an obtaining module 310, a determination module 320, and a dispatch module 330.
  • the obtaining module 310 may be configured to obtain configuration information of a plurality of processing modules in a pipeline.
  • the determination module 320 may be configured to determine priorities of the plurality of processing modules based on the configuration information. In some embodiments, the determination module 320 may determine the priorities of the plurality of processing modules according to an order of the plurality of processing modules in the pipeline.
  • the dispatch module 330 may be configured to dispatch tasks corresponding to the plurality of processing modules respectively based on the priorities. In some embodiments, the dispatch module 330 may update a task queue by comparing a priority of a current task with a priority of a previous task in the task queue.
  • the dispatch module 330 may also determine a thread pool strategy based at least in part on a system computing capacity. Further, the dispatch module 330 may dispatch the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy.
  • more descriptions may be found elsewhere in the present disclosure (e.g., FIGs. 4-10B and relevant descriptions thereof).
  • the modules in the processing device 110 may be connected to or communicate with each other via a wired connection or a wireless connection.
  • the wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof.
  • the wireless connection may include a Local Area Network (LAN) , a Wide Area Network (WAN) , a Bluetooth, a ZigBee, a Near Field Communication (NFC) , or the like, or any combination thereof.
  • two or more of the modules may be combined as a single module, and any one of the modules may be divided into two or more units.
  • the obtaining module 310 and the determination module 320 may be combined as a single module which may both obtain the configuration information of the plurality of processing modules and determine the priorities of the processing modules based on the configuration information.
  • the processing device 110 may include one or more additional modules.
  • the processing device 110 may also include a transmission module (not shown) configured to transmit signals (e.g., electrical signals, electromagnetic signals) to one or more components (e.g., the acquisition device 130, the storage device 150) of the task dispatch system 100.
  • the processing device 110 may include a storage module (not shown) used to store information and/or data (e.g., the priorities) associated with task dispatch.
  • FIG. 4 is a flowchart illustrating an exemplary process for task dispatch according to some embodiments of the present disclosure.
  • process 400 may be executed by the task dispatch system 100.
  • the process 400 may be implemented as a set of instructions stored in the storage device (e.g., the storage device 150) .
  • the processing device 110 (e.g., the processor 220 of the computing device 200 and/or one or more modules illustrated in FIG. 3) may execute the set of instructions and may accordingly be directed to perform the process 400.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 400 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of process 400 illustrated in FIG. 4 and described below is not intended to be limiting.
  • the processing device 110 may obtain configuration information of a plurality of processing modules in a pipeline.
  • the pipeline may refer to a process for processing a plurality of objects through a plurality of processing modules in turn.
  • the processing modules in the pipeline may refer to modules (e.g., algorithm modules) configured to perform corresponding operations on the objects.
  • different application scenarios may correspond to different processing modules.
  • operations needed to be executed on images may include a pre-processing operation, an image segmentation operation, a feature extraction operation, an object detection operation, an object attribute recognition operation, an object tracking operation, etc.
  • the processing modules may include a pre-processing module, an image segmentation module, a feature extraction module, an object detection module, an object attribute recognition module, an object tracking module, etc.
  • different processing modules may use different algorithms or processing methods to perform corresponding functions or operations.
  • the image segmentation module may use a Full Convolutional Network (FCN) , a Semantic Segmentation (SegNet) , a DeepMask, or any other image segmentation algorithms.
  • the object detection module may use a Region Convolutional Neural Network (R-CNN), a Spatial Pyramid Pooling Network (SPP-Net), a You Only Look Once (YOLO), a Single Shot MultiBox Detector (SSD), or other object detection algorithms.
  • the object attribute recognition module may use a Scale-Invariant Feature Transform (SIFT) or other recognition algorithms to perform object attribute recognition.
  • the object tracking module may use a Model in the Loop Tracker, a Kernel Correlation Filter Tracker, a Generic Object Tracking Using Regression Networks Tracker (GOTURN Tracker) , or other algorithms to perform object tracking.
  • the pipeline may be expressed in a line form and the processing modules may be represented by nodes thereof.
  • a pipeline 500 may include four nodes A, B, C, and D corresponding to four processing modules A, B, C, and D respectively.
  • a pipeline 620 includes three nodes A, B, and C corresponding to three processing modules A, B, and C respectively.
  • the processing modules A, B, and C may perform corresponding operations on object 1, object 2, and object 3 in turn.
  • the three processing modules A, B, and C may perform corresponding operations on object 1 in turn.
  • when the processing module A completes the processing of object 1, the processing module A may immediately process the next object 2; when the processing module B completes the processing of object 1, the processing module B may immediately process the next object 2, and so on.
  • the pipeline may be represented by a directed acyclic graph (DAG) .
  • a starting point of the graph represents an input flow of the entire pipeline and an endpoint of the graph represents an output flow of the entire pipeline.
  • an input flow of a first processing module in the pipeline may be regarded as the input flow of the entire pipeline, and an output flow of a last processing module in the pipeline may be regarded as the output flow of the entire pipeline.
  • for example, the input flow “input_A” of node A corresponding to the first processing module in the pipeline 500 may be regarded as the input flow of the pipeline 500, and the output flow “output_res” of node D corresponding to the last processing module may be regarded as the output flow of the pipeline 500.
  • the configuration information of the plurality of processing modules may include a processing order of the plurality of processing modules.
  • the processing order may reflect a result dependency relationship among the plurality of processing modules.
  • each processing module may include an input and an output (also referred to as a “result” or a “processing result”), and an output (or result) of a previous module is used as an input of a next adjacent module.
  • the result dependency relationship among the plurality of processing modules may reflect that an output situation of a previous processing module is required for an execution of a next adjacent processing module.
  • the pipeline 500 includes four processing modules A, B, C, and D, and its processing order is A → B → C → D.
  • the result dependency relationship may reflect that an execution of the processing module B requires a participation of an output result of the processing module A, an execution of the processing module C requires a participation of an output result of the processing module B, and an execution of the processing module D requires a participation of an output result of the processing module C.
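  • A possible encoding of such a pipeline and its result dependency relationship, sketched in Python below; the adjacency-dict format and the helper name upstream_of are hypothetical, not defined in the present disclosure.

    # Pipeline 500 as a directed acyclic graph: an edge A -> B means that
    # module B requires the output result of module A.
    pipeline_500 = {
        "A": ["B"],
        "B": ["C"],
        "C": ["D"],
        "D": [],  # last module; its output is the pipeline output "output_res"
    }

    def upstream_of(pipeline, module):
        # Modules whose output results the given module depends on.
        return [m for m, successors in pipeline.items() if module in successors]

    print(upstream_of(pipeline_500, "C"))  # ['B'] -- C requires B's result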
  • the configuration information may also include roles of the processing modules, importance degrees of the processing modules, relevancy degrees among the plurality of processing modules throughout an entire processing process, or the like, or a combination thereof.
  • the “role” may reflect a type of the processing module (e.g., a starting module, an intermediate module, an end module), corresponding operations (e.g., a pre-processing operation, an image segmentation operation, a feature extraction operation, an object detection operation), or the like, or any combination thereof.
  • the “importance degree” may reflect an impact degree of the processing module on a final processing result.
  • the importance degree of the image segmentation module may be larger than the importance degree of the pre-processing module.
  • the “relevancy degree” may reflect a correlation relationship among the plurality of processing modules.
  • the processing device 110 may retrieve the configuration information corresponding to the processing modules from the storage device 150. In some embodiments, the processing device 110 may obtain the configuration information corresponding to the processing modules from the terminal device 140.
  • the processing device 110 may determine priorities of the plurality of processing modules based on the configuration information.
  • the processing device 110 may determine the priorities of the plurality of processing modules based on the processing order of the plurality of processing modules. Accordingly, the priorities are related to the result dependency relationship among the plurality of modules.
  • the processing device 110 may determine a data flowing direction according to the processing order of the plurality of processing modules.
  • the data flowing direction may refer to a transmission direction of processing results of the processing modules.
  • the processing results of the upstream processing modules may be transmitted to the downstream processing modules.
  • a processing result of node A (or processing module A) may be transmitted to node B (or processing module B)
  • a processing result of node B (or processing module B) may be transmitted to node C (or processing module C) , and so on.
  • the processing device 110 may determine the priorities of the plurality of processing modules according to the data flowing direction, with priorities increasing from upstream to downstream.
  • the priority of a downstream processing module is larger than the priority of an upstream processing module.
  • the priority of node B is larger than the priority of node A (or processing module A) .
  • the priority of node C is larger than the priority of node B (or processing module B) .
  • the processing device 110 may also determine the priorities of the plurality of processing modules based on the roles of the processing modules, the importance degrees of the processing modules, and/or the relevancy degrees among the plurality of processing modules throughout the entire processing process. For example, the priority of an intermediate module may be larger than the priority of a starting module. As another example, the priority of a processing module (e.g., the image segmentation module) used for a specific object may be larger than the priority of a processing module (e.g., the pre-processing module) used for an entire image. As another example, the priority of a processing module with a larger importance degree may be larger than the priority of a processing module with a smaller importance degree. As another example, the priorities of multiple processing modules with a larger relevancy degree may be the same.
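  • A sketch of this priority assignment, assuming the processing order is given as a dependency graph. graphlib.TopologicalSorter from the Python standard library walks the modules from upstream to downstream, so the rank in that order can serve as the priority; weighting by role or importance degree, mentioned above, is omitted here.

    from graphlib import TopologicalSorter

    # Keys are modules, values are their upstream (predecessor) modules.
    dependencies = {"A": set(), "B": {"A"}, "C": {"B"}, "D": {"C"}}

    # static_order() yields modules from upstream to downstream, so the
    # rank grows along the data flowing direction.
    priorities = {module: rank
                  for rank, module in enumerate(TopologicalSorter(dependencies).static_order())}
    print(priorities)  # {'A': 0, 'B': 1, 'C': 2, 'D': 3} -- D has the largest priority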
  • processing device 110 may dispatch tasks corresponding to the plurality of processing modules respectively based on the priorities.
  • the processing device 110 may dispatch and process each processing module (or a corresponding operation performed by each processing module) as a task.
  • the processing device 110 may perform task dispatch in the form of a task queue.
  • the processing device 110 may add each processing module (or the operation performed by each processing module) as a task into a task queue in turn based on the priority of the processing module, and dispatch the plurality of the tasks in the task queue.
  • the processing device 110 may determine a priority order of the processing modules A, B, and C as C > B > A according to the processing order of the processing modules A, B, and C corresponding to the nodes A, B, and C in the pipeline 620. Further, the processing device 110 may add the operation performed by each processing module as a task into a task queue based on the priority order.
  • a task of the node A processing the object 1 may be added into the task queue 630 as task 1
  • a task of the node B processing the object 1 may be added into the task queue 630 as task 2
  • a task of the node C processing the object 1 may be added into the task queue 630 as task 3
  • a task of the node A processing the object 2 may be added into the task queue 630 as task 4
  • a task of the node B processing the object 2 may be added into the task queue 630 as task 5
  • a task of the node C processing the object 2 may be added into the task queue 630 as task 6
  • a task of the node A processing the object 3 may be added into the task queue 630 as task 7
  • a task of the node B processing the object 3 may be added into the task queue 630 as task 8
  • a task of the node C processing the object 3 may be added into the task queue 630 as task 9.
  • the plurality of processing modules process a plurality of image frames in turn, and “the priority of a downstream processing module is larger than that of an upstream processing module” means that the priority of the downstream processing module processing a previous image frame is larger than the priority of the upstream module processing a current image frame.
  • the priority of the task of the node B processing the object 1 is larger than the priority of the task of the node A processing the object 2; the priority of the task of the node C processing the object 1 is larger than the priority of the task of the node A processing the object 2, and so on.
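  • The ordering of tasks 1-9 in the task queue 630 can be reproduced by sorting (object, node) pairs, as in the following sketch. The tuple-based sort key is an assumption that happens to satisfy the rule above, since a downstream node working on a previous object always sorts before an upstream node working on a later object.

    from itertools import product

    nodes = ["A", "B", "C"]   # upstream -> downstream order in pipeline 620
    objects = [1, 2, 3]       # objects arrive in this order

    # One task per (object, node) pair; sorting by (object, node) yields the
    # same order as task queue 630: earlier objects first, and within an
    # object, upstream nodes before downstream nodes.
    for number, (obj, node) in enumerate(sorted(product(objects, nodes)), start=1):
        print(f"task {number}: node {node} processes object {obj}")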
  • the processing device 110 may update the task queue based on the priority of the new task. For example, the processing device 110 may update the task queue by comparing a priority of a current task with a priority of a previous task in the task queue.
  • the previous task may refer to a task that has been added to the task queue before.
  • the processing device 110 may compare the priority of the current task with a priority of a previously adjacent task (also referred to as a “last task” ) in the task queue; and in response to determining that the priority of the current task is larger than the priority of the previously adjacent task, the processing device 110 may update the task queue by swapping the current task and the previously adjacent task.
  • the processing device 110 may update the task queue by comparing the priority of the current task with the priority of a first task (also referred to as a “top task” ) in the task queue, the priority of any task in the task queue, the priorities of any two adjacent tasks, etc. More descriptions regarding updating the task queue may be found elsewhere in the present disclosure, for example, FIGs. 7A-7D and relevant descriptions thereof.
  • the processing device 110 may determine a thread pool strategy based at least in part on a system computing capacity, and dispatch the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy.
  • the thread pool may include a plurality of threads used to perform the corresponding tasks respectively. More descriptions regarding the thread pool strategy may be found elsewhere in the present disclosure, for example, FIGs. 8-10 and relevant descriptions thereof.
  • priorities of a plurality of processing modules are determined and corresponding tasks are added into a task queue based on the priorities. Accordingly, the tasks can be dispatched according to the task queue, which can improve the overall work efficiency and can ensure enough resources for a final processing module for outputting data results.
  • one or more other optional operations may be added elsewhere in the process 400.
  • the processing device 110 may store information and/or data (e.g., the priorities) associated with the task dispatch in a storage device (e.g., the storage device 150) disclosed elsewhere in the present disclosure.
  • the processing device 110 may transmit the information and/or data to the terminal device 140.
  • operation 410 and operation 420 may be combined into a single operation in which the processing device 110 may obtain the configuration information of the plurality of processing modules and determine the priorities of the processing modules based on the configuration information.
  • FIGs. 7A-7D are schematic diagrams illustrating an exemplary process for updating a task queue according to some embodiments of the present disclosure.
  • the processing device 110 may update the task queue by comparing the priority of a current task with the priority of a first task (or a top task) in a task queue.
  • the task queue 720 includes task 1, task 2, ..., task M, task M+1, ..., and task N, wherein task 1 is the first task of the task queue 720.
  • the processing device 110 may determine whether the priority of the current task 710 is larger than the priority of task 1. If the priority of the current task 710 is larger than the priority of task 1, the processing device 110 may place the current task 710 before task 1 to obtain an updated task queue 730.
  • otherwise, the processing device 110 may further compare the priority of the current task 710 with the priorities of other tasks (e.g., any task in the task queue 720, the last task in the task queue 720) until determining an appropriate position for the current task 710, and update the task queue 720 accordingly.
  • the “appropriate position” refers to a position where the priority of the current task 710 is smaller than the priority of a previous adjacent task and larger than the priority of a next adjacent task.
  • the processing device 110 may update the task queue by comparing the priority of the current task with the priority of any task in the task queue.
  • the processing device 110 may update the task queue by determining whether the priority of the current task 710 is larger than the priority of any task m. If the priority of the current task 710 is larger than the priority of task m, the processing device 110 may swap the current task 710 and task m. Further, the processing device 110 may compare the priority of the current task 710 with the priorities of other tasks (e.g., task m-1, task m-2) located before task m until determining the appropriate position of the current task 710, and update the task queue 720 accordingly.
  • the processing device 110 may further compare the priority of the current task 710 with the priorities of other tasks (e.g., task m+1, task m+2) located after task m until determining the appropriate position of the current task 710, and update the task queue 720 accordingly.
  • the processing device 110 may update the task queue by comparing the priority of the current task with the priorities of any two adjacent tasks in the task queue.
  • task m and task m+1 are any two adjacent tasks in the task queue 720. Accordingly, the processing device 110 may determine whether the priority of the current task 710 is located between the priorities of task m and task m+1. If the priority of the current task 710 is between the priorities of task m and task m+1, the processing device 110 may place the current task 710 between task m and task m+1 to obtain an updated task queue 740.
  • the processing device 110 may further compare the priority of the current task 710 with the priorities of other tasks in the task queue 720 until determining the appropriate position of the current task 710, and update the task queue 720 accordingly.
  • the processing device 110 may update the task queue by comparing the priority of the current task with the priority of the last task (also referred to as a “previously adjacent task” ) in the task queue.
  • task n is the last task of the task queue 720. Accordingly, the processing device 110 may determine whether the priority of the current task 710 is smaller than the priority of task n. If the priority of the current task 710 is smaller than the priority of task n, the processing device 110 may add the current task 710 after task n directly. If the priority of the current task 710 is larger than the priority of task n, the processing device 110 may swap the current task 710 and task n. Further, the processing device 110 may compare the priority of the current task with the priorities of other tasks (e.g., a previous task of the last task) in the task queue 720 until determining the appropriate position of the current task 710, and update the task queue 720 accordingly.
  • the priority of the current task is compared with those of tasks in the task queue and then the task queue is updated based on the comparing results, which can ensure that a task with a relatively high priority can be dispatched or executed first, thereby improving processing efficiency.
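  • A sketch of the swap-based variant described above (repeatedly comparing the current task with its previously adjacent task); the list-of-tuples queue representation is an assumption for illustration.

    def enqueue(task_queue, current_task):
        # Each entry is (priority, name); a larger number means a larger priority.
        # Append the current task, then swap it forward while its priority is
        # larger than that of the previously adjacent task.
        task_queue.append(current_task)
        i = len(task_queue) - 1
        while i > 0 and task_queue[i][0] > task_queue[i - 1][0]:
            task_queue[i - 1], task_queue[i] = task_queue[i], task_queue[i - 1]
            i -= 1

    queue = [(9, "task 1"), (7, "task 2"), (4, "task 3")]
    enqueue(queue, (8, "current task"))
    print(queue)
    # [(9, 'task 1'), (8, 'current task'), (7, 'task 2'), (4, 'task 3')]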
  • FIG. 8 is a flowchart illustrating an exemplary process for creating a thread pool strategy according to some embodiments of the present disclosure.
  • process 800 may be executed by the task dispatch system 100.
  • the process 800 may be implemented as a set of instructions stored in the storage device (e.g., the storage device 150) .
  • the processing device 110 (e.g., the processor 220 of the computing device 200 and/or one or more modules illustrated in FIG. 3) may execute the set of instructions and may accordingly be directed to perform the process 800.
  • the process 800 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of process 800 illustrated in FIG. 8 and described below is not intended to be limiting.
  • processing device 110 may determine whether the system computing capacity is larger than a preset threshold.
  • the system computing capacity may be reflected by a count of computing kernels of the system. For example, if the count of the computing kernels is relatively small, the corresponding system computing capacity is relatively weak; whereas, if the count of the computing kernels is relatively large, the corresponding system computing capacity is relatively strong.
  • the preset threshold may be a default value of the system, an experience value, an artificial pre-set value, or the like, or any combination thereof. In some embodiments, the preset threshold may be set according to actual needs, which is not limited by the present disclosure. For example, the preset threshold may be 32 kernels.
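  • A sketch of this decision, using os.cpu_count() as a stand-in for the count of computing kernels and the 32-kernel example threshold; the function name is illustrative.

    import os

    KERNEL_THRESHOLD = 32  # example threshold from the description

    def choose_thread_pool_strategy():
        kernels = os.cpu_count() or 1
        # Strong capacity -> second (multi-pool) strategy; otherwise first strategy.
        return "second (multi-pool)" if kernels > KERNEL_THRESHOLD else "first (single pool)"

    print(choose_thread_pool_strategy())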
  • in response to determining that the system computing capacity is smaller than or equal to the preset threshold, the processing device 110 (e.g., the dispatch module 330) may determine a first thread pool strategy including a plurality of threads in a single thread pool. For example, if the count of computing kernels of the system is smaller than or equal to 32, the processing device 110 may determine the first thread pool strategy.
  • the first thread pool strategy may include a single thread pool including a plurality of threads. That is to say, when the system computing capacity is relatively weak (e.g., on an embedded device), a single thread pool may be created so that the device resources can meet processing needs as much as possible.
  • the unified management of creation, dispatch execution, and destruction of the threads in a single thread pool can save system time and improve overall program stability.
  • the processing device 110 may perform the task dispatch based on idle conditions of the threads in the thread pool. Specifically, when a plurality of threads in the single thread pool are all idle, the processing device 110 may randomly dispatch tasks to threads based on the priority order of the tasks in the task queue. Further, when some of the plurality of threads complete current tasks, the processing device 110 may obtain other tasks in the task queue and dispatch them to corresponding threads based on the priority order until all tasks in the task queue have been dispatched.
  • a task queue 910 includes 6 tasks and a priority order of the 6 tasks is “task 1 > task 2 > task 3 > task 4 > task 5 > task 6”.
  • the single thread pool 920 includes thread A, thread B, and thread C.
  • the processing device 110 may firstly dispatch task 1 to thread A, task 2 to thread B, and task 3 to thread C. If the thread A completes task 1, the processing device 110 may dispatch task 4 to thread A, and then if the thread C completes task 3, the processing device 110 may dispatch task 5 to thread C, and so on.
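  • A sketch of the first strategy as applied to FIG. 9: three worker threads share one priority queue, and whichever thread becomes idle takes the highest-priority remaining task. queue.PriorityQueue pops its smallest entry, so priorities are negated; the helper name and task encoding are assumptions.

    import threading
    from queue import Empty, PriorityQueue

    def run_single_pool(tasks, thread_names=("thread A", "thread B", "thread C")):
        pending = PriorityQueue()
        for priority, name in tasks:
            pending.put((-priority, name))  # negate so larger priorities pop first

        def worker():
            while True:
                try:
                    _, name = pending.get_nowait()
                except Empty:
                    return  # no tasks left; the thread exits
                print(f"{threading.current_thread().name} executes {name}")

        threads = [threading.Thread(target=worker, name=n) for n in thread_names]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    # Priorities follow the order task 1 > task 2 > ... > task 6.
    run_single_pool([(6, "task 1"), (5, "task 2"), (4, "task 3"),
                     (3, "task 4"), (2, "task 5"), (1, "task 6")])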
  • in response to determining that the system computing capacity is larger than the preset threshold, the processing device 110 (e.g., the dispatch module 330) may determine a second thread pool strategy including a plurality of thread pools, each of which includes a plurality of threads. For example, if the count of computing kernels of the system is larger than 32, the processing device 110 may determine the second thread pool strategy.
  • the second thread pool strategy includes a plurality of thread pools, and each thread pool includes a plurality of threads.
  • the system computing capacity is relatively strong (e.g., a server device)
  • a multi-thread pool strategy may be created to maximize the use of the hardware resources of the device and speed up the analysis and processing efficiency.
  • a count of thread pools in the second thread pool strategy may be smaller than or equal to the count of computing kernels corresponding to the system computing capacity. Accordingly, it can be ensured that each thread pool can monopolize the resources of a single computing kernel. In other words, the thread pools and the computing kernels have affinity and match each other, which ensures that tasks in a same thread pool use a same computing kernel; there is thus no need to switch computing kernels during processing, and resource loss can be reduced.
  • the processing device 110 may perform the task dispatch based on the priority order of the tasks in the task queue.
  • a task queue 1010 includes 8 tasks and a priority order of the 8 tasks in the task queue 1010 is “task 1 > task 2 > task 3 > task 4 > task 5 > task 6 > task 7 > task 8”.
  • the plurality of thread pools 1020 include thread pool 1, thread pool 2, and thread pool 3. Each thread pool includes two threads.
  • the processing device 110 may firstly dispatch task 1 to thread 1 in thread pool 1, task 2 to thread 3 in thread pool 2, and task 3 to thread 5 in thread pool 3. Then the processing device 110 may dispatch task 4 to thread 2 in thread pool 1, task 5 to thread 4 in thread pool 2, and task 6 to thread 6 in thread pool 3. Further, the processing device 110 may dispatch task 7 to a queuing area of thread pool 1 and task 8 to a queuing area of thread pool 2.
  • the processing device 110 may dispatch task 1 and task 2 to thread 1 and thread 2 respectively in thread pool 1, task 3 and task 4 to thread 3 and thread 4 respectively in thread pool 2, and task 5 and task 6 to thread 5 and thread 6 respectively in thread pool 3. Further, the processing device 110 may dispatch task 7 and task 8 to a queuing area of thread pool 1.
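  • A sketch of the FIG. 10A-style dispatch: tasks are dealt out in priority order, round-robin across several small pools (one ThreadPoolExecutor per pool here), so that a task submitted to a full pool waits in that pool's queuing area. Pinning each pool to a computing kernel is platform-specific (e.g., os.sched_setaffinity on Linux) and is left out of this sketch.

    from concurrent.futures import ThreadPoolExecutor

    def run_multi_pool(tasks, num_pools=3, threads_per_pool=2):
        pools = [ThreadPoolExecutor(max_workers=threads_per_pool)
                 for _ in range(num_pools)]
        # Deal tasks out in descending priority, round-robin across the pools.
        ordered = sorted(tasks, key=lambda t: t[0], reverse=True)
        for index, (_, name) in enumerate(ordered):
            pool_id = index % num_pools
            pools[pool_id].submit(print, f"thread pool {pool_id + 1} executes {name}")
        for pool in pools:
            pool.shutdown(wait=True)

    # Priorities follow the order task 1 > task 2 > ... > task 8; with three
    # two-thread pools, task 7 queues in pool 1 and task 8 in pool 2, as in FIG. 10A.
    run_multi_pool([(8, "task 1"), (7, "task 2"), (6, "task 3"), (5, "task 4"),
                    (4, "task 5"), (3, "task 6"), (2, "task 7"), (1, "task 8")])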
  • the processing device 110 may equally dispatch the tasks in the task queue to the plurality of threads in the plurality of thread pools. In some embodiments, the processing device 110 may randomly dispatch the tasks in the task queue to the plurality of threads in the plurality of thread pools.
  • the processing device 110 may dispatch a task to be executed to the thread based on the priority of the task to be executed. Specifically, when a specific thread completes its corresponding task, the processing device 110 may steal and dispatch a task that has not been executed to the current thread from tasks corresponding to other threads. For example, as shown in FIG. 10A, when thread 5 in thread pool 3 completes its corresponding task, the processing device 110 may steal and dispatch task 7 that has not been executed to thread 5 from the tasks corresponding to thread pool 1. When thread 6 in thread pool 3 completes its corresponding task, the processing device 110 may steal and dispatch task 8 that has not been executed to thread 6 from the tasks corresponding to thread pool 2.
  • the processing device 110 may steal tasks from other threads in the thread pool where the current thread is located. For example, as shown in FIG. 10A, when thread 2 in thread pool 1 completes its corresponding task, the processing device 110 may steal and dispatch task 7 to thread 2 from the tasks corresponding to thread 1.
  • the processing device 110 may also steal tasks from other thread pools. For example, as shown in FIG. 10B, when thread 3 in thread pool 2 completes its corresponding task, the processing device 110 may steal and dispatch task 7 that has not been executed from the tasks corresponding to thread 1 in thread pool 1. When thread 5 in thread pool 3 completes its corresponding task, the processing device 110 may steal and dispatch task 8 that has not been executed from the tasks corresponding to thread 2 in thread pool 1.
  • the processing device 110 may steal tasks from other threads or other thread pools from a queue tail in a queuing area. In some embodiments, when the processing device 110 steals tasks from other threads or other thread pools, the processing device 110 may preferentially steal a task in the queue tail with a larger priority. For example, as shown in FIG. 10A or FIG. 10B, the task queue tail may include task 7 and task 8, and the priority of task 7 is larger than that of task 8. Accordingly, when thread 3 in thread pool 2 completes its corresponding task, the processing device 110 may steal and dispatch task 7 that has not been executed to thread 3.
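  • A sketch of the stealing rule only, not of a full scheduler: each pool keeps a queuing area, and an idle thread steals from the tail of the victim queue whose tail task has the larger priority. The deque-per-pool representation and function name are assumptions.

    from collections import deque

    # Queuing areas after the dispatch of FIG. 10A: (priority, name) entries.
    queuing_areas = {
        "thread pool 1": deque([(2, "task 7")]),
        "thread pool 2": deque([(1, "task 8")]),
    }

    def steal(idle_thread):
        # Consider only non-empty queues; prefer the tail task with the larger priority.
        candidates = [(area[-1][0], pool, area)
                      for pool, area in queuing_areas.items() if area]
        if not candidates:
            return None
        _, victim, area = max(candidates, key=lambda c: c[0])
        priority, name = area.pop()  # steal from the queue tail
        print(f"{idle_thread} steals {name} from {victim}")
        return priority, name

    steal("thread 5")  # steals task 7 from thread pool 1
    steal("thread 6")  # steals task 8 from thread pool 2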
  • different thread pool strategies are created according to the system computing capacity of the devices to make reasonable dispatching of tasks, which can make full use of the hardware resources of different devices and can improve the processing efficiency.
  • the processing device 110 may create a suitable thread pool strategy based on the complexity of the processing plan (e.g., a count of processing modules in the pipeline (or a count of tasks in the task queue) , execution complexities of the processing modules) . For example, if the processing plan is relatively complicated, the processing device 110 may create a multi-thread pool strategy; if the processing plan is relatively simple, the processing device 110 may create a single thread pool strategy.
  • if the count of the tasks in the task queue is relatively small and/or the execution complexities of the processing modules are relatively low, the processing device 110 may create a single thread pool strategy; if the count of the tasks in the task queue is relatively large and/or the execution complexities of the processing modules are relatively high, the processing device 110 may create a multi-thread pool strategy.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or context including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in an implementation combining software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
  • a computer-readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
  • a computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
  • LAN local area network
  • WAN wide area network
  • SaaS Software as a Service
  • the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about, ” “approximate, ” or “substantially. ”
  • “about, ” “approximate, ” or “substantially” may indicate ⁇ 20%variation of the value it describes, unless otherwise stated.
  • the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment.
  • the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Processing (AREA)
  • Stored Programmes (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the present disclosure provide methods, systems, and storage media for task dispatch. The method includes obtaining configuration information of a plurality of processing modules in a pipeline; determining priorities of the plurality of processing modules based on the configuration information; and dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.

Description

METHODS, SYSTEMS, AND STORAGE MEDIA FOR TASK DISPATCH
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority of Chinese Patent Application No. 202210412394.6 filed on April 19, 2022, the entire contents of which are hereby incorporated by reference.
TECHNICAL FIELD
The present disclosure relates to the field of computer technology, and in particular, to methods, systems, and storage media for task dispatch.
BACKGROUND
With the development of computer technology and communication technology, task dispatch or task execution in various scenarios becomes increasingly common. For example, a plurality of tasks are arranged in a queue and are dispatched according to an order of the queue. However, since different tasks may have different dependencies, a dispatch mode based only on the order of the queue may be relatively inefficient and may even cause jamming. In addition, the tasks are generally dispatched or executed according to a preset fixed thread strategy, which may also result in relatively low efficiency. Therefore, it is desirable to provide improved systems and methods for task dispatch, thereby improving processing efficiency.
SUMMARY
An aspect of the present disclosure relates to a system for task dispatch. The system may include at least one storage medium including a set of instructions and at least one processor in communication with the at least one storage medium. When executing the set of instructions, the at least one processor is directed to cause the system to perform operations including: obtaining configuration information of a plurality of processing modules in a pipeline; determining priorities of the plurality of processing modules based on the configuration information; and dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
Another aspect of the present disclosure relates to a method for task dispatch. The method may include obtaining configuration information of a plurality of processing modules in a pipeline; determining priorities of the plurality of processing modules based on the configuration information; and dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
A yet another aspect of the present disclosure relates to a non-transitory computer readable medium. The non-transitory computer readable medium may include executable instructions that, when executed by at least one processor, direct the at least one processor to perform a method. The method may include obtaining configuration information of a plurality of processing modules in a pipeline; determining priorities of the plurality of processing modules based on the configuration information; and dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
Additional features will be set forth in part in the description which follows, and in part will  become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities, and combinations set forth in the detailed examples discussed below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a schematic diagram illustrating an exemplary task dispatch system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure;
FIG. 3 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure;
FIG. 4 is a flowchart illustrating an exemplary process for task dispatch according to some embodiments of the present disclosure;
FIG. 5 is a schematic diagram illustrating exemplary configuration information according to some embodiments of the present disclosure;
FIG. 6 is a schematic diagram illustrating an exemplary process for task dispatch according to a pipeline according to some embodiments of the present disclosure;
FIGs. 7A-7D are schematic diagrams illustrating an exemplary process for updating a task queue according to some embodiments of the present disclosure;
FIG. 8 is a flowchart illustrating an exemplary process for creating a thread pool strategy according to some embodiments of the present disclosure;
FIG. 9 is a schematic diagram illustrating an exemplary first thread pool strategy according to some embodiments of the present disclosure; and
FIGs. 10A and 10B are schematic diagrams illustrating an exemplary second thread pool strategy according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
The following description is presented to enable any person skilled in the art to make and use the present disclosure and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a, ” “an, ” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises, ” “comprising, ” “includes, ” and/or “including” when used in this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features, and characteristics of the present disclosure, as well as the methods of operations and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawing (s) , all of which form part of this specification. It is to be expressly understood, however, that the drawing (s) is for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale.
The flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts may be implemented out of order; for example, the operations may be implemented in an inverted order or simultaneously. Moreover, one or more other operations may be added to the flowcharts. One or more operations may be removed from the flowcharts.
An aspect of the present disclosure relates to a method and a system for task dispatch. The system may obtain configuration information of a plurality of processing modules in a pipeline and determine priorities of the plurality of processing modules based on the configuration information. Further, the system may dispatch tasks corresponding to the plurality of processing modules respectively based on the priorities. For example, the system may compare a priority of a current task with a priority of a previous task in a task queue and update the task queue accordingly. In addition, the system may create different thread pool strategies at least based on a system computing capacity and dispatch the tasks according to different thread pool strategies under different situations.
According to the embodiments of the present disclosure, priorities of a plurality of processing modules are determined and corresponding tasks are added into a task queue based on the priorities. Accordingly, the tasks can be dispatched according to the task queue, which can improve the overall work efficiency and can ensure enough resources for a final processing module for outputting data results. In addition, different thread pool strategies are created according to the system computing capacities of different devices to dispatch tasks reasonably, which can make full use of the hardware resources of different devices and can improve the processing efficiency.
FIG. 1 is a schematic diagram illustrating an exemplary task dispatch system according to some embodiments of the present disclosure. In some embodiments, the task dispatch system 100 may be applied in various scenarios, such as monitoring, image processing, data processing, etc.
As illustrated in FIG. 1, the task dispatch system 100 may include a processing device 110, a network 120, an acquisition device 130, a terminal device 140, and a storage device 150. In some  embodiments, the processing device 110, the acquisition device 130, the terminal device 140, and/or the storage device 150 may be connected to and/or communicate with each other via a wireless connection, a wired connection, or a combination thereof.
The processing device 110 may process data and/or information obtained from the acquisition device 130, the terminal device 140, and/or the storage device 150. For example, the processing device 110 may obtain image data from the acquisition device 130 and/or the storage device 150, and process the image data. Specifically, the processing device 110 may obtain configuration information of a plurality of processing modules in a pipeline, determine priorities of the plurality of processing modules based on the configuration information, and dispatch tasks corresponding to the processing modules based on the priorities. For example, the processing device 110 may update a task queue by comparing a priority of a current task with a priority of a previous task in the task queue. As another example, the processing device 110 may determine a thread pool strategy based on system computing capacity, and dispatch the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy.
In some embodiments, the processing device 110 may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device 110 may be local or remote. For example, the processing device 110 may access information and/or data from the acquisition device 130, the terminal device 140, and/or the storage device 150 via the network 120. As another example, the processing device 110 may be directly connected to the acquisition device 130, the terminal device 140, and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 110 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or a combination thereof.
The network 120 may include any suitable network that may facilitate the exchange of information and/or data for the task dispatch system 100. In some embodiments, one or more components (e.g., the processing device 110, the acquisition device 130, the terminal device 140, the storage device 150) of the task dispatch system 100 may communicate information and/or data with one or more other components of the task dispatch system 100 via the network 120. For example, the processing device 110 may obtain image data from the acquisition device 130 via the network 120. As another example, the processing device 110 may obtain user instruction (s) from the terminal device 140 via the network 120.
In some embodiments, the network 120 may be and/or include a public network (e.g., the Internet) , a private network (e.g., a local area network (LAN) , a wide area network (WAN) ) , a wired network (e.g., an Ethernet network) , a wireless network (e.g., an 802.11 network, a Wi-Fi network) , a cellular network (e.g., a Long Term Evolution (LTE) network) , a frame relay network, a virtual private network (VPN) , a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more components of the task dispatch system 100 may be connected to the network 120 to exchange data and/or information.
The acquisition device 130 may be used to acquire data and/or information to be processed. In some embodiments, the data and/or information to be processed may include images, videos, audios, or the like, or a combination thereof. In some embodiments, the acquisition device 130 may include an image acquisition device (e.g., a camera, a video camera, a drone) for acquiring images and/or videos. In some embodiments, the acquisition device 130 may include an audio acquisition device (e.g., a recorder, a microphone, a pickup) for acquiring audios. In some embodiments, the data and/or information acquired by the acquisition device 130 may be transmitted to the storage device 150 via the network 120, or be transmitted to the processing device 110 to be processed through the network 120.
The terminal device 140 may be connected to and/or communicate with the processing device 110, the acquisition device 130, and/or the storage device 150. For example, the terminal device 140 may obtain a processed image from the processing device 110. As another example, the terminal device 140 may obtain image data acquired via the acquisition device 130 and transmit the image data to the processing device 110 to be processed. In some embodiments, the terminal device 140 may include a mobile device 140-1, a tablet computer 140-2, …, a smart wearable device 140-3, or the like, or any combination thereof. In some embodiments, the terminal device 140 may include an input device, an output device, etc. The input device may include a keyboard, a touch screen, a speech input device, an eye-tracking input device, a brain monitoring device, or the like, or a combination thereof. The output device may include a display, a speaker, a printer, or the like, or a combination thereof. In some embodiments, the terminal device 140 may be part of the processing device 110.
The storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the processing device 110, the acquisition device 130, and/or the terminal device 140. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 110 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device 150 may include a mass storage, removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM) . Exemplary RAM may include a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) , etc. Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc. In some embodiments, the storage device 150 may be implemented on a cloud platform. In some embodiments, the storage device 150 may be integrated into the processing device 110 and/or the acquisition device 130.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, a plurality of variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure. In some embodiments, one or more components of the task dispatch system 100 may be implemented on the computing device 200. For example, the processing device 110 may be implemented on the computing device 200 and configured to perform functions of the processing device 110 disclosed in this disclosure.
The computing device 200, for example, may include COM ports 250 connected to and from a network connected thereto to facilitate data communications. The computing device 200 may also include a processor 220, in the form of one or more processors, for executing program instructions. For example, the processor 220 may include interface circuits and processing circuits therein. The interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process. The processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
The computing device 200 may include a program storage and a data storage of different forms, for example, a disk 270, a read-only memory (ROM) 230, or a random access memory (RAM) 240, for storing various data files to be processed and/or transmitted by the computing device 200. The computing device 200 may also include program instructions stored in the ROM 230, the RAM 240, and/or another type of non-transitory storage medium to be executed by the processor 220. The methods and/or processes of the present disclosure may be implemented as the program instructions. The computing device 200 may also include an I/O component 260, supporting input/output between the computing device 200 and other components. The computing device 200 may also receive programming and data via network communications.
Merely for illustration, only one processor 220 is described in the computing device 200. However, it should be noted that the computing device 200 in the present disclosure may also include a plurality of processors, and thus operations and/or method steps that are described in the present disclosure as being performed by one processor 220 may also be jointly or separately performed by the plurality of processors. For example, if in the present disclosure the processor 220 of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two different processors jointly or separately in the computing device 200 (e.g., the first processor executes step A and the second processor executes step B, or the first and second processors jointly execute steps A and B) .
FIG. 3 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure. In some embodiments, as shown in FIG. 3, the processing  device 110 may include an obtaining module 310, a determination module 320, and a dispatch module 330.
The obtaining module 310 may be configured to obtain configuration information of a plurality of processing modules in a pipeline.
The determination module 320 may be configured to determine priorities of the plurality of processing modules based on the configuration information. In some embodiments, the determination module 320 may determine the priorities of the plurality of processing modules according to an order of the plurality of processing modules in the pipeline.
The dispatch module 330 may be configured to dispatch tasks corresponding to the plurality of processing modules respectively based on the priorities. In some embodiments, the dispatch module 330 may update a task queue by comparing a priority of a current task with a priority of a previous task in the task queue.
In some embodiments, the dispatch module 330 may also determine a thread pool strategy based at least in part on a system computing capacity. Further, the dispatch module 330 may dispatch the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy.
More descriptions may be found elsewhere in the present disclosure (e.g., FIGs. 4-10B and relevant descriptions thereof) .
The modules in the processing device 110 may be connected to or communicate with each other via a wired connection or a wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof. The wireless connection may include a Local Area Network (LAN) , a Wide Area Network (WAN) , a Bluetooth, a ZigBee, a Near Field Communication (NFC) , or the like, or any combination thereof.
In some embodiments, two or more of the modules may be combined as a single module, and any one of the modules may be divided into two or more units. For example, the obtaining module 310 and the determination module 320 may be combined as a single module which may both obtain the configuration information of the plurality of processing modules and determine the priorities of the processing modules based on the configuration information.
In some embodiments, the processing device 110 may include one or more additional modules. For example, the processing device 110 may also include a transmission module (not shown) configured to transmit signals (e.g., electrical signals, electromagnetic signals) to one or more components (e.g., the acquisition device 130, the storage device 150) of the task dispatch system 100. As another example, the processing device 110 may include a storage module (not shown) used to store information and/or data (e.g., the priorities) associated with task dispatch.
FIG. 4 is a flowchart illustrating an exemplary process for task dispatch according to some embodiments of the present disclosure. In some embodiments, process 400 may be executed by the task dispatch system 100. For example, the process 400 may be implemented as a set of instructions stored in the storage device (e.g., the storage device 150) . In some embodiments, the processing device 110 (e.g., the processor 220 of the computing device 200 and/or one or more modules illustrated  in FIG. 3) may execute the set of instructions and may accordingly be directed to perform the process 400. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 400 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of process 400 illustrated in FIG. 4 and described below is not intended to be limiting.
In 410, the processing device 110 (e.g., the obtaining module 310) may obtain configuration information of a plurality of processing modules in a pipeline.
The pipeline may refer to a process for processing a plurality of objects through a plurality of processing modules in turn. The processing modules in the pipeline may refer to modules (e.g., algorithm modules) configured to perform corresponding operations on the objects.
In some embodiments, different application scenarios may correspond to different processing modules. Taking the scenario of “image processing” as an example, operations needed to be executed on images may include a pre-processing operation, an image segmentation operation, a feature extraction operation, an object detection operation, an object attribute recognition operation, an object tracking operation, etc. Accordingly, the processing modules may include a pre-processing module, an image segmentation module, a feature extraction module, an object detection module, an object attribute recognition module, an object tracking module, etc.
In some embodiments, different processing modules may use different algorithms or processing methods to perform corresponding functions or operations. For example, the image segmentation module may use a Fully Convolutional Network (FCN) , a Semantic Segmentation network (SegNet) , a DeepMask, or any other image segmentation algorithm. As another example, the object detection module may use a Region Convolutional Neural Network (R-CNN) , a Spatial Pyramid Pooling Network (SPP-Net) , You Only Look Once (YOLO) , a Single Shot MultiBox Detector (SSD) , or other object detection algorithms. As another example, the object attribute recognition module may use a Scale-Invariant Feature Transform (SIFT) or other recognition algorithms to perform object attribute recognition. As a further example, the object tracking module may use a Model in the Loop Tracker, a Kernel Correlation Filter Tracker, a Generic Object Tracking Using Regression Networks Tracker (GOTURN Tracker) , or other algorithms to perform object tracking.
In some embodiments, the pipeline may be expressed in a line form and the processing modules may be represented by nodes thereof. Merely by way of example, as shown in FIG. 5, a pipeline 500 may include four nodes A, B, C, and D corresponding to four processing modules A, B, C, and D respectively. Further, as shown in FIG. 6, a pipeline 620 includes three nodes A, B, and C corresponding to three processing modules A, B, and C respectively. When the pipeline 620 processes a plurality of objects 610 (e.g., object 1, object 2, object 3) , the processing modules A, B, and C may perform corresponding operations on object 1, object 2, and object 3 in turn. For example, the three processing modules A, B, and C may perform corresponding operations on object 1 in turn. When the processing module A completes the processing of object 1, the processing module A may immediately process the next object 2; when the processing module B completes the processing of object 1, the processing module B may immediately process the next object 2, and so on.
In some embodiments, the pipeline may be represented by a directed acyclic graph (DAG) . A starting point of the graph represents an input flow of the entire pipeline and an endpoint of the graph represents an output flow of the entire pipeline. In some embodiments, an input flow of a first processing module in the pipeline may be regarded as the input flow of the entire pipeline, and an output flow of a last processing module in the pipeline may be regarded as the output flow of the entire pipeline. Merely by way of example, as shown in FIG. 5, the input flow “input_A” of node A corresponding to the first processing module in the pipeline 500 may be regarded as the input flow of the pipeline 500, and the output flow “output_res” of node D corresponding to the last processing module may be regarded as the output flow of the pipeline 500.
In some embodiments, the configuration information of the plurality of processing modules may include a processing order of the plurality of processing modules. In some embodiments, the processing order may reflect a result dependency relationship among the plurality of processing modules. In some embodiments, each processing module may include an input and an output (also referred to as a “result” or a “processing result” ) , and an output (or result) of a previous module is used as an input of a next adjacent module. Accordingly, the result dependency relationship among the plurality of processing modules may reflect that an output of a previous processing module is required for an execution of a next adjacent processing module. Merely by way of example, as shown in FIG. 5, the pipeline 500 includes four processing modules A, B, C, and D, and its processing order is A → B → C → D. The result dependency relationship may reflect that an execution of the processing module B requires a participation of an output result of the processing module A, an execution of the processing module C requires a participation of an output result of the processing module B, and an execution of the processing module D requires a participation of an output result of the processing module C.
In some embodiments, the configuration information may also include roles of the processing modules, importance degrees of the processing modules, relevancy degrees among the plurality of processing modules throughout an entire processing process, or the like, or a combination thereof. In some embodiments, the “role” may reflect a type of the processing module (e.g., a starting module, an intermediate module, an end module) , corresponding operations (e.g., a pre-processing operation, an image segmentation operation, a feature extraction operation, an object detection operation) , or the like, or any combination thereof. In some embodiments, the “importance degree” may reflect an impact degree of the processing module on a final processing result. For example, taking “image processing” in a monitoring scenario as an example, the importance degree of the object segmentation module may be larger than the importance degree of the pre-processing module. In some embodiments, the “relevancy degree” may reflect a correlation relationship among the plurality of processing modules.
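For concreteness, the configuration information of a linear pipeline such as the pipeline 500 might be encoded as sketched below in Python; this is only an illustrative encoding, and all field names and values (depends_on, role, importance) are assumptions rather than part of the disclosure:

    # Hypothetical encoding of a pipeline's configuration information:
    # the processing order (result dependency relationship) plus optional
    # role and importance-degree fields. Names and values are illustrative.
    pipeline_config = [
        {"module": "A", "depends_on": None, "role": "starting", "importance": 1},
        {"module": "B", "depends_on": "A", "role": "intermediate", "importance": 2},
        {"module": "C", "depends_on": "B", "role": "intermediate", "importance": 2},
        {"module": "D", "depends_on": "C", "role": "end", "importance": 3},
    ]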
In some embodiments, the processing device 110 may retrieve the configuration information corresponding to the processing modules from the storage device 150. In some embodiments, the processing device 110 may obtain the configuration information corresponding to the processing modules from the terminal device 140.
In 420, the processing device 110 (e.g., the determination module 320) may determine priorities of the plurality of processing modules based on the configuration information.
In some embodiments, the processing device 110 may determine the priorities of the plurality of processing modules based on the processing order of the plurality of processing modules. Accordingly, the priorities are related to the result dependency relationship among the plurality of modules.
In some embodiments, the processing device 110 may determine a data flowing direction according to the processing order of the plurality of processing modules. The data flowing direction may refer to a transmission direction of processing results of the processing modules. In some embodiments, the processing results of the upstream processing modules may be transmitted to the downstream processing modules. Merely by way of example, as shown in FIG. 5, a processing result of node A (or processing module A) may be transmitted to node B (or processing module B) , and a processing result of node B (or processing module B) may be transmitted to node C (or processing module C) , and so on.
Further, the processing device 110 may determine the priorities of the plurality of processing modules in ascending order along the data flowing direction. In other words, the priority of a downstream processing module is larger than the priority of an upstream processing module. Merely by way of example, as shown in FIG. 5, the priority of node B (or processing module B) is larger than the priority of node A (or processing module A) , and the priority of node C (or processing module C) is larger than the priority of node B (or processing module B) .
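For a linear pipeline, this order-based rule reduces to ranking modules by their position along the data flowing direction. A minimal sketch, reusing the hypothetical pipeline_config above (the function name and the numeric priority scale are assumptions, not the disclosed implementation):

    def module_priorities(pipeline_config):
        # Priorities increase along the data flowing direction, so a
        # downstream module always outranks its upstream modules.
        return {entry["module"]: rank
                for rank, entry in enumerate(pipeline_config, start=1)}

    # For the A -> B -> C -> D pipeline: {"A": 1, "B": 2, "C": 3, "D": 4}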
In some embodiments, the processing device 110 may also determine the priorities of the plurality of processing modules based on the roles of the processing modules, the importance degrees of the processing modules, and/or the relevancy degrees among the plurality of processing modules throughout the entire processing process. For example, the priority of an intermediate module may be larger than the priority of a starting module. As another example, the priority of a processing module (e.g., the image segmentation module) used for a specific object may be larger than the priority of a processing module (e.g., the pre-processing module) used for an entire image. As another example, the priority of a processing module with a larger importance degree may be larger than the priority of a processing module with a smaller importance degree. As another example, the priorities of multiple processing modules with a larger relevancy degree may be the same.
In 430, the processing device 110 (e.g., the dispatch module 330) may dispatch tasks corresponding to the plurality of processing modules respectively based on the priorities.
In some embodiments, the processing device 110 may dispatch and process each processing module (or a corresponding operation performed by each processing module) as a task. In some embodiments, the processing device 110 may perform task dispatch in the form of a task queue. In some embodiments, the processing device 110 may add each processing module (or the operation performed by each processing module) as a task into a task queue in turn based on the priority of the processing module, and dispatch the plurality of the tasks in the task queue. Merely by way of example, as shown in FIG. 6, the processing device 110 may determine a priority order of the processing modules A, B, and C as C > B > A according to the processing order of the processing modules A, B, and C corresponding to the nodes A, B, and C in the pipeline 620. Further, the processing device 110 may add the operation performed by each processing module as a task into a task queue based on the priority order.
Specifically, a task of the node A processing the object 1 may be added into the task queue 630 as task 1, a task of the node B processing the object 1 may be added into the task queue 630 as task 2, a task of the node C processing the object 1 may be added into the task queue 630 as task 3, a task of the node A processing the object 2 may be added into the task queue 630 as task 4, a task of the node B processing the object 2 may be added into the task queue 630 as task 5, a task of the node C processing the object 2 may be added into the task queue 630 as task 6, a task of the node A processing the object 3 may be added into the task queue 630 as task 7, a task of the node B processing the object 3 may be added into the task queue 630 as task 8, and a task of the node C processing the object 3 may be added into the task queue 630 as task 9.
As described above, taking “image processing” as an example, the plurality of processing modules process a plurality of image frames in turn, and “the priority of a downstream processing module is larger than that of an upstream processing module” means that the priority of the downstream processing module processing a previous image frame is larger than the priority of the upstream module processing a current image frame. In other words, the priority of the task of the node B processing the object 1 is larger than the priority of the task of the node A processing the object 2; the priority of the task of the node C processing the object 1 is larger than the priority of the task of the node A processing the object 2, and so on.
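One way to capture this cross-frame rule is a compound sort key over (frame index, pipeline position): a smaller key means earlier dispatch, so every task of frame 1 outranks every task of frame 2, while dependencies keep tasks of the same frame in pipeline order. A minimal sketch reproducing the task queue 630 of FIG. 6 (the key layout is an illustrative assumption, not the disclosed implementation):

    # Tasks are (frame_index, module_rank) pairs; module_rank is the
    # module's position in the pipeline (A = 1, B = 2, C = 3).
    tasks = [(f, m) for f in (1, 2, 3) for m in (1, 2, 3)]

    # Sorting by (frame, position) yields task 1 .. task 9 of FIG. 6, and
    # node B processing frame 1, i.e. (1, 2), outranks node A processing
    # frame 2, i.e. (2, 1).
    queue = sorted(tasks)
    # queue == [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3),
    #           (3, 1), (3, 2), (3, 3)]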
In some embodiments, when a new task is added into the task queue, the processing device 110 may update the task queue based on the priority of the new task. For example, the processing device 110 may update the task queue by comparing a priority of a current task with a priority of a previous task in the task queue. The previous task may refer to a task that has been added to the task queue before.
In some embodiments, the processing device 110 may compare the priority of the current task with a priority of a previously adjacent task (also referred to as a “last task” ) in the task queue; and in response to determining that the priority of the current task is larger than the priority of the previously adjacent task, the processing device 110 may update the task queue by swapping the current task and the previously adjacent task.
In some embodiments, the processing device 110 may update the task queue by comparing the priority of the current task with the priority of a first task (also referred to as a “top task” ) in the task queue, the priority of any task in the task queue, the priorities of any two adjacent tasks, etc. More descriptions regarding updating the task queue may be found elsewhere in the present disclosure, for example, FIGs. 7A-7D and relevant descriptions thereof.
In some embodiments, the processing device 110 may determine a thread pool strategy based at least in part on a system computing capacity, and dispatch the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy. In  some embodiments, the thread pool may include a plurality of threads used to perform the corresponding tasks respectively. More descriptions regarding the thread pool strategy may be found elsewhere in the present disclosure, for example, FIGs. 8-10 and relevant descriptions thereof.
According to some embodiments of the present disclosure, priorities of a plurality of processing modules are determined and corresponding tasks are added into a task queue based on the priorities. Accordingly, the tasks can be dispatched according to the task queue, which can improve the overall work efficiency and can ensure enough resources for a final processing module for outputting data results.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, a plurality of variations or modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, one or more other optional operations (e.g., a storing operation, a transmitting operation) may be added elsewhere in the process 400. In the storing operation, the processing device 110 may store information and/or data (e.g., the priorities) associated with the task dispatch in a storage device (e.g., the storage device 150) disclosed elsewhere in the present disclosure. In the transmitting operation, the processing device 110 may transmit the information and/or data to the terminal device 140. As another example, operation 410 and operation 420 may be combined into a single operation in which the processing device 110 may obtain the configuration information of the plurality of processing modules and determine the priorities of the processing modules based on the configuration information.
FIGs. 7A-7D are schematic diagrams illustrating an exemplary process for updating a task queue according to some embodiments of the present disclosure.
In some embodiments, the processing device 110 may update the task queue by comparing the priority of a current task with the priority of a first task (or a top task) in a task queue. Merely by way of example, as shown in FIG. 7A, the task queue 720 includes task 1, task 2, ..., task M, task M+1, ..., and task N, wherein task 1 is the first task of the task queue 720. Accordingly, the processing device 110 may determine whether the priority of the current task 710 is larger than the priority of task 1. If the priority of the current task 710 is larger than the priority of task 1, the processing device 110 may move the current task 710 before task 1 to obtain an updated task queue 730. If the priority of the current task 710 is smaller than the priority of task 1 in the task queue 720, the processing device 110 may further compare the priority of the current task 710 with the priorities of other tasks (e.g., any task in the task queue 720, the last task in the task queue 720) in the task queue 720 until determining an appropriate position of the current task 710, and update the task queue 720 accordingly. In some embodiments, the “appropriate position” may refer to a position where the priority of the current task 710 is smaller than the priority of a previous adjacent task and is larger than the priority of a next adjacent task.
In some embodiments, the processing device 110 may update the task queue by comparing the priority of the current task with the priority of any task in the task queue. Merely by way of example, as shown in FIG. 7B, the processing device 110 may update the task queue by determining whether the priority of the current task 710 is larger than the priority of any task m. If the priority of the current task 710 is larger than the priority of task m, the processing device 110 may swap the current task 710 and task m. Further, the processing device 110 may compare the priority of the current task 710 with the priorities of other tasks (e.g., task m-1, task m-2) located before task m until determining the appropriate position of the current task 710, and update the task queue 720 accordingly. If the priority of the current task 710 is smaller than the priority of task m, the processing device 110 may further compare the priority of the current task 710 with the priorities of other tasks (e.g., task m+1, task m+2) located after task m until determining the appropriate position of the current task 710, and update the task queue 720 accordingly.
In some embodiments, the processing device 110 may update the task queue by comparing the priority of the current task with the priorities of any two adjacent tasks in the task queue. Merely by way of example, as shown in FIG. 7C, task m and task m+1 are any two adjacent tasks in the task queue 720. Accordingly, the processing device 110 may determine whether the priority of the current task 710 is located between the priorities of task m and task m+1. If the priority of the current task 710 is between the priorities of task m and task m+1, the processing device 110 may move the current task 710 between task m and task m+1 to obtain an updated task queue 740. If the priority of the current task 710 is not located between the priorities of task m and task m+1, the processing device 110 may further compare the priority of the current task 710 with the priorities of other tasks in the task queue 720 until determining the appropriate position of the current task 710, and update the task queue 720 accordingly.
In some embodiments, the processing device 110 may update the task queue by comparing the priority of the current task with the priority of the last task (also referred to as a “previously adjacent task” ) in the task queue. Merely by way of example, as shown in FIG. 7D, task n is the last task of the task queue 720. Accordingly, the processing device 110 may determine whether the priority of the current task 710 is smaller than the priority of task n. If the priority of the current task 710 is smaller than the priority of task n, the processing device 110 may add the current task 710 after task n directly. If the priority of the current task 710 is larger than the priority of task n, the processing device 110 may swap the current task 710 and task n. Further, the processing device 110 may compare the priority of the current task with the priorities of other tasks (e.g., a previous task of the last task) in the task queue 720 until determining the appropriate position of the current task 710, and update the task queue 720 accordingly.
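The last variant behaves like one pass of an insertion sort: the new task enters at the tail and is swapped forward while its priority is larger than that of its previously adjacent task. A minimal sketch under the assumption that priorities are plain integers and a larger value is dispatched earlier:

    # Insert current_task into task_queue so that higher-priority tasks
    # stay nearer the head; tasks are (priority, name) pairs.
    def enqueue(task_queue, current_task):
        task_queue.append(current_task)
        i = len(task_queue) - 1
        # Swap with the previously adjacent task while the new task wins.
        while i > 0 and task_queue[i][0] > task_queue[i - 1][0]:
            task_queue[i - 1], task_queue[i] = task_queue[i], task_queue[i - 1]
            i -= 1

    queue = []
    for task in [(2, "task 1"), (5, "task 2"), (3, "task 3")]:
        enqueue(queue, task)
    # queue == [(5, "task 2"), (3, "task 3"), (2, "task 1")]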
According to some embodiments of the present disclosure, the priority of the current task is compared with those of tasks in the task queue and then the task queue is updated based on the comparing results, which can ensure that a task with a relatively high priority can be dispatched or executed first, thereby improving processing efficiency.
FIG. 8 is a flowchart illustrating an exemplary process for creating a thread pool strategy according to some embodiments of the present disclosure. In some embodiments, process 800 may be executed by the task dispatch system 100. For example, the process 800 may be implemented as a set of instructions stored in the storage device (e.g., the storage device 150) . In some embodiments, the processing device 110 (e.g., the processor 220 of the computing device 200 and/or one or more  modules illustrated in FIG. 3) may execute the set of instructions and may accordingly be directed to perform the process 800. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 800 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of process 800 illustrated in FIG. 8 and described below is not intended to be limiting.
In 810, the processing device 110 (e.g., the dispatch module 330) may determine whether the system computing capacity is larger than a preset threshold.
In some embodiments, the system computing capacity may be reflected by a count of computing kernels of the system. For example, if the count of the computing kernels is relatively small, the corresponding system computing capacity is relatively weak; whereas, if the count of the computing kernels is relatively large, the corresponding system computing capacity is relatively strong. In some embodiments, the preset threshold may be a default value of the system, an experience value, an artificial pre-set value, or the like, or any combination thereof. In some embodiments, the preset threshold may be set according to actual needs, which is not limited by the present disclosure. For example, the preset threshold may be 32 kernels.
In 820, in response to determining that the system computing capacity is smaller than or equal to the preset threshold, the processing device 110 (e.g., the dispatch module 330) may determine a first thread pool strategy including a plurality of threads in a single thread pool. For example, if the count of computing kernels of the system is smaller than or equal to 32, the processing device 110 may determine the first thread pool strategy.
In some embodiments, the first thread pool strategy may include a single thread pool containing a plurality of threads. That is to say, when the system computing capacity is relatively weak (e.g., an embedded device) , a single thread pool may be created so that the device resources can meet the processing needs as far as possible. In addition, the unified management of creation, dispatch execution, and destruction of the threads in a single thread pool can save system time and improve overall program stability.
In some embodiments, when the first thread pool strategy is used to perform task dispatch, the processing device 110 may perform the task dispatch based on idle conditions of the threads in the thread pool. Specifically, when a plurality of threads in the single thread pool are all idle, the processing device 110 may randomly dispatch tasks to threads based on the priority order of the tasks in the task queue. Further, when some of the plurality of threads complete current tasks, the processing device 110 may obtain other tasks in the task queue and dispatch them to corresponding threads based on the priority order until all tasks in the task queue have been dispatched.
Merely by way of example, as shown in FIG. 9, a task queue 910 includes 6 tasks and a priority order of the 6 tasks is “task 1 → task 2 → task 3 → task 4 → task 5 → task 6” . The single thread pool 920 includes thread A, thread B, and thread C. The processing device 110 may firstly dispatch task 1 to thread A, task 2 to thread B, and task 3 to thread C. If thread A completes task 1, the processing device 110 may dispatch task 4 to thread A; then if thread C completes task 3, the processing device 110 may dispatch task 5 to thread C, and so on.
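A minimal sketch of this first strategy, assuming three pool threads that pull the highest-priority remaining task from a shared queue (Python's PriorityQueue pops the smallest item first, so priorities are negated; all names and the print placeholder are illustrative assumptions):

    import queue
    from concurrent.futures import ThreadPoolExecutor

    task_queue = queue.PriorityQueue()
    for priority, name in [(6, "task 1"), (5, "task 2"), (4, "task 3"),
                           (3, "task 4"), (2, "task 5"), (1, "task 6")]:
        task_queue.put((-priority, name))   # negate: larger priority pops first

    def worker():
        # An idle thread keeps taking the highest-priority remaining task.
        while True:
            try:
                _, name = task_queue.get_nowait()
            except queue.Empty:
                return
            print(f"executing {name}")      # placeholder for the real work
            task_queue.task_done()

    with ThreadPoolExecutor(max_workers=3) as pool:  # threads A, B, and C
        for _ in range(3):
            pool.submit(worker)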
In 830, in response to determining that the system computing capacity is larger than the preset threshold, the processing device 110 (e.g., the dispatch module 330) may determine a second thread pool strategy including a plurality of thread pools each of which includes a plurality of threads. For example, if the count of computing kernels of the system is larger than 32, the processing device 110 may determine the second thread pool strategy.
In some embodiments, the second thread pool strategy includes a plurality of thread pools, and each thread pool includes a plurality of threads. In other words, if the system computing capacity is relatively strong (e.g., a server device) , a multi-thread pool strategy may be created to maximize the use of the hardware resources of the device and speed up the analysis and processing efficiency.
In some embodiments, a count of thread pools in the second thread pool strategy may be smaller than or equal to the count of computing kernels corresponding to the system computing capacity. Accordingly, it can be ensured that each thread pool can monopolize resources of a single computing kernel. In other words, the thread pools and the computing kernels have affinity and match with each other, accordingly, it can be ensured that tasks in a same thread pool use a same computing kernel. Accordingly, there is no need to switch computing kernels during the process and resource loss can be reduced.
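Taken together, operations 810-830 amount to a capacity check. A minimal sketch, assuming the count of computing kernels stands in for the system computing capacity and using the 32-kernel example threshold mentioned above:

    import os

    PRESET_THRESHOLD = 32   # example value only; the threshold is configurable

    def choose_thread_pool_strategy():
        kernels = os.cpu_count() or 1
        if kernels <= PRESET_THRESHOLD:
            return "single thread pool"    # first strategy (operation 820)
        return "multiple thread pools"     # second strategy (operation 830)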
In some embodiments, when the second thread pool strategy is used for task dispatch, the processing device 110 may perform the task dispatch based on the priority order of the tasks in the task queue.
Merely by way of example, as shown in FIG. 10A, a task queue 1010 includes 8 tasks and a priority order of the 8 tasks in the task queue 1010 is “task 1 → task 2 → task 3 → task 4 → task 5 → task 6 → task 7 → task 8” . The plurality of thread pools 1020 include thread pool 1, thread pool 2, and thread pool 3. Each thread pool includes two threads. According to the priority order of the tasks, the processing device 110 may firstly dispatch task 1 to thread 1 in thread pool 1, task 2 to thread 3 in thread pool 2, and task 3 to thread 5 in thread pool 3. Then the processing device 110 may dispatch task 4 to thread 2 in thread pool 1, task 5 to thread 4 in thread pool 2, and task 6 to thread 6 in thread pool 3. Further, the processing device 110 may dispatch task 7 to a queuing area of thread pool 1 and task 8 to a queuing area of thread pool 2.
As another example, as shown in FIG. 10B, according to the priority order of the tasks, the processing device 110 may dispatch task 1 and task 2 to thread 1 and thread 2 respectively in thread pool 1, task 3 and task 4 to thread 3 and thread 4 respectively in thread pool 2, and task 5 and task 6 to thread 5 and thread 6 respectively in thread pool 3. Further, the processing device 110 may dispatch task 7 and task 8 to a queuing area of thread pool 1.
In some embodiments, the processing device 110 may equally dispatch the tasks in the task queue to the plurality of threads in the plurality of thread pools. In some embodiments, the processing device 110 may randomly dispatch the tasks in the task queue to the plurality of threads in the plurality of thread pools.
In some embodiments, for each thread in the plurality of thread pools, when the task being  executed is completed, the processing device 110 may dispatch a task to be executed to the thread based on the priority of the task to be executed. Specifically, when a specific thread completes its corresponding task, the processing device 110 may steal and dispatch a task that has not been executed to the current thread from tasks corresponding to other threads. For example, as shown in FIG. 10A, when thread 5 in thread pool 3 completes its corresponding task, the processing device 110 may steal and dispatch task 7 that has not been executed to thread 5 from the tasks corresponding to thread pool 1. When thread 6 in thread pool 3 completes its corresponding task, the processing device 110 may steal and dispatch task 8 that has not been executed to thread 6 from the tasks corresponding to thread pool 2.
In some embodiments, the processing device 110 may steal tasks from other threads in the thread pool where the current thread is located. For example, as shown in FIG. 10A, when thread 2 in thread pool 1 completes its corresponding task, the processing device 110 may steal and dispatch task 7 to thread 2 from the tasks corresponding to thread 1.
In some embodiments, the processing device 110 may also steal tasks from other thread pools. For example, as shown in FIG. 10B, when thread 3 in thread pool 2 completes its corresponding task, the processing device 110 may steal and dispatch task 7 that has not been executed to thread 3 from the tasks corresponding to thread 1 in thread pool 1. When thread 5 in thread pool 3 completes its corresponding task, the processing device 110 may steal and dispatch task 8 that has not been executed to thread 5 from the tasks corresponding to thread 2 in thread pool 1.
In some embodiments, the processing device 110 may steal tasks from the queue tail of a queuing area of other threads or other thread pools. In some embodiments, when the processing device 110 steals tasks from other threads or other thread pools, the processing device 110 may preferentially steal the task in the queue tail with the larger priority. For example, as shown in FIG. 10A or FIG. 10B, the queue tail may include task 7 and task 8, and the priority of task 7 is larger than that of task 8. Accordingly, when thread 3 in thread pool 2 completes its corresponding task, the processing device 110 may steal and dispatch task 7 that has not been executed to thread 3.
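A minimal sketch of this stealing rule, assuming each thread pool owns a double-ended queue: an idle pool serves its own tasks from the head and, when empty, steals from another pool's tail, preferring the waiting task with the largest priority (all names and the deque representation are assumptions):

    from collections import deque

    def next_task(own, other_pools):
        # Serve the pool's own queue first.
        if own:
            return own.popleft()
        # Otherwise inspect the tails of the other pools and steal the
        # waiting task with the largest priority.
        candidates = [pool for pool in other_pools if pool]
        if not candidates:
            return None
        victim = max(candidates, key=lambda pool: pool[-1][0])
        return victim.pop()

    pool_1 = deque([(2, "task 7")])   # (priority, name) waiting in pool 1
    pool_2 = deque([(1, "task 8")])   # waiting in pool 2
    pool_3 = deque()                  # pool 3 has run out of tasks
    print(next_task(pool_3, [pool_1, pool_2]))   # steals (2, "task 7")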
According to some embodiments of the present disclosure, different thread pool strategies are created according to the system computing capacities of devices so that tasks are dispatched reasonably, which makes full use of the hardware resources of different devices and improves processing efficiency.
It should be noted that the above description is merely provided for the purpose of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the processing device 110 may create a suitable thread pool strategy based on the complexity of the processing plan (e.g., a count of processing modules in the pipeline (or a count of tasks in the task queue), execution complexities of the processing modules). For example, if the processing plan is relatively complicated, the processing device 110 may create a multi-thread pool strategy; if the processing plan is relatively simple, the processing device 110 may create a single thread pool strategy. As another example, if the count of the tasks in the task queue is relatively small and/or the execution complexities of the processing modules are relatively low, the processing device 110 may create a single thread pool strategy; if the count of the tasks in the task queue is relatively large and/or the execution complexities of the processing modules are relatively high, the processing device 110 may create a multi-thread pool strategy.
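As a minimal sketch of this strategy choice (and of the capacity threshold recited in claims 7 and 8 below), the following function returns a single-pool strategy when the system computing capacity is at or below a preset threshold and a multi-pool strategy otherwise, capping the pool count by the count of computing kernels. The threshold value, the cap of three pools, and all names here are illustrative assumptions rather than values fixed by the disclosure.

```python
import os

def choose_thread_pool_strategy(computing_capacity, capacity_threshold,
                                n_kernels=None):
    """Pick a thread pool strategy from the system computing capacity."""
    n_kernels = n_kernels or os.cpu_count() or 1
    if computing_capacity <= capacity_threshold:
        # First strategy: a plurality of threads in a single thread pool.
        return {"pools": 1, "threads_per_pool": max(2, n_kernels)}
    # Second strategy: a plurality of thread pools, each with several
    # threads; the pool count does not exceed the count of computing
    # kernels (the cap of 3 merely mirrors FIG. 10A/10B).
    return {"pools": min(3, n_kernels), "threads_per_pool": 2}

# A low-capacity device gets one pool; a high-capacity device gets up
# to three pools of two threads each, as in the figures above.
print(choose_thread_pool_strategy(computing_capacity=2, capacity_threshold=4))
print(choose_thread_pool_strategy(computing_capacity=8, capacity_threshold=4))
```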
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented as entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like; conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN); the connection may be made to an external computer (e.g., through the Internet using an Internet Service Provider), made in a cloud computing environment, or offered as a service such as Software as a Service (SaaS).
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses, through various examples, what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment.
In some embodiments, the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate a ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.
Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail.
In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.

Claims (20)

  1. A system for task dispatch, comprising:
    at least one storage medium including a set of instructions; and
    at least one processor in communication with the at least one storage medium, wherein when executing the set of instructions, the at least one processor is directed to cause the system to perform operations including:
    obtaining configuration information of a plurality of processing modules in a pipeline;
    determining priorities of the plurality of processing modules based on the configuration information; and
    dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
  2. The system of claim 1, wherein
    the configuration information includes an order of the plurality of processing modules, the order indicating a result dependency among the plurality of processing modules; and
    determining priorities of the plurality of processing modules based on the configuration information includes:
    determining priorities of the plurality of processing modules based on the order of the plurality of processing modules.
  3. The system of claim 2, wherein determining priorities of the plurality of processing modules based on the order of the plurality of processing modules includes:
    determining a data flowing direction based on the order of the plurality of processing modules; and
    determining the priorities of the plurality of processing modules, from small to large, according to the data flowing direction.
  4. The system of claim 1, wherein the dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities includes:
    updating a task queue by comparing a priority of a current task with a priority of a previous task in the task queue.
  5. The system of claim 4, wherein the updating a task queue by comparing a priority of a current task with a priority of a previous task in the task queue includes:
    comparing the priority of the current task with a priority of a previously adjacent task in the task queue; and
    in response to determining that the priority of the current task is larger than the priority of the previously adjacent task, updating the task queue by swapping the current task and the previously adjacent task.
  6. The system of claim 1, wherein the dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities includes:
    determining a thread pool strategy based at least in part on a system computing capacity; and
    dispatching the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy.
  7. The system of claim 6, wherein the determining a thread pool strategy based at least in part on a system computing capacity includes:
    determining whether the system computing capacity is larger than a preset threshold;
    in response to determining that the system computing capacity is smaller than or equal to the preset threshold, determining a first thread pool strategy, wherein the first thread pool strategy includes a plurality of threads in a single thread pool.
  8. The system of claim 7, wherein the determining a thread pool strategy based at least in part on a system computing capacity includes:
    in response to determining that the system computing capacity is larger than the preset threshold, determining a second thread pool strategy, wherein the second thread pool strategy includes a plurality of thread pools each of which includes a plurality of threads.
  9. The system of claim 8, wherein a count of the plurality of thread pools is less than or equal to a count of computing kernels corresponding to the system computing capacity.
  10. The system of claim 8, wherein the dispatching the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy includes:
    dispatching the tasks to the plurality of thread pools based on the corresponding priorities; and
    for each thread in the plurality of thread pools, when a task being executed is completed, dispatching a task to be executed to the thread based on a priority of the task to be executed.
  11. A method for task dispatch, comprising:
    obtaining configuration information of a plurality of processing modules in a pipeline;
    determining priorities of the plurality of processing modules based on the configuration information; and
    dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.
  12. The method of claim 11, wherein
    the configuration information includes an order of the plurality of processing modules, the order indicating a result dependency among the plurality of processing modules; and
    determining priorities of the plurality of processing modules based on the configuration information includes:
    determining priorities of the plurality of processing modules based on the order of the plurality of processing modules.
  13. The method of claim 12, wherein determining priorities of the plurality of processing modules based on the order of the plurality of processing modules includes:
    determining a data flowing direction based on the order of the plurality of processing modules; and
    determining the priorities of the plurality of processing modules, from small to large, according to the data flowing direction.
  14. The method of claim 11, wherein the dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities includes:
    updating a task queue by comparing a priority of a current task with a priority of a previous task in the task queue.
  15. The method of claim 14, wherein the updating a task queue by comparing a priority of a current task with a priority of a previous task in the task queue includes:
    comparing the priority of the current task with a priority of a previously adjacent task in the task queue; and
    in response to determining that the priority of the current task is larger than the priority of the previously adjacent task, updating the task queue by swapping the current task and the previously adjacent task.
  16. The method of claim 11, wherein the dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities includes:
    determining a thread pool strategy based at least in part on a system computing capacity; and
    dispatching the tasks corresponding to the plurality of processing modules respectively based on the priorities according to the thread pool strategy.
  17. The method of claim 16, wherein the determining a thread pool strategy based at least in part on a system computing capacity includes:
    determining whether the system computing capacity is larger than a preset threshold;
    in response to determining that the system computing capacity is smaller than or equal to the preset threshold, determining a first thread pool strategy, wherein the first thread pool strategy includes a plurality of threads in a single thread pool.
  18. The method of claim 17, wherein the determining a thread pool strategy based at least in part on a system computing capacity includes:
    in response to determining that the system computing capacity is larger than the preset threshold, determining a second thread pool strategy, wherein the second thread pool strategy includes a plurality of thread pools each of which includes a plurality of threads.
  19. The method of claim 18, wherein a count of the plurality of thread pools is less than or equal to a count of computing kernels corresponding to the system computing capacity.
  20. A non-transitory computer readable medium, comprising executable instructions that, when executed by at least one processor, direct the at least one processor to perform a method, the method comprising:
    obtaining configuration information of a plurality of processing modules in a pipeline;
    determining priorities of the plurality of processing modules based on the configuration information; and
    dispatching tasks corresponding to the plurality of processing modules respectively based on the priorities.