CN114518917B - Algorithm module scheduling method, algorithm module scheduling device and readable storage medium - Google Patents

Algorithm module scheduling method, algorithm module scheduling device and readable storage medium Download PDF

Info

Publication number
CN114518917B
Authority
CN
China
Prior art keywords
processing result
algorithm module
result data
data
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210415147.1A
Other languages
Chinese (zh)
Other versions
CN114518917A (en)
Inventor
黄鹏
殷俊
虞响
岑鑫
吴立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202210415147.1A
Publication of CN114518917A
Application granted
Publication of CN114518917B
Priority to PCT/CN2022/125356
Active legal status (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4482 Procedural

Abstract

The application provides an algorithm module scheduling method, an algorithm module scheduling device and a computer-readable storage medium. The algorithm module scheduling method comprises the following steps: obtaining a result obtaining strategy of the current algorithm module; acquiring first processing result data output by at least one pre-algorithm module based on the result acquisition strategy; and scheduling the current algorithm module to process the first processing result data to obtain second processing result data. In this way, the algorithm module scheduling method flexibly determines the subsequent data processing mode according to the result obtaining strategy of the current algorithm module, eliminates the development of an integrated encapsulation module that would otherwise need to be continuously adjusted and modified for every business difference, and flexibly supports the rapid deployment and delivery of different algorithm schemes in different business scenarios.

Description

Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
Technical Field
The present application relates to the technical field of algorithm models, and in particular, to an algorithm module scheduling method, an algorithm module scheduling apparatus, and a computer-readable storage medium.
Background
During execution of an image processing algorithm program, result data is exchanged among the algorithm function modules in the program's pipeline through an integrated packaging module that is strongly coupled to the program.
When the algorithm scheme changes, the integrated packaging module must be adjusted and modified accordingly. The maintenance, development and time costs are therefore very high, different algorithm schemes in different service scenarios cannot be supported quickly and flexibly, and algorithm schemes cannot be rapidly deployed and delivered.
Disclosure of Invention
The application provides an algorithm module scheduling method, an algorithm module scheduling device and a computer readable storage medium.
The application provides an algorithm module scheduling method, which comprises the following steps:
obtaining a result obtaining strategy of the current algorithm module;
acquiring first processing result data output by at least one pre-algorithm module based on the result acquisition strategy;
and scheduling the current algorithm module to process the first processing result data to obtain second processing result data.
The algorithm module scheduling method further comprises the following steps:
when the result obtaining strategy is an interframe asynchronous strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
and after first processing result data output by a pre-algorithm module of the single data pipeline is obtained, scheduling the current algorithm module to process the first processing result data to obtain second processing result data.
The algorithm module scheduling method further comprises the following steps:
when the result obtaining strategy is an interframe synchronization strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
after first processing result data output by the pre-algorithm module of the single data pipeline is obtained, third processing result data output by the pre-algorithm modules of the other data pipelines are continuously waited;
and after third processing result data output by the pre-algorithm modules of all the other data pipelines are obtained, scheduling the current algorithm module to process the first processing result data and the third processing result data to obtain second processing result data.
The algorithm module scheduling method further comprises the following steps:
when the result obtaining strategy is an intra-frame synchronization strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
after first processing result data output by the pre-algorithm module of the single data pipeline is obtained, fourth processing result data output by the pre-algorithm modules of the other data pipelines are continuously waited;
judging whether the fourth processing result data output by the pre-algorithm modules of the other data pipelines and the first processing result data are the same frame result or not;
and if so, scheduling the current algorithm module to process the first processing result data and the fourth processing result data to obtain second processing result data.
After judging whether the fourth processing result data output by the pre-algorithm modules of the other data pipelines and the first processing result data are the same frame result, the algorithm module scheduling method further comprises the following steps:
if not, continuously waiting for fifth processing result data output by the pre-algorithm modules of the other data pipelines until the fifth processing result data and the first processing result data are the same frame of result, and scheduling the current algorithm module to process the first processing result data and the fifth processing result data to obtain the second processing result data.
The algorithm module scheduling method further comprises the following steps:
when the result obtaining strategy is not an interframe asynchronous strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
after first processing result data output by the pre-algorithm module of the single data pipeline is obtained, sixth processing result data output by the pre-algorithm modules of the other data pipelines are continuously waited;
after sixth processing result data output by the pre-algorithm modules of all the other data pipelines are obtained, judging whether the first processing result data and the sixth processing result data are the same frame result;
and if so, scheduling the current algorithm module to process the first processing result data and the sixth processing result data to obtain second processing result data.
After determining whether the first processing result data and the sixth processing result data are the same frame result, the algorithm module scheduling method further includes:
if not, judging whether the result acquisition strategy of the current algorithm module is an interframe synchronization strategy or not;
and when the result obtaining strategy of the current algorithm module is an interframe synchronization strategy, scheduling the current algorithm module to process the first processing result data and the sixth processing result data to obtain second processing result data.
After judging whether the result obtaining strategy of the current algorithm module is an interframe synchronization strategy, the algorithm module scheduling method further comprises the following steps:
and outputting program abnormal alarm information when the result acquisition strategy of the current algorithm module is not the interframe synchronization strategy.
After the scheduling of the current algorithm module processes the first processing result data to obtain the second processing result data, the algorithm module scheduling method further includes:
placing the second processing result data in a result queue of a data pipeline where the current algorithm module is located;
and scheduling a post algorithm module of the same data pipeline as the current algorithm module to process second processing result data in the result queue.
The current algorithm module and the at least one preposed algorithm module are algorithm modules with a front-back sequence dependency relationship in a data pipeline;
the data system comprises a plurality of data pipelines, the data pipelines share at least one initial algorithm module and at least one result algorithm module, intermediate algorithm modules among different data pipelines are connected in parallel, and the current algorithm module is any algorithm module in the data system, which is not the first initial algorithm module.
Each data pipeline is provided with a result queue for storing the result data of the algorithm module;
the obtaining of the first processing result data output by at least one pre-algorithm module based on the result obtaining strategy includes:
acquiring a current result queue corresponding to a data pipeline where the current algorithm module is located based on the result acquisition strategy;
extracting first processing result data output by at least one pre-algorithm module of the same data pipeline from the current result queue;
after the current algorithm module is scheduled to process the first processing result data to obtain second processing result data, the algorithm module scheduling method further comprises the following steps:
and placing the second processing result data in the current result queue.
The application also provides an algorithm module scheduling device, which comprises an acquisition strategy module, a data acquisition module and a scheduling processing module; wherein:
the obtaining strategy module is used for obtaining the result obtaining strategy of the current algorithm module;
the data acquisition module is used for acquiring first processing result data output by at least one pre-algorithm module based on the result acquisition strategy;
and the scheduling processing module is used for scheduling the current algorithm module to process the first processing result data to obtain the second processing result data.
The application also provides another algorithm module scheduling device, which comprises a processor and a memory, wherein the memory is stored with program data, and the processor is used for executing the program data to realize the algorithm module scheduling method.
The present application also provides a computer-readable storage medium for storing program data which, when executed by a processor, is used to implement the algorithm module scheduling method described above.
The beneficial effect of this application is: the algorithm module scheduling device obtains a result obtaining strategy of the current algorithm module; obtains, based on the result acquisition strategy, first processing result data output by at least one pre-algorithm module; and schedules the current algorithm module to process the first processing result data to obtain second processing result data. In this way, the algorithm module scheduling method flexibly determines the subsequent data processing mode according to the result obtaining strategy of the current algorithm module, eliminates the development of an integrated encapsulation module that would otherwise need to be continuously adjusted and modified for every business difference, and flexibly supports the rapid deployment and delivery of different algorithm schemes in different business scenarios.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; other drawings can be obtained by those skilled in the art from these drawings without creative effort. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an algorithm module scheduling method provided herein;
FIG. 2 is a schematic flow chart of a specific method for scheduling algorithm modules shown in FIG. 1;
FIG. 3 is a schematic diagram of an embodiment of a data pipeline provided herein;
FIG. 4 is a schematic block diagram of another embodiment of a data pipeline provided herein;
FIG. 5 is a schematic structural diagram of an embodiment of an algorithm module scheduling apparatus provided in the present application;
FIG. 6 is a schematic structural diagram of another embodiment of an algorithm module scheduling device provided in the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To solve the problems in the prior art, the present application provides a data pipeline method for managing and circulating the result data of each algorithm module in an image processing algorithm scheme, and supports three different algorithm result acquisition strategies for the algorithm modules. The method adapts itself by parsing the configuration file of the algorithm scheme pipeline, which saves the development of an integrated packaging module that would otherwise need continuous adjustment and modification for every business difference, and finally achieves the purpose of flexibly supporting the rapid deployment and delivery of different algorithm schemes in different business scenarios.
The algorithm module scheduling device of the scheduling framework parses this configuration file in order to resolve and schedule all of the algorithm modules.
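For illustration only, a minimal sketch of such a configuration-driven setup is given below; the file layout, the key names and the ResultPolicy values are assumptions made for the example and are not the configuration format defined by this application.

```python
# Illustrative sketch only: one possible way a scheduling framework could parse a
# pipeline configuration file into modules and dependency edges. The file layout,
# the key names and the ResultPolicy values are assumptions, not the configuration
# format of this application.
import json
from enum import Enum


class ResultPolicy(Enum):
    INTER_FRAME_ASYNC = "inter_frame_async"   # process as soon as any pre-module reports
    INTER_FRAME_SYNC = "inter_frame_sync"     # wait for all pre-modules, any frame
    INTRA_FRAME_SYNC = "intra_frame_sync"     # wait for all pre-modules, same frame


def parse_pipeline_config(path: str):
    """Read a pipeline description and return (policies, edges).

    `policies` maps each module name to its result acquisition strategy;
    each edge (a, b) means module b depends on the output of module a.
    """
    with open(path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    policies = {m["name"]: ResultPolicy(m["policy"]) for m in cfg["modules"]}
    edges = [(e["from"], e["to"]) for e in cfg["edges"]]
    return policies, edges
```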
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flowchart of an embodiment of an algorithm module scheduling method provided in the present application, and fig. 2 is a schematic flowchart of a specific flowchart of the algorithm module scheduling method shown in fig. 1.
The algorithm module scheduling method is applied to an algorithm module scheduling device, wherein the algorithm module scheduling device can be a server or a system formed by the cooperation of the server and terminal equipment. Correspondingly, each part, for example, each unit, sub-unit, module, and sub-module, included in the algorithm module scheduling apparatus may be all disposed in the server, or may be disposed in the server and the terminal device, respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein. In some possible implementations, the algorithm module scheduling method of the embodiments of the present application may be implemented by a processor calling a computer readable instruction stored in a memory.
Specifically, as shown in fig. 1, the algorithm module scheduling method in the embodiment of the present application specifically includes the following steps:
step S11: and obtaining a result obtaining strategy of the current algorithm module.
In the embodiment of the present application, the concept and workflow of the data pipeline provided by the present application are described first. Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of a data pipeline provided in the present application.
The image processing algorithm scheme of the application connects algorithm modules with different functions in a pipeline manner to represent the dependency relationships among the algorithm modules and the data transmission routes. In the application, a route from the data start end to the result output end connected through algorithm modules is called a data pipeline, and data are transmitted through the data pipeline. Each data pipeline is provided with a result queue for storing algorithm module result data, and each algorithm module places its own algorithm result into that result queue. According to its specific requirements, each algorithm module can acquire from the result queue the algorithm results of the modules that the data pipeline has already flowed through, that is, the results of its pre-algorithm modules. Based on this, the present application provides a data system comprising a plurality of data pipelines, wherein the plurality of data pipelines share at least one start algorithm module and at least one result algorithm module, and the intermediate algorithm modules of different data pipelines are connected in parallel.
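As an informal sketch (not part of the disclosure), a per-pipeline result queue and the result records it holds could be modelled as follows; all class and field names, including frame_id, are illustrative assumptions:

```python
# Illustrative sketch only: a per-pipeline result queue and the result records it
# holds. Class names, field names and the frame_id field are assumptions made for
# the example, not structures defined by this application.
from collections import deque
from dataclasses import dataclass, field
from typing import Any, Deque, List


@dataclass
class ModuleResult:
    module: str      # name of the algorithm module that produced the result
    frame_id: int    # index of the image frame the result belongs to
    payload: Any     # the processing result data itself


@dataclass
class DataPipeline:
    name: str
    modules: List[str]                                            # modules in front-to-back order
    results: Deque[ModuleResult] = field(default_factory=deque)   # the pipeline's result queue

    def put_result(self, result: ModuleResult) -> None:
        """Each algorithm module places its own result into the pipeline's queue."""
        self.results.append(result)

    def results_of(self, module: str) -> List[ModuleResult]:
        """Fetch the results already produced by a given pre-algorithm module."""
        return [r for r in self.results if r.module == module]
```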
As shown in fig. 3, the algorithm module scheduler forms a directed acyclic graph connecting the algorithm modules as shown in fig. 3 according to a pipeline of the algorithm scheme. The directed acyclic graph can represent the position relation of each algorithm module on one hand, and can represent the data flow direction of each algorithm module on the other hand.
In fig. 3, seven algorithm modules form three data pipes, which are data pipe a, data pipe B, and data pipe C. The initial algorithm module is used for processing initial input, and as can be seen from fig. 3, the initial algorithm module is respectively connected with different algorithm modules through a data pipeline a, a data pipeline B and a data pipeline C, and is finally summarized in the result output module.
In this example, the result queue of each data pipeline holds the algorithm results of every module the pipeline has flowed through. Taking data pipeline A as an example, the No. 2 algorithm module can obtain the data of the start algorithm module, the No. 3 algorithm module can obtain the data of the start algorithm module and the No. 2 algorithm module, and the result output module can obtain the results of all algorithm modules in data pipeline A as well as the results of all algorithm modules in data pipelines B and C.
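The reachability implied by the directed acyclic graph can be sketched as a small helper that derives, from the pipeline edges, which pre-algorithm module results each module may read; this helper is an assumption for illustration, not code from the application:

```python
# Illustrative sketch only: deriving, from the pipeline edges, which pre-algorithm
# module results each module may read from its result queue.
from collections import defaultdict
from typing import Dict, List, Set, Tuple


def upstream_modules(edges: List[Tuple[str, str]]) -> Dict[str, Set[str]]:
    """Return, for each module, every module whose result has already flowed to it."""
    preds: Dict[str, Set[str]] = defaultdict(set)
    changed = True
    while changed:                          # simple fixed-point pass over the DAG
        changed = False
        for a, b in edges:
            reachable = {a} | preds[a]
            if not reachable <= preds[b]:
                preds[b] |= reachable
                changed = True
    return preds


# Mirroring Fig. 4: module "3" depends on modules "1" and "2".
print(upstream_modules([("1", "3"), ("2", "3")]))   # "3" can read the results of "1" and "2"
```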
Based on the data pipelines shown in fig. 3 and fig. 4, and in order to meet the different requirements that algorithm modules in different algorithm schemes place on the content and order of their input data in different service scenarios, three algorithm result acquisition strategies are provided in the present application. In other embodiments, strategies similar to the three provided here, as well as combinations of them, may also be used, which are not described again. The three algorithm result acquisition strategies are described below in conjunction with step S12:
step S12: and acquiring first processing result data output by at least one pre-algorithm module based on the result acquisition strategy.
In the embodiment of the present application, as shown in fig. 4, fig. 4 includes algorithm module No. 1, algorithm module No. 2, and algorithm module No. 3. The No. 1 algorithm module and the No. 3 algorithm module form a data pipeline, and the No. 2 algorithm module and the No. 3 algorithm module form another data pipeline.
The algorithm modules in the image processing algorithm scheme all process data in units of one frame of image, so the algorithm result acquisition strategies are also defined at the frame level. Taking fig. 4 as an example, the No. 3 algorithm module in the algorithm scheme depends on the processing results of the No. 1 algorithm module and the No. 2 algorithm module, but in order to guarantee the processing efficiency of the overall algorithm scheme under different service requirements, the No. 3 algorithm module may obtain the results of its pre-algorithm modules according to different strategies.
When the result obtaining strategy of the current algorithm module is an interframe asynchronous strategy, the No. 3 algorithm module performs algorithm processing as soon as it obtains the result data output by either the No. 1 algorithm module or the No. 2 algorithm module. The strategy in which the current algorithm module processes as soon as any one pre-algorithm module outputs processing result data is called the interframe asynchronous strategy.
Specifically, after acquiring the first processing result data output by one pre-algorithm module of the single data pipeline, the algorithm module scheduling device may schedule the current algorithm module to process the first processing result data to obtain the second processing result data, without waiting for the processing result data output by other pre-algorithm modules.
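A minimal sketch of this behaviour, assuming one blocking queue per upstream pipeline; the queue usage and function names are assumptions for the example:

```python
# Illustrative sketch only: the interframe asynchronous strategy. The current module
# is scheduled as soon as ANY pre-algorithm module delivers a result; it does not
# wait for the other pipelines.
import queue
import time
from typing import Callable, List


def run_interframe_async(pre_queues: List["queue.Queue"],
                         process: Callable[[object], object]) -> object:
    """Take whichever pre-module result arrives first and process it immediately."""
    while True:
        for q in pre_queues:
            try:
                first_result = q.get_nowait()   # a result from any one pre-module
            except queue.Empty:
                continue
            return process(first_result)        # no waiting for the other pipelines
        time.sleep(0.001)                       # nothing ready yet, poll again
```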
When the result obtaining strategy is an interframe synchronization strategy, the No. 3 algorithm module must obtain the processing result data output by both the No. 1 algorithm module and the No. 2 algorithm module before processing, without caring whether the two outputs are processing results of the same frame of image. The strategy in which the current algorithm module synchronizes on all of the parallel pre-algorithm modules, regardless of whether their outputs belong to the same frame of image data, is called the interframe synchronization strategy.
Specifically, after acquiring the first processing result data output by the pre-algorithm module of the single data pipeline, the algorithm module scheduling device continues to wait for third processing result data output by the pre-algorithm modules of the other data pipelines. Once third processing result data output by the pre-algorithm modules of all the other data pipelines have been obtained, the algorithm module scheduling device can schedule the current algorithm module to process the first processing result data and the third processing result data to obtain second processing result data.
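Under the same queue-per-pipeline assumption, the interframe synchronization behaviour can be sketched as follows; names are illustrative only:

```python
# Illustrative sketch only: the interframe synchronization strategy. The current
# module waits until EVERY pre-algorithm module has delivered a result, but does not
# check whether those results belong to the same frame.
import queue
from typing import Callable, List


def run_interframe_sync(pre_queues: List["queue.Queue"],
                        process: Callable[..., object]) -> object:
    """Block on each upstream queue, then process the collected results together."""
    collected = [q.get() for q in pre_queues]   # one result per pre-module, any frame
    return process(*collected)
```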
When the result obtaining strategy is an intra-frame synchronization strategy, the No. 3 algorithm module can only process after both the No. 1 algorithm module and the No. 2 algorithm module have output processing result data for the same frame of image. The strategy in which the current algorithm module synchronizes on the processing results that the parallel pre-algorithm modules output for the same frame of image data is called the intra-frame synchronization strategy.
Specifically, after acquiring the first processing result data output by the pre-algorithm module of the single data pipeline, the algorithm module scheduling device continues to wait for fourth processing result data output by the pre-algorithm modules of the other data pipelines. The algorithm module scheduling device then judges whether the fourth processing result data obtained from the pre-algorithm modules of the other data pipelines and the first processing result data are processing results of the same frame of image data.
If so, the algorithm module scheduling device schedules the current algorithm module to process the first processing result data and the fourth processing result data to obtain second processing result data. If not, the algorithm module scheduling device continues to wait for fifth processing result data output by the pre-algorithm modules of the other data pipelines until the fifth processing result data and the first processing result data belong to the same frame, and then schedules the current algorithm module to process the first processing result data and the fifth processing result data to obtain the second processing result data.
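A sketch of the intra-frame synchronization behaviour under the same assumptions; the frame_id attribute and all names are illustrative, not interfaces defined by the application:

```python
# Illustrative sketch only: the intra-frame synchronization strategy. The current
# module only runs once every other pipeline has delivered a result for the SAME
# frame as the first result.
import queue
from typing import Callable, List


def run_intraframe_sync(first_result,
                        other_queues: List["queue.Queue"],
                        process: Callable[..., object]) -> object:
    """Wait until the other pipelines report the same frame as first_result."""
    matched = []
    for q in other_queues:
        r = q.get()
        while r.frame_id != first_result.frame_id:   # keep waiting on a frame mismatch
            r = q.get()
        matched.append(r)
    return process(first_result, *matched)
```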
In other embodiments, the present application further provides another result obtaining policy processing logic. Referring to fig. 2, after the algorithm module scheduling device obtains the algorithm result data of the single data pipeline based on the current algorithm module, it determines whether the result obtaining policy of the current algorithm module is the inter-frame asynchronous policy.
When the result obtaining strategy of the current algorithm module is an interframe asynchronous strategy, the algorithm module scheduling device can integrate the algorithm result data of the single data pipeline, so as to schedule the current algorithm module for processing.
When the result acquiring strategy of the current algorithm module is not the interframe asynchronous strategy, the algorithm module scheduling device, after acquiring the first processing result data output by the pre-algorithm module of the single data pipeline, continues to wait for sixth processing result data output by the pre-algorithm modules of the other data pipelines. After sixth processing result data output by the pre-algorithm modules of all the other data pipelines are obtained, the algorithm module scheduling device judges whether the first processing result data and the sixth processing result data are the same frame result. If so, the algorithm module scheduling device schedules the current algorithm module to process the first processing result data and the sixth processing result data to obtain the second processing result data. If not, the algorithm module scheduling device further judges whether the result obtaining strategy of the current algorithm module is an interframe synchronization strategy.
And when the result obtaining strategy of the current algorithm module is an interframe synchronization strategy, the algorithm module scheduling device schedules the current algorithm module to process the first processing result data and the sixth processing result data to obtain second processing result data.
And when the result acquisition strategy of the current algorithm module is not the interframe synchronization strategy, the algorithm module scheduling device outputs program abnormal alarm information and ends.
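The overall decision flow of fig. 2 can be sketched, under the same stated assumptions, as a single dispatch step; the policy strings, the frame_id attribute and the alarm callback are assumptions for the example, not the application's actual interfaces:

```python
# Illustrative sketch only: the decision flow of Fig. 2 — check the policy, wait for
# the remaining pipelines when needed, compare frames, and raise an alarm when no
# policy applies.
from typing import Callable, List


def dispatch(policy: str, first_result, other_queues: List,
             process: Callable[..., object], alarm: Callable[[str], None]):
    """policy is one of 'interframe_async', 'interframe_sync', 'intraframe_sync'."""
    if policy == "interframe_async":
        return process(first_result)                  # process immediately, no waiting
    collected = [q.get() for q in other_queues]       # wait for all other pipelines
    same_frame = all(r.frame_id == first_result.frame_id for r in collected)
    if same_frame:
        return process(first_result, *collected)      # same-frame results: always valid
    if policy == "interframe_sync":
        return process(first_result, *collected)      # frame mismatch is tolerated
    alarm("program exception: mismatched frames under a non-interframe-sync policy")
```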
Step S13: scheduling the current algorithm module to process the first processing result data to obtain second processing result data.
In this embodiment of the present application, after the algorithm module scheduling device schedules, based on one of the result acquisition strategies in step S12, the current algorithm module to process the processing result data of the pre-algorithm modules, the resulting processing result data are placed into the result queue of the data pipeline where the current algorithm module is located, so that the algorithm module scheduling device can schedule the post algorithm module in the same data pipeline as the current algorithm module to process the second processing result data in the result queue, until the algorithm result output of the entire data pipeline is completed.
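A short sketch of this hand-off step, with names assumed for the example:

```python
# Illustrative sketch only: after the current module produces its second processing
# result data, the result is placed into the pipeline's result queue and the post
# algorithm module of the same pipeline is selected for scheduling.
from collections import deque
from typing import List, Optional


def finish_and_forward(result_queue: deque, module_order: List[str],
                       current: str, second_result: object) -> Optional[str]:
    """Publish the current module's output and return the post module to schedule next."""
    result_queue.append(second_result)          # place into the pipeline's result queue
    idx = module_order.index(current)
    if idx + 1 < len(module_order):
        return module_order[idx + 1]            # post algorithm module of the same pipeline
    return None                                 # end of the pipeline: final output is ready
```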
In the embodiment of the application, the algorithm module scheduling device obtains a result obtaining strategy of the current algorithm module; obtains, based on the result acquisition strategy, first processing result data output by at least one pre-algorithm module; and schedules the current algorithm module to process the first processing result data to obtain second processing result data. In this way, the algorithm module scheduling method, namely the data pipeline method, manages the algorithm result data in the image processing algorithm scheme and supports three different algorithm result acquisition strategies for managing and integrating algorithm results. By combining these strategies, different image processing algorithm schemes in different service scenarios can be supported flexibly, which ultimately reduces the development cost of different schemes and greatly improves the deployment and delivery efficiency of algorithm schemes.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
To implement the algorithm module scheduling method of the foregoing embodiment, the present application further provides an algorithm module scheduling apparatus, and please refer to fig. 5 specifically, where fig. 5 is a schematic structural diagram of an embodiment of the algorithm module scheduling apparatus provided in the present application.
The algorithm module scheduling apparatus 300 of the embodiment of the present application includes an obtaining policy module 31, a data obtaining module 32, and a scheduling processing module 33.
The obtaining policy module 31 is configured to obtain a result obtaining policy of the current algorithm module.
The data obtaining module 32 is configured to obtain first processing result data output by at least one pre-algorithm module based on the result obtaining policy.
The scheduling processing module 33 is configured to schedule the current algorithm module to process the first processing result data, so as to obtain the second processing result data.
To implement the algorithm module scheduling method of the foregoing embodiment, the present application further provides another algorithm module scheduling apparatus, and please refer to fig. 6 specifically, where fig. 6 is a schematic structural diagram of another embodiment of the algorithm module scheduling apparatus provided in the present application.
The algorithm module scheduling apparatus 400 of the embodiment of the present application includes a memory 41 and a processor 42, wherein the memory 41 and the processor 42 are coupled.
The memory 41 is used for storing program data and the processor 42 is used for executing the program data to implement the algorithm module scheduling method described in the above embodiments.
In the present embodiment, the processor 42 may also be referred to as a CPU (Central Processing Unit). The processor 42 may be an integrated circuit chip having signal processing capabilities. The processor 42 may also be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor 42 may be any conventional processor or the like.
To implement the algorithm module scheduling method of the above embodiment, the present application further provides a computer-readable storage medium, as shown in fig. 7, the computer-readable storage medium 500 is used for storing program data 51, and when being executed by a processor, the program data 51 is used for implementing the algorithm module scheduling method of the above embodiment.
The present application also provides a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to execute the algorithm module scheduling method according to the embodiment of the present application. The computer program product may be a software installation package.
The algorithm module scheduling method of the above embodiments may be stored in a device, for example a computer-readable storage medium, when it is implemented in the form of a software functional unit and sold or used as an independent product. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (13)

1. An algorithm module scheduling method, characterized in that the algorithm module scheduling method comprises:
obtaining a result obtaining strategy of the current algorithm module;
acquiring first processing result data output by at least one pre-algorithm module based on the result acquisition strategy;
scheduling the current algorithm module to process the first processing result data to obtain second processing result data;
the scheduling the current algorithm module to process the first processing result data to obtain second processing result data, including:
when the result obtaining strategy is not an interframe asynchronous strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
after first processing result data output by the pre-algorithm module of the single data pipeline is obtained, sixth processing result data output by the pre-algorithm modules of the other data pipelines are continuously waited;
until sixth processing result data output by the pre-algorithm modules of all the other data pipelines are obtained, whether the first processing result data and the sixth processing result data are the same frame result or not is judged;
and if so, scheduling the current algorithm module to process the first processing result data and the sixth processing result data to obtain second processing result data.
2. The algorithmic module scheduling method of claim 1,
the scheduling the current algorithm module to process the first processing result data to obtain second processing result data, including:
when the result obtaining strategy is an interframe asynchronous strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
and after first processing result data output by a pre-algorithm module of the single data pipeline is obtained, scheduling the current algorithm module to process the first processing result data to obtain second processing result data.
3. The algorithmic module scheduling method of claim 1,
the scheduling the current algorithm module to process the first processing result data to obtain second processing result data, including:
when the result obtaining strategy is an interframe synchronization strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
after first processing result data output by the pre-algorithm module of the single data pipeline is obtained, third processing result data output by the pre-algorithm modules of the other data pipelines are continuously waited;
and scheduling the current algorithm module to process the first processing result data and the third processing result data until third processing result data output by the pre-algorithm modules of all the other data pipelines are obtained, so as to obtain second processing result data.
4. The algorithmic module scheduling method of claim 1,
the scheduling the current algorithm module to process the first processing result data to obtain second processing result data, including:
when the result obtaining strategy is an intra-frame synchronization strategy, obtaining first processing result data output by a pre-algorithm module of a single data pipeline;
after first processing result data output by the pre-algorithm module of the single data pipeline is obtained, fourth processing result data output by the pre-algorithm modules of the other data pipelines are continuously waited;
judging whether the fourth processing result data output by the pre-algorithm modules of the other data pipelines and the first processing result data are the same frame result;
and if so, scheduling the current algorithm module to process the first processing result data and the fourth processing result data to obtain second processing result data.
5. The algorithm module scheduling method of claim 4,
after judging whether the fourth processing result data output by the pre-algorithm modules of the other data pipelines and the first processing result data are the same frame result, the algorithm module scheduling method further comprises the following steps:
if not, continuously waiting for fifth processing result data output by the pre-algorithm modules of the other data pipelines until the fifth processing result data and the first processing result data are the same frame of result, and scheduling the current algorithm module to process the first processing result data and the fifth processing result data to obtain the second processing result data.
6. The algorithmic module scheduling method of claim 1,
after determining whether the first processing result data and the sixth processing result data are the same frame result, the algorithm module scheduling method further includes:
if not, judging whether the result acquisition strategy of the current algorithm module is an interframe synchronization strategy or not;
and when the result obtaining strategy of the current algorithm module is an interframe synchronization strategy, scheduling the current algorithm module to process the first processing result data and the sixth processing result data to obtain second processing result data.
7. The algorithmic module scheduling method of claim 6,
after judging whether the result obtaining strategy of the current algorithm module is an interframe synchronization strategy, the algorithm module scheduling method further comprises the following steps:
and outputting program abnormal alarm information when the result acquisition strategy of the current algorithm module is not the interframe synchronization strategy.
8. The algorithmic module scheduling method of claim 1,
after the scheduling of the current algorithm module processes the first processing result data to obtain the second processing result data, the algorithm module scheduling method further includes:
placing the second processing result data in a result queue of a data pipeline where the current algorithm module is located;
and scheduling a post algorithm module of the same data pipeline as the current algorithm module to process second processing result data in the result queue.
9. The algorithmic module scheduling method of claim 1,
the current algorithm module and the at least one preposed algorithm module are algorithm modules with a front-back sequence dependency relationship in a data pipeline;
the data system comprises a plurality of data pipelines, the data pipelines share at least one initial algorithm module and at least one result algorithm module, intermediate algorithm modules among different data pipelines are connected in parallel, and the current algorithm module is any algorithm module in the data system, which is not the first initial algorithm module.
10. The algorithm module scheduling method of claim 9,
each data pipeline is provided with a result queue for storing the result data of the algorithm module;
the obtaining of the first processing result data output by at least one pre-algorithm module based on the result obtaining strategy includes:
acquiring a current result queue corresponding to a data pipeline where the current algorithm module is located based on the result acquisition strategy;
extracting first processing result data output by at least one pre-algorithm module of the same data pipeline from the current result queue;
after the current algorithm module is scheduled to process the first processing result data to obtain second processing result data, the algorithm module scheduling method further comprises:
and placing the second processing result data in the current result queue.
11. An algorithm module scheduling device, characterized by comprising an acquisition strategy module, a data acquisition module and a scheduling processing module; wherein:
the obtaining strategy module is used for obtaining the result obtaining strategy of the current algorithm module;
the data acquisition module is used for acquiring first processing result data output by at least one pre-algorithm module based on the result acquisition strategy;
the scheduling processing module is used for scheduling the current algorithm module to process the first processing result data to obtain second processing result data;
the data acquisition module is further used for acquiring first processing result data output by the pre-algorithm module of the single data pipeline when the result acquisition strategy is not an interframe asynchronous strategy;
the data acquisition module is further used for continuing waiting for sixth processing result data output by the pre-algorithm modules of the other data pipelines after acquiring the first processing result data output by the pre-algorithm module of the single data pipeline;
the scheduling processing module is further configured to determine whether the first processing result data and the sixth processing result data are the same frame result until sixth processing result data output by the pre-algorithm module of all the other data pipelines are obtained; and if so, scheduling the current algorithm module to process the first processing result data and the sixth processing result data to obtain second processing result data.
12. An algorithmic module scheduling means comprising a processor and a memory, the memory having stored therein program data, the processor being configured to execute the program data to perform the algorithmic module scheduling method as defined in any of claims 1 to 10.
13. A computer-readable storage medium for storing program data which, when executed by a processor, is adapted to implement the algorithm module scheduling method of any of claims 1-10.
CN202210415147.1A 2022-04-20 2022-04-20 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium Active CN114518917B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210415147.1A CN114518917B (en) 2022-04-20 2022-04-20 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
PCT/CN2022/125356 WO2023202006A1 (en) 2022-04-20 2022-10-14 Systems and methods for task execution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210415147.1A CN114518917B (en) 2022-04-20 2022-04-20 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium

Publications (2)

Publication Number Publication Date
CN114518917A CN114518917A (en) 2022-05-20
CN114518917B (en) 2022-08-09

Family

ID=81600206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210415147.1A Active CN114518917B (en) 2022-04-20 2022-04-20 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium

Country Status (2)

Country Link
CN (1) CN114518917B (en)
WO (1) WO2023202006A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114518917B (en) * 2022-04-20 2022-08-09 浙江大华技术股份有限公司 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021135699A1 (en) * 2019-12-31 2021-07-08 思必驰科技股份有限公司 Decision scheduling customization method and device based on information flow
WO2021179588A1 (en) * 2020-03-13 2021-09-16 北京旷视科技有限公司 Computing resource scheduling method and apparatus, electronic device, and computer readable storage medium
CN113886092A (en) * 2021-12-07 2022-01-04 苏州浪潮智能科技有限公司 Computation graph execution method and device and related equipment

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6430590B1 (en) * 1999-01-29 2002-08-06 International Business Machines Corporation Method and apparatus for processing executable program modules having multiple dependencies
US8015564B1 (en) * 2005-04-27 2011-09-06 Hewlett-Packard Development Company, L.P. Method of dispatching tasks in multi-processor computing environment with dispatching rules and monitoring of system status
CN105744279A (en) * 2014-12-10 2016-07-06 北京君正集成电路股份有限公司 Method and device for achieving interframe synchronization in video coding and decoding
CN106293971A (en) * 2016-08-15 2017-01-04 张家林 A kind of method and apparatus of distributed task dispatching
CN106815071A (en) * 2017-01-12 2017-06-09 上海轻维软件有限公司 Big data job scheduling system based on directed acyclic graph
CN109933422A (en) * 2017-12-19 2019-06-25 北京京东尚科信息技术有限公司 Method, apparatus, medium and the electronic equipment of processing task
CN109409513B (en) * 2018-10-10 2021-03-12 广州市百果园信息技术有限公司 Task processing method based on neural network and related equipment
CN109491777A (en) * 2018-11-12 2019-03-19 北京字节跳动网络技术有限公司 Task executing method, device, equipment and storage medium
US11288601B2 (en) * 2019-03-21 2022-03-29 International Business Machines Corporation Self-learning selection of information-analysis runtimes
CN113010276A (en) * 2020-06-11 2021-06-22 深圳市科脉技术股份有限公司 Task scheduling method and device, terminal equipment and storage medium
CN111737075A (en) * 2020-06-19 2020-10-02 浙江大华技术股份有限公司 Execution sequence determination method and device, storage medium and electronic device
CN111767149B (en) * 2020-06-29 2024-03-05 百度在线网络技术(北京)有限公司 Scheduling method, device, equipment and storage equipment
CN111861412B (en) * 2020-07-27 2024-03-15 上海交通大学 Completion time optimization-oriented scientific workflow scheduling method and system
CN113760488A (en) * 2020-08-28 2021-12-07 北京沃东天骏信息技术有限公司 Method, device, equipment and computer readable medium for scheduling task
CN113986495A (en) * 2021-10-26 2022-01-28 北京百度网讯科技有限公司 Task execution method, device, equipment and storage medium
CN114518917B (en) * 2022-04-20 2022-08-09 浙江大华技术股份有限公司 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021135699A1 (en) * 2019-12-31 2021-07-08 思必驰科技股份有限公司 Decision scheduling customization method and device based on information flow
WO2021179588A1 (en) * 2020-03-13 2021-09-16 北京旷视科技有限公司 Computing resource scheduling method and apparatus, electronic device, and computer readable storage medium
CN113886092A (en) * 2021-12-07 2022-01-04 苏州浪潮智能科技有限公司 Computation graph execution method and device and related equipment

Also Published As

Publication number Publication date
WO2023202006A1 (en) 2023-10-26
CN114518917A (en) 2022-05-20

Similar Documents

Publication Publication Date Title
US11188380B2 (en) Method and apparatus for processing task in smart device
CN111818136B (en) Data processing method, device, electronic equipment and computer readable medium
CN109873863B (en) Asynchronous calling method and device of service
CN114518917B (en) Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
CN110333916B (en) Request message processing method, device, computer system and readable storage medium
CN111240834A (en) Task execution method and device, electronic equipment and storage medium
CN108111630B (en) Zookeeper cluster system and connection method and system thereof
CN109815056A (en) Realize method, apparatus, storage medium and the equipment of message queue reconnection
CN111694640B (en) Data processing method, device, electronic equipment and storage medium
CN107678863A (en) The page assembly means of communication and device
CN110968433A (en) Information processing method and system and electronic equipment
CN113760498A (en) Message consumption method, device, electronic equipment and computer readable medium
CN113127225A (en) Method, device and system for scheduling data processing tasks
CN115098254A (en) Method and system for triggering execution of subtask sequence and electronic equipment
CN114064227A (en) Interrupt processing method, device, equipment and storage medium applied to microkernel
CN111338775B (en) Method and equipment for executing timing task
CN112799797B (en) Task management method and device
CN114372690A (en) Order processing method, system, device and storage medium
CN113537893A (en) Order processing method, device, equipment and computer readable medium
CN109840073B (en) Method and device for realizing business process
CN112333262A (en) Data updating prompting method and device, computer equipment and readable storage medium
CN111625442B (en) Mock testing method and device
CN116010126B (en) Service aggregation method, device and system
CN115953282B (en) Video task processing method and device
WO2016110010A1 (en) Service implementation method and device, and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant