CN114896033A - Data processing method and device, storage medium and electronic device - Google Patents

Data processing method and device, storage medium and electronic device

Info

Publication number
CN114896033A
CN114896033A
Authority
CN
China
Prior art keywords
data processing
task
information
target engine
engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210338063.2A
Other languages
Chinese (zh)
Inventor
彭垚
葛利亚
屈啸
张云剑
刘平
周伟民
顾周超
王亚明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Original Assignee
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shanma Zhiqing Technology Co Ltd, Shanghai Supremind Intelligent Technology Co Ltd filed Critical Hangzhou Shanma Zhiqing Technology Co Ltd
Priority to CN202210338063.2A priority Critical patent/CN114896033A/en
Publication of CN114896033A publication Critical patent/CN114896033A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3055Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the invention provides a data processing method, an apparatus, a computer-readable storage medium, and an electronic apparatus, relating to the technical field of data processing. The method includes the following steps: acquiring data processing task information; performing task allocation processing on the data processing task information through a preset first allocation algorithm to obtain a first allocation result; and sending a data processing instruction to a target engine according to the first allocation result, so as to instruct the target engine to execute a data processing operation according to the data processing task information. The method and apparatus solve the problem of low data processing efficiency and improve both the efficiency and the precision of data processing.

Description

Data processing method and device, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of communication, in particular to a data processing method, a data processing device, a storage medium and an electronic device.
Background
With the continuous development of the field of artificial intelligence (AI), the ways in which multimedia data is processed have advanced considerably.
For example, when performing point-to-point analysis of video streams, existing data scheduling methods have the algorithm engine obtain tasks in pull mode. To keep scheduling fast, the pull is generally random, so this scheduling mode cannot schedule according to criteria such as task priority, which reduces task processing precision. After tasks are pulled, their assignment to engines is also random, which leaves the algorithm engines' task loads unbalanced, reducing task processing efficiency and wasting computing resources.
Disclosure of Invention
Embodiments of the present invention provide a data processing method, an apparatus, a storage medium, and an electronic apparatus, so as to at least solve the problem of low data processing efficiency in the related art.
According to an embodiment of the present invention, there is provided a data processing method including:
acquiring data processing task information;
performing task allocation processing on the data processing task information through a preset first allocation algorithm to obtain a first allocation result;
and sending a data processing instruction to a target engine according to the first allocation result so as to instruct the target engine to execute a data processing operation according to the data processing task information.
In an exemplary embodiment, after said sending a data processing instruction to a target engine according to said first allocation result, said method further comprises:
determining a task processing state according to a received data processing result, wherein the data processing result is fed back after the target engine executes the data processing operation;
in the case that the task processing state is determined to be a first state, sending the first data processing instruction to a first engine to instruct the first engine to perform a data processing operation, wherein the first engine is an engine other than the target engine;
and under the condition that the task processing state is determined to be the second state, performing information updating processing on the data processing task information.
In an exemplary embodiment, before the performing the task allocation processing on the data processing task information through the preset first allocation algorithm, the method further includes:
acquiring registration information of the target engine;
sending a heartbeat monitoring instruction to the target engine based on the registration information to instruct the target engine to send heartbeat information, wherein the heartbeat information comprises a real-time working state of the target engine;
and receiving heartbeat information sent by the target engine.
In one exemplary embodiment, the method further comprises:
acquiring a push-stream switch instruction, wherein the push-stream switch instruction is used for instructing the target engine to execute a push-stream switch operation;
storing the push-stream switch state based on the push-stream switch instruction;
and, in the case of receiving a push-stream update request, sending push-stream switch information to the target engine according to the stored push-stream switch state so as to instruct the target engine to execute the push-stream switch operation, wherein the push-stream update request is sent by the target engine upon receiving push-stream change information.
In an exemplary embodiment, the performing, by a preset first allocation algorithm, task allocation processing on the data processing task information to obtain a first allocation result includes:
determining a factor weight in the first allocation algorithm based on the data processing task information;
and executing the task allocation processing based on the factor weight and the data processing task information to obtain the first allocation result.
According to another embodiment of the present invention, there is provided a data processing apparatus including:
the information acquisition module is used for acquiring data processing task information;
the task allocation module is used for performing task allocation processing on the data processing task information through a preset first allocation algorithm to obtain a first allocation result;
and the instruction sending module is used for sending a data processing instruction to a target engine according to the first allocation result so as to instruct the target engine to execute a data processing operation according to the data processing task information.
In one exemplary embodiment, the apparatus further comprises:
the task state determining module is used for determining a task processing state according to a received data processing result, wherein the data processing result is fed back after the target engine executes the data processing operation;
a first instruction sending module, configured to, after sending a data processing instruction to a target engine according to the first allocation result, send the first data processing instruction to a first engine to instruct the first engine to perform a data processing operation when it is determined that the task processing state is a first state, where the first engine is an engine other than the target engine;
and the information updating module is used for performing information updating processing on the data processing task information under the condition that the task processing state is determined to be the second state.
In one exemplary embodiment, the apparatus further comprises:
the information registration module is used for acquiring the registration information of the target engine before the data processing task information is subjected to task allocation processing through a preset first allocation algorithm;
the monitoring instruction sending module is used for sending a heartbeat monitoring instruction to the target engine based on the registration information so as to instruct the target engine to send heartbeat information, wherein the heartbeat information comprises a real-time working state of the target engine;
and the heartbeat information receiving module is used for receiving the heartbeat information sent by the target engine.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, comprising a memory in which a computer program is stored and a processor configured to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, the engine state is determined before the data processing instruction is sent to the target engine. This avoids situations in which the target engine, being fully loaded, cannot process the relevant tasks or processes them at a low rate, and ensures that data processing tasks can actually be handled by the engine. The problem of low data processing efficiency is thereby solved, and the effect of improving data processing efficiency is achieved.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a data processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of data processing according to an embodiment of the present invention;
FIG. 3 is a block diagram of a data processing apparatus according to an embodiment of the present invention;
FIG. 4 is flowchart one according to an embodiment of the present invention;
FIG. 5 is a block diagram of an apparatus according to an embodiment of the present invention;
FIG. 6 is flowchart two according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking execution on a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal running a data processing method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 can be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to a data processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, a data processing method is provided, and fig. 2 is a flowchart according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring data processing task information;
in this embodiment, the data processing task information is obtained to determine the type of the task to be processed, so that the task can be accurately allocated to the corresponding algorithm engine, and the algorithm engine can be efficiently utilized.
The data processing task information comprises information such as the task type, task quantity, computing-power requirement, task target time, task period, and task address of the data processing tasks. The data processing task information may be obtained (but is not limited to being obtained) by periodically monitoring the operation page, received through a fixed interface, or acquired by other means.
For example, a user starts a task through the touch-screen operation interface. The task management service module then adds the task to the to-be-processed task list by calling an interface of the scheduler manager, and assigns the task to an algorithm engine analyzer through a task scheduling instruction. The algorithm engine analyzer, in turn, obtains the task information by listening and then starts the analysis processing of the task.
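The submission flow above can be sketched as follows. This is a minimal illustration only; the class and method names (`SchedulerManager`, `add_task`, the `Task` fields) are assumptions for the sketch, not interfaces defined by the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    task_id: str
    task_type: str   # e.g. "red_light_detection" (illustrative type name)
    stream_url: str  # address of the video stream to analyze
    priority: int = 0

class SchedulerManager:
    """Hypothetical scheduler interface the task management service calls."""
    def __init__(self) -> None:
        self.pending: List[Task] = []

    def add_task(self, task: Task) -> None:
        # Enqueue the task into the to-be-processed list.
        self.pending.append(task)
        # Keep higher-priority tasks at the front so they are allocated first,
        # addressing the priority-blind random pull described in the background.
        self.pending.sort(key=lambda t: -t.priority)

scheduler = SchedulerManager()
scheduler.add_task(Task("t1", "red_light_detection", "rtsp://camera-01", priority=2))
scheduler.add_task(Task("t2", "congestion", "rtsp://camera-02", priority=1))
print(scheduler.pending[0].task_id)  # the higher-priority task is first
```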
Step S204, carrying out task allocation processing on the data processing task information through a preset first allocation algorithm to obtain a first allocation result;
in this embodiment, the task allocation processing of the data processing task information is to arrange the corresponding algorithm engine to process the relevant data processing task, so as to improve the utilization efficiency of the algorithm engine, and enable the data processing task to be processed by the appropriate engine, thereby improving the data processing efficiency.
The first allocation algorithm may be (but is not limited to) an algorithm model for performing weight calculation according to the video stream attribute and the engine attribute, an algorithm model for performing weight calculation according to the encoding format, and an algorithm model for performing weight calculation according to bandwidth, memory consumption, and the like; the first allocation result includes (but is not limited to) information such as the number of engines allocated, an engine number code, an engine address, and the like.
For example, when calculating task weight and engine analysis quota, the video stream attributes and the algorithm model can be taken into account and converted into a consistent measure. The video stream attributes include resolution, encoding mode, and bit rate: resolution is generally 720P, 1080P, 2K, 4K, and the like; the encoding mode is H264 or H265; and the bit rate is generally 2 Mbps, 4 Mbps, 8 Mbps, 16 Mbps, and the like. These attributes affect the GPU decoding, video memory, and network bandwidth consumed by algorithm analysis.
Taking network bandwidth as an example, a 100 Mbps network card can carry 25 streams at a 4 Mbps bit rate; considering this dimension alone, the number of streams an engine analyzes cannot exceed 25. The other resource dimensions are handled in the same way.
It should be noted that the algorithm model generally includes detection, analysis, tracking, and the like, and may affect consumption of the GPU, the CPU, the memory, and the like.
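The quota calculation suggested by the bandwidth example can be sketched as taking the minimum over all resource dimensions. The specific cost numbers and attribute names below are illustrative assumptions, not values from the patent; only the 100 Mbps / 4 Mbps bandwidth figure comes from the text.

```python
def stream_cost(resolution: str, codec: str, bitrate_mbps: float) -> dict:
    """Rough per-stream resource cost in each dimension (illustrative units)."""
    decode_units = {"720P": 1.0, "1080P": 2.0, "2K": 3.0, "4K": 5.0}[resolution]
    if codec == "H265":
        decode_units *= 1.5  # assumed: H265 decoding costs more than H264
    return {"bandwidth_mbps": bitrate_mbps, "gpu_decode": decode_units}

def engine_quota(capacity: dict, cost: dict) -> int:
    # The engine can take no more streams than its tightest dimension allows.
    return min(int(capacity[k] // cost[k]) for k in cost)

cost = stream_cost("1080P", "H264", 4.0)
capacity = {"bandwidth_mbps": 100.0, "gpu_decode": 60.0}
print(engine_quota(capacity, cost))  # bandwidth-limited: 100 // 4 = 25 streams
```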
Step S206, sending a data processing instruction to the target engine according to the first allocation result, so as to instruct the target engine to execute a data processing operation according to the data processing task information.
In this embodiment, the sending of the data processing instruction to the target engine is to accurately control the algorithm engine to process the data processing task, so as to ensure the data processing efficiency, and avoid overload or idle of the algorithm engine, so that the algorithm engine can accurately process the corresponding task.
The data processing instruction may be (but is not limited to) a data stream, a data packet, or an electrical signal; it may be transmitted over Ethernet, over 3G/4G/5G or quantum communication, or in other ways. There may be one or more target engines, and the correspondence between target engines and data processing instructions may be (but is not limited to) multiple target engines per data processing instruction, or one data processing instruction per target engine.
Through the above steps, the data processing task information undergoes allocation calculation before the task is assigned to the target engine, and the target engine is then selected according to the calculation result. This avoids situations in which an engine cannot process the task accurately or processes it inefficiently, thereby solving the problem of low data processing efficiency and improving both the efficiency and precision of data processing.
The main body of the above steps may be a base station, a terminal, etc., but is not limited thereto.
In an optional embodiment, after sending the data processing instruction to the target engine according to the first allocation result, the method further comprises:
step S208, determining a task processing state according to the received data processing result, wherein the data processing result is fed back after the target engine executes the data processing operation;
step S2010, in a case that it is determined that the task processing state is the first state, sending a first data processing instruction to the first engine to instruct the first engine to perform a data processing operation, wherein the first engine is an engine other than the target engine;
in step S2012, if it is determined that the job processing status is the second status, the information update process is performed on the data processing job information.
In this embodiment, instructing the first engine to perform the data processing operation ensures that the task can still be processed when the target engine cannot perform the data processing operation. The information update processing on the data processing task information avoids the waste of energy caused by continuing to process data that is faulty or erroneous, thereby ensuring that data processing proceeds normally.
The first state may (but is not limited to) be a state in which the target engine fails to work normally, such as a failure of the target engine, a failure of the target engine to perform a data processing operation, and the like, and the second state may (but is not limited to) be a state in which there is a problem that data to be processed has an error or an unidentifiable data processing task is allocated; the number of the first engines can be multiple or one; the information updating process includes, but is not limited to, updating the data to be processed, stopping the processing of the data, and the like.
For example, when an engine encounters an analysis error while analyzing a task and needs to retry, or when the analysis ultimately fails, it can call the task-state update interface of the scheduler server to report the state, and the scheduler server updates the task state accordingly. Alternatively, the algorithm engine analyzer learns through monitoring that the task has been deleted and stops analyzing it; or the target engine monitors that the task state has changed from started to waiting, the task information is deleted, and the engine stops the analysis after observing the state change.
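The state handling in steps S208–S2012 can be sketched as a small dispatch function. The state names (`engine_failed` for the "first state", `task_invalid` for the "second state") and the fallback policy are assumptions based on the description above, not terms from the patent.

```python
def handle_result(task_id, state, target_engine, other_engines, task_table):
    """Hypothetical scheduler-side reaction to an engine's reported result."""
    if state == "engine_failed":
        # "First state": the target engine cannot process the task, so resend
        # the data processing instruction to an engine other than the target.
        fallback = next(e for e in other_engines if e != target_engine)
        return ("resend", fallback)
    if state == "task_invalid":
        # "Second state": the data is faulty or the task was deleted, so
        # update the task information instead of retrying.
        task_table[task_id] = "stopped"
        return ("updated", None)
    return ("ok", target_engine)

task_table = {}
action, engine = handle_result(
    "t1", "engine_failed", "engine-A", ["engine-A", "engine-B"], task_table
)
print(action, engine)  # the task is resent to the fallback engine
```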
In an optional embodiment, before performing the task allocation processing on the data processing task information through a preset first allocation algorithm, the method further includes:
step S20402, acquiring the registration information of the target engine;
step S20404, sending a heartbeat monitoring instruction to the target engine based on the registration information to instruct the target engine to send heartbeat information, wherein the heartbeat information comprises a real-time working state of the target engine;
step S20406, receiving the heartbeat information sent by the target engine.
In this embodiment, the heartbeat information is obtained to monitor the working state of the target engine in real time, so as to accurately control the target engine and ensure the normal data processing.
The registration information includes (but is not limited to) the target engine's data processing type, data processing quantity, working state, current workload, maximum workload, address information, interface information, and interface quantity. The heartbeat information further includes the target engine's real-time on/off state, load, and similar information.
For example, the algorithm engine analyzer registers engine information and keeps heartbeat by calling an interface of the scheduler when starting.
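The registration-and-heartbeat flow of steps S20402–S20406 can be sketched as below. The `Scheduler` interface and the timeout value are assumptions for illustration; the patent only specifies that the engine registers and keeps a heartbeat carrying its real-time working state.

```python
import time

class Scheduler:
    """Hypothetical scheduler that tracks engine registration and liveness."""
    def __init__(self) -> None:
        self.engines = {}  # engine_id -> registration info and last heartbeat

    def register(self, engine_id: str, info: dict) -> None:
        # Called by the algorithm engine analyzer when it starts up.
        self.engines[engine_id] = {"registration": info, "last_heartbeat": None}

    def heartbeat(self, engine_id: str, status: dict) -> None:
        # The engine periodically reports its real-time working state.
        self.engines[engine_id]["last_heartbeat"] = (time.monotonic(), status)

    def alive(self, engine_id: str, timeout: float = 10.0) -> bool:
        # An engine is considered alive if a heartbeat arrived within `timeout`.
        hb = self.engines[engine_id]["last_heartbeat"]
        return hb is not None and time.monotonic() - hb[0] < timeout

sched = Scheduler()
sched.register("analyzer-1", {"max_load": 25, "address": "10.0.0.5:8000"})
sched.heartbeat("analyzer-1", {"load": 3, "state": "running"})
print(sched.alive("analyzer-1"))  # alive right after a heartbeat
```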
In an optional embodiment, the method further comprises:
step S2014, acquiring a push-stream switch instruction, wherein the push-stream switch instruction is used for instructing the target engine to execute a push-stream switch operation;
step S2016, storing the push-stream switch state based on the push-stream switch instruction;
and step S2018, in the case that a push-stream update request is received, sending push-stream switch information to the target engine according to the stored push-stream switch state so as to instruct the target engine to execute the push-stream switch operation, wherein the push-stream update request is sent by the target engine upon receiving push-stream change information.
In this embodiment, storing the push-stream switch state and sending the switch information to the target engine only upon receiving a push-stream update request buffers the target engine and preloads the task, avoiding system failures caused by the target engine executing a burst of operations in a short time, thereby ensuring that information processing proceeds normally.
The push-stream change information may be (but is not limited to) obtained by the target engine through real-time monitoring of the information stream, entered by the user on a page, or obtained in other ways; the push-stream switch instruction may take (but is not limited to) the form of an electrical signal, a data stream, or a data packet.
It should be noted that executing the push-stream switch operation lets the target engine turn stream pushing on or off without restarting the task to pull the stream again.
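The switch-state caching of steps S2014–S2018 can be sketched as follows: the scheduler stores the requested switch state and only replays it when the engine asks for a push update, so the engine toggles pushing without re-pulling the stream. The class and method names are illustrative assumptions.

```python
class PushSwitchCache:
    """Hypothetical scheduler-side cache for per-task push-stream switch state."""
    def __init__(self) -> None:
        self._state = {}  # task_id -> "on" | "off"

    def set_switch(self, task_id: str, state: str) -> None:
        # Store the switch state instead of forwarding it to the engine
        # immediately, buffering the engine against rapid toggling.
        self._state[task_id] = state

    def on_push_update_request(self, task_id: str) -> str:
        # When the engine reports push-stream change information, reply with
        # the cached state so it can switch pushing on or off without
        # restarting the task to pull the stream again.
        return self._state.get(task_id, "off")

cache = PushSwitchCache()
cache.set_switch("t1", "on")
print(cache.on_push_update_request("t1"))  # cached state is replayed
```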
In an optional embodiment, the performing, by using a preset first allocation algorithm, task allocation processing on the data processing task information to obtain a first allocation result includes:
step S2042, determining the factor weight in the first distribution algorithm based on the data processing task information;
step S2044, based on the factor weight and the data processing task information, performs task allocation processing to obtain a first allocation result.
In this embodiment, the factor weight is determined to judge the distribution of the data processing task from multiple angles, so as to process the related data more accurately and improve the data processing efficiency.
The factor weight includes (but is not limited to) data resolution, encoding mode, code rate, GPU decoding, video memory and network bandwidth consumption, CPU and memory consumption, engine rated workload, engine real-time workload, and the like.
For example, taking a traffic event as an example, different algorithmic models consume different resources:
the motor vehicle runs the red light includes: traffic light classification + vehicle detection + attribute + tracking + license plate
The motor vehicle drives in reverse comprising: vehicle detection + attribute + tracking + license plate
The traffic jam includes: vehicle detection
In this scenario, the resource consumption of the corresponding algorithm models ranks: red-light running > wrong-way driving > traffic congestion. The three scenarios are therefore assigned to different algorithm engines for computation, so that the data of the three scenarios is processed separately.
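The ranking above can be reproduced by summing per-model costs, one sub-model at a time. The numeric cost units below are assumptions for the sketch; only the model compositions (which sub-models each scenario uses) come from the text.

```python
# Assumed relative resource cost of each sub-model (illustrative units).
MODEL_COST = {"traffic_light_cls": 2, "vehicle_detect": 3,
              "attribute": 1, "tracking": 2, "license_plate": 2}

# Sub-model composition of each scenario, as listed in the description.
SCENARIOS = {
    "red_light": ["traffic_light_cls", "vehicle_detect", "attribute",
                  "tracking", "license_plate"],
    "wrong_way": ["vehicle_detect", "attribute", "tracking", "license_plate"],
    "congestion": ["vehicle_detect"],
}

def scenario_cost(name: str) -> int:
    return sum(MODEL_COST[m] for m in SCENARIOS[name])

# Rank scenarios from most to least resource-hungry, matching the
# red_light > wrong_way > congestion ordering in the text.
ranked = sorted(SCENARIOS, key=scenario_cost, reverse=True)
print(ranked)
```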
In an optional embodiment, the method further comprises:
step S2020, a task stop instruction is obtained;
step S2022, according to the task stop instruction, executing deletion operation on the target data processing task information;
step S2024, in case of receiving the task deletion request, sends a task deletion instruction to the target engine to instruct the target engine to stop performing the data processing operation.
In this embodiment, instructing the engine to stop the operation by deleting the task avoids erroneous processing caused by the target engine misidentifying the task.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a data processing apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and details are not repeated for what has been described. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 3 is a block diagram of a data processing apparatus according to an embodiment of the present invention, as shown in fig. 3, the apparatus including:
the information acquisition module 32 is used for acquiring data processing task information;
the task allocation module 34 is configured to perform task allocation processing on the data processing task information through a preset first allocation algorithm to obtain a first allocation result;
and an instruction sending module 36, configured to send a data processing instruction to a target engine according to the first allocation result, so as to instruct the target engine to perform a data processing operation according to the data processing task information.
In an optional embodiment, the apparatus further comprises:
a task state determining module 38, configured to determine a task processing state according to a received data processing result, where the data processing result is fed back by the target engine after performing the data processing operation;
a first instruction sending module 310, configured to, after sending a data processing instruction to a target engine according to the first allocation result, send the first data processing instruction to a first engine to instruct the first engine to perform a data processing operation if it is determined that the task processing state is a first state, where the first engine is an engine other than the target engine;
and an information updating module 312, configured to perform information updating processing on the data processing task information when it is determined that the task processing state is the second state.
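The state-driven handling above can be sketched as follows, assuming the first state means the analysis failed (so the task is resent to an engine other than the target engine) and the second state means the task information needs refreshing; the concrete state names and the version counter are illustrative assumptions, not the patent's actual implementation:

```python
def handle_result(task, state, target_engine, engines):
    """React to the task processing state reported after a data processing operation.

    Returns the engine the task should be resent to, or None if no reassignment is needed.
    """
    if state == "failed":  # assumed "first state": reassign to a first engine
        # Pick an engine other than the target engine and resend the instruction.
        others = [e for e in engines if e != target_engine]
        return others[0] if others else None
    if state == "stale":   # assumed "second state": update the task information
        task["version"] = task.get("version", 0) + 1
        return None
    return None
```

For example, a task reported as failed on engine "e1" would be redirected to "e2", while a stale task only has its information updated in place.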
In an optional embodiment, the apparatus further comprises:
an information registration module 314, configured to obtain registration information of the target engine before performing task allocation processing on the data processing task information through a preset first allocation algorithm;
a monitoring instruction sending module 316, configured to send a heartbeat monitoring instruction to the target engine based on the registration information to instruct the target engine to send heartbeat information, where the heartbeat information includes a real-time working state of the target engine;
a heartbeat information receiving module 318, configured to receive heartbeat information sent by the target engine.
In an optional embodiment, the apparatus further comprises:
a stream-push instruction acquisition module 320, configured to acquire a stream-push switch instruction, where the stream-push switch instruction is used to instruct the target engine to perform a stream-push switch operation;
an instruction storage module 322, configured to store the stream-push switch state based on the stream-push switch instruction;
and an information sending module 324, configured to, when a stream-push update request is received, send stream-push switch information to the target engine according to the stream-push switch state to instruct the target engine to perform the stream-push switch operation, where the stream-push update request is sent by the target engine when it receives stream-push change information.
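The stream-push control flow carried by these modules can be sketched as follows; the class and method names are illustrative assumptions, not the patent's actual implementation:

```python
class StreamPushScheduler:
    """Illustrative sketch of the stream-push switch flow (all names are assumptions)."""

    def __init__(self):
        self.push_state = {}  # task_id -> True (push on) / False (push off)
        self.sent = []        # record of switch messages sent to engines

    def store_switch(self, task_id, on):
        # Instruction storage module: persist the desired stream-push switch state.
        self.push_state[task_id] = on

    def on_push_update_request(self, engine, task_id):
        # Information sending module: when the engine reports a stream-push change
        # and requests an update, send it the stored switch state.
        state = self.push_state.get(task_id, False)
        self.sent.append((engine, task_id, state))
        return state
```

The switch state is stored first; when the target engine later sends an update request, the stored state decides whether it opens or closes stream pushing.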
In an alternative embodiment, task assignment module 34 includes:
a weight determination unit 342 for determining a factor weight in the first allocation algorithm based on the data processing task information;
an assigning unit 344, configured to perform the task assigning process based on the factor weight and the data processing task information to obtain the first assigning result.
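Determining a factor weight from the task information and allocating on that basis might look like the sketch below; the per-factor costs, the weight formula, and the most-remaining-quota policy are all illustrative assumptions, not values from the patent:

```python
# Hypothetical per-factor costs; the patent does not specify concrete numbers.
RESOLUTION_COST = {"720P": 1, "1080P": 2, "2K": 4, "4K": 8}
CODEC_COST = {"H264": 1.0, "H265": 1.5}

def task_weight(task):
    """Convert a task's stream attributes into a single comparable weight."""
    return (RESOLUTION_COST[task["resolution"]]
            * CODEC_COST[task["codec"]]
            + task["bitrate_mbps"] / 4.0)

def allocate(task, engines):
    """Assign the task to the eligible engine with the most remaining quota."""
    w = task_weight(task)
    candidates = [e for e in engines if e["remaining"] >= w]
    if not candidates:
        return None
    best = max(candidates, key=lambda e: e["remaining"])
    best["remaining"] -= w
    return best["name"]
```

The first allocation result here is simply the name of the selected engine; a real scheduler would also record the assignment so the data processing instruction can be sent to that target engine.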
In an optional embodiment, the apparatus further comprises:
a stop instruction acquisition module 326 for acquiring a task stop instruction;
a delete operation module 328, configured to perform a delete operation on the target data processing task information according to the task stop instruction;
and a stop execution module 330, configured to, in a case that a task deletion request is received, send a task deletion instruction to the target engine to instruct the target engine to stop executing the data processing operation.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
The present invention will be described with reference to specific examples.
As shown in fig. 4-6, the present invention includes the following processes:
in step S41 (corresponding to the foregoing steps S20402-S20406), when the algorithm engine analyzer starts, it registers its engine information by calling the scheduler's interface and maintains a heartbeat. The scheduler monitors engine changes: if a newly added engine is detected, its information is added to the list of schedulable engines; if an engine is detected to have disconnected, the tasks that engine was analyzing are deleted and added back to the pending task list, so that they can be rescheduled to other engines.
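Step S41 can be sketched as below; the lease bookkeeping is simplified to in-memory dictionaries (a real implementation would use etcd leases, and all names here are assumptions):

```python
import time

class EngineRegistry:
    def __init__(self, lease_ttl=10.0):
        self.lease_ttl = lease_ttl
        self.engines = {}  # engine_id -> timestamp of last heartbeat
        self.running = {}  # engine_id -> set of task ids being analyzed
        self.pending = []  # tasks awaiting (re)scheduling

    def register(self, engine_id, now=None):
        # Newly added engine detected: add it to the schedulable engine list.
        self.engines[engine_id] = now if now is not None else time.time()
        self.running.setdefault(engine_id, set())

    def heartbeat(self, engine_id, now=None):
        self.engines[engine_id] = now if now is not None else time.time()

    def reap(self, now=None):
        # Engine disconnect detected (heartbeat expired): move its tasks
        # back to the pending list so other engines can pick them up.
        now = now if now is not None else time.time()
        for eid in [e for e, ts in self.engines.items() if now - ts > self.lease_ttl]:
            self.pending.extend(self.running.pop(eid, set()))
            del self.engines[eid]
```

An engine that stops heart-beating is reaped after the lease TTL, and its in-flight tasks become pending again.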
In step S42 (corresponding to the foregoing steps S202-S206), the user starts a task through the interface, and the task management service workflow adds the task to the pending task list by calling the scheduler's interface. The task is then distributed to an analyzer through task scheduling; the analyzer obtains the task information by listening and starts analyzing the task.
In step S43, while an engine is analyzing a task, if an analysis error requires a retry or the analysis ultimately fails, the engine calls the scheduler's task state update interface to report the state, and the scheduler's manager updates the task state accordingly.
Alternatively, the user opens or closes stream pushing through the interface, and the task management service workflow updates the stream-push state by calling the scheduler's interface; the algorithm engine analyzer then detects the change by listening to the stream-push state and opens or closes stream pushing accordingly (corresponding to the foregoing steps S2014-S2018).
In step S44, the user stops a task through the interface, and the task management service workflow deletes the task information by calling the scheduler's interface; the analyzer learns through listening that the task has been deleted and stops analyzing it (corresponding to the foregoing steps S2020-S2024).
For scheduled (periodically running) tasks, task state changes are detected by global polling of the task data. When a task's state changes from waiting to started, the task is added to the pending list and distributed to a suitable engine through scheduling; when the state changes from started back to waiting, the task information is deleted, and the engine stops analyzing the task once it detects the deletion.
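One polling pass over scheduled tasks might look like this sketch (the state names and data layout are assumptions):

```python
def poll_tasks(tasks, pending, stored):
    """One global-polling pass: react to waiting <-> started transitions.

    tasks:   task_id -> current state ("waiting" or "started")
    pending: list of task ids awaiting distribution
    stored:  task_id -> task info visible to engines (the watched storage)
    """
    for task_id, state in tasks.items():
        if state == "started" and task_id not in stored:
            # waiting -> started: add to the pending list for scheduling.
            pending.append(task_id)
            stored[task_id] = {"id": task_id}
        elif state == "waiting" and task_id in stored:
            # started -> waiting: delete the task info; engines that watch
            # the storage notice the deletion and stop analyzing.
            del stored[task_id]
    return pending, stored
```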
Specifically, as shown in fig. 5, the scheduler service mainly includes the following modules:
the TaskService task management server, used to interact with the task management service workflow;
the VmrService task scheduling server, used to interact with the algorithm engine analyzer;
Manager memory management, which implements different data management backends behind a common interface, such as etcd and an in-memory buffer;
and the Core module, which contains the various scheduling algorithms, task state processing, and the like.
The specific functions of each module are introduced as follows:
The TaskService task management server includes:
The task management service workflow performs the corresponding operations by calling the task client of the gRPC client, for example:
a start-task interface, called when the workflow starts a task, which adds the task information to the pending task list;
StopTask, called when the workflow stops a task, which deletes the task information;
and a stream-push state update interface, which updates the stream-push state.
The VmrService task scheduling server comprises:
The algorithm engine analyzer performs the corresponding operations by calling the VmrClient of the gRPC client, for example:
AnalyzerLease, engine registration (lease grant) + heartbeat (lease keep-alive);
PutTaskStatus, which updates the task state;
and WatchTask, which listens for tasks, covering both task information and the stream-push state.
Manager memory management includes:
Internal data management is encapsulated behind an interface so that different storage backends, such as etcd and an in-memory buffer, can be plugged in. Task scheduling storage on a server may use etcd, a distributed key-value store designed to reliably and quickly store critical data and provide access for shared configuration, service discovery, and scheduling coordination of a distributed system or computer cluster. Embedded scenarios such as smart boxes, which handle fewer tasks, can simply use in-memory storage.
The main contents of the memory management service are as follows:
AddTask, which adds a new task, writing the task information into storage;
DeleteTask, which deletes a task, removing the task information from storage;
GetTasks, which retrieves the task list, returning all tasks in storage;
PutTaskStream, which updates the stream-push state, writing the updated state into storage;
WatchTask, which listens for tasks, including task additions, deletions, and task state updates;
GetAnalyzer, which retrieves the list of all online algorithm engines for task scheduling selection;
and WatchAnalyzer, which listens for engines, including engine additions and deletions.
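The Manager's storage abstraction, one interface with interchangeable etcd or in-memory backends, can be sketched as below; a real etcd backend would go through an etcd client library, and everything here is illustrative:

```python
from abc import ABC, abstractmethod

class TaskStore(ABC):
    """Interface the Manager codes against; backends are interchangeable."""

    @abstractmethod
    def add_task(self, task_id, info): ...

    @abstractmethod
    def delete_task(self, task_id): ...

    @abstractmethod
    def get_tasks(self): ...

class MemoryStore(TaskStore):
    """In-memory backend for embedded scenarios (e.g. smart boxes) with few tasks."""

    def __init__(self):
        self._tasks = {}

    def add_task(self, task_id, info):
        self._tasks[task_id] = info

    def delete_task(self, task_id):
        self._tasks.pop(task_id, None)

    def get_tasks(self):
        return dict(self._tasks)
```

An `EtcdStore` implementing the same `TaskStore` interface could be swapped in for server deployments without touching the scheduling code.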
The Core module includes:
The core capabilities of the scheduling service include:
implementations of several task scheduling algorithms, such as random, first-come-first-served, maximum-weight, and minimum-weight, with the algorithm suited to each scenario selected through configuration;
implementations of several engine load balancing algorithms, such as random, round-robin, weighted random, and maximum-remaining-quota, likewise selected through configuration;
and a global polling task that, depending on whether a task's state has changed during the polling period, adds the task information to the to-be-distributed list or deletes it.
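Selecting among these scheduling algorithms through configuration might be sketched as follows; the registry mechanism and configuration key are assumptions, while the algorithm names mirror the list above:

```python
import random

def fifo(pending):
    # First-come-first-served: take the oldest pending task.
    return pending[0] if pending else None

def max_weight(pending):
    return max(pending, key=lambda t: t["weight"], default=None)

def min_weight(pending):
    return min(pending, key=lambda t: t["weight"], default=None)

def random_pick(pending):
    return random.choice(pending) if pending else None

SCHEDULERS = {"fifo": fifo, "max_weight": max_weight,
              "min_weight": min_weight, "random": random_pick}

def pick_next(pending, config):
    # The algorithm suited to the scenario is chosen via configuration.
    return SCHEDULERS[config["scheduler"]](pending)
```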
When calculating task weights and engine analysis quotas, the video stream attributes and the algorithm model can both be taken into account and converted into a roughly consistent unit of measurement.
The video stream attributes include resolution, encoding format, and bitrate. Resolution is typically 720P, 1080P, 2K, or 4K; the encoding format is H.264 or H.265; and the bitrate is typically 2 Mbps, 4 Mbps, 8 Mbps, or 16 Mbps. These attributes affect the GPU decoding, video memory, network bandwidth, and other resources consumed by algorithm analysis.
Taking network bandwidth as an example, a 100 Mbps network card can carry 25 streams at a 4 Mbps bitrate, so from this dimension alone the engine's analysis quota cannot exceed 25; other resource dimensions are handled in the same way.
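The bandwidth arithmetic generalizes directly: the per-dimension quota is capacity divided by per-stream cost, and the engine's overall quota is the minimum across dimensions. This is a sketch of the reasoning, not a formula stated in the patent:

```python
def stream_quota(capacity, per_stream_cost):
    """How many streams fit within one resource dimension."""
    return int(capacity // per_stream_cost)

def engine_quota(dimensions):
    """Overall quota = the tightest dimension.

    dimensions: list of (capacity, per_stream_cost) pairs, one per resource.
    """
    return min(stream_quota(c, cost) for c, cost in dimensions)
```

For a 100 Mbps card and 4 Mbps streams, `stream_quota(100, 4)` gives the 25 streams cited above; adding further dimensions can only lower the quota.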
The algorithm model generally involves detection, analysis, tracking, and so on, which affect the consumption of GPU, CPU, memory, and other resources. Taking traffic events as an example, different algorithm models consume different resources:
detecting a motor vehicle running a red light requires: traffic light classification + vehicle detection + attributes + tracking + license plate recognition;
detecting a motor vehicle driving in the wrong direction requires: vehicle detection + attributes + tracking + license plate recognition;
detecting traffic congestion requires: vehicle detection.
Resource consumption therefore ranks: running a red light > driving in the wrong direction > traffic congestion.
Stream pushing consumes GPU encoding, CPU, memory, and network bandwidth, so it is generally inadvisable to enable multiple push streams at the same time.
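Since an event type's cost is the sum of the models it uses, the ranking above falls out of a simple sum; the unit costs below are invented purely for illustration and do not come from the patent:

```python
# Hypothetical unit costs per algorithm model (not from the patent).
MODEL_COST = {"traffic_light_classification": 1, "vehicle_detection": 1,
              "attributes": 1, "tracking": 1, "license_plate": 1}

EVENT_MODELS = {
    "red_light": ["traffic_light_classification", "vehicle_detection",
                  "attributes", "tracking", "license_plate"],
    "wrong_way": ["vehicle_detection", "attributes", "tracking", "license_plate"],
    "congestion": ["vehicle_detection"],
}

def event_cost(event):
    """Total resource cost of an event type = sum over its component models."""
    return sum(MODEL_COST[m] for m in EVENT_MODELS[event])
```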
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented with a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices, and may be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A data processing method, comprising:
acquiring data processing task information;
performing task allocation processing on the data processing task information through a preset first allocation algorithm to obtain a first allocation result;
sending a data processing instruction to a target engine according to the first distribution result so as to instruct the target engine to execute data processing operation according to the data processing task information;
the method further comprises the following steps:
acquiring a stream-push switch instruction, wherein the stream-push switch instruction is used for instructing the target engine to perform a stream-push switch operation;
storing the stream-push switch state based on the stream-push switch instruction;
and, in a case that a stream-push update request is received, sending stream-push switch information to the target engine according to the stream-push switch state to instruct the target engine to perform the stream-push switch operation, wherein the stream-push update request is sent by the target engine in a case that stream-push change information is received.
2. The method of claim 1, wherein after said sending a data processing instruction to a target engine based on said first allocation result, said method further comprises:
determining a task processing state according to a received data processing result, wherein the data processing result is fed back after the target engine executes the data processing operation;
in a case that the task processing state is determined to be a first state, sending the first data processing instruction to a first engine to instruct the first engine to perform a data processing operation, wherein the first engine is an engine other than the target engine;
and under the condition that the task processing state is determined to be the second state, performing information updating processing on the data processing task information.
3. The method according to claim 1, wherein before the task allocation processing is performed on the data processing task information through a preset first allocation algorithm, the method further comprises:
acquiring registration information of the target engine;
sending a heartbeat monitoring instruction to the target engine based on the registration information to instruct the target engine to send heartbeat information, wherein the heartbeat information comprises a real-time working state of the target engine;
and receiving heartbeat information sent by the target engine.
4. The method according to claim 1, wherein the task allocation processing of the data processing task information through a preset first allocation algorithm to obtain a first allocation result comprises:
determining a factor weight in the first allocation algorithm based on the data processing task information;
and executing the task allocation processing based on the factor weight and the data processing task information to obtain the first allocation result.
5. A data processing apparatus, comprising:
the information acquisition module is used for acquiring data processing task information;
the task allocation module is used for performing task allocation processing on the data processing task information through a preset first allocation algorithm to obtain a first allocation result;
the instruction sending module is used for sending a data processing instruction to a target engine according to the first distribution result so as to instruct the target engine to execute data processing operation according to the data processing task information;
wherein the apparatus further comprises:
a stream-push instruction acquisition module, configured to acquire a stream-push switch instruction, wherein the stream-push switch instruction is used for instructing the target engine to perform a stream-push switch operation;
an instruction storage module, configured to store the stream-push switch state based on the stream-push switch instruction;
and an information sending module, configured to, in a case that a stream-push update request is received, send stream-push switch information to the target engine according to the stream-push switch state to instruct the target engine to perform the stream-push switch operation, wherein the stream-push update request is sent by the target engine in a case that stream-push change information is received.
6. The apparatus of claim 5, further comprising:
the task state determining module is used for determining a task processing state according to a received data processing result, wherein the data processing result is fed back after the target engine executes the data processing operation;
a first instruction sending module, configured to, after sending a data processing instruction to a target engine according to the first allocation result, send the first data processing instruction to a first engine to instruct the first engine to perform a data processing operation when it is determined that the task processing state is a first state, where the first engine is an engine other than the target engine;
and the information updating module is used for performing information updating processing on the data processing task information under the condition that the task processing state is determined to be the second state.
7. The apparatus of claim 5, further comprising:
the information registration module is used for acquiring the registration information of the target engine before the data processing task information is subjected to task allocation processing through a preset first allocation algorithm;
the monitoring instruction sending module is used for sending a heartbeat monitoring instruction to the target engine based on the registration information so as to instruct the target engine to send heartbeat information, wherein the heartbeat information comprises a real-time working state of the target engine;
and the heartbeat information receiving module is used for receiving the heartbeat information sent by the target engine.
8. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 4 when executed.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 4.
CN202210338063.2A 2022-04-01 2022-04-01 Data processing method and device, storage medium and electronic device Pending CN114896033A (en)

Publications (1)

Publication Number: CN114896033A; Publication Date: 2022-08-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination