CN112346845B - Method, device and equipment for scheduling coding tasks and storage medium - Google Patents

Method, device and equipment for scheduling coding tasks and storage medium

Info

Publication number
CN112346845B
Authority
CN
China
Prior art keywords
encoding
coding
channel
task
occupancy rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110024208.7A
Other languages
Chinese (zh)
Other versions
CN112346845A (en)
Inventor
赖文星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110024208.7A priority Critical patent/CN112346845B/en
Publication of CN112346845A publication Critical patent/CN112346845A/en
Application granted granted Critical
Publication of CN112346845B publication Critical patent/CN112346845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for scheduling an encoding task, belonging to the field of computing resource allocation. The method comprises the following steps: selecting n encoding devices in an idle state from an encoding device cluster, where the idle state represents that the encoding channel occupancy rate of a device is lower than an occupancy rate threshold, n ≥ 2, and n is an integer; determining, from the n encoding devices in the idle state and the first encoding device, the target encoding device with the minimum encoding channel occupancy rate; and scheduling the encoding task to the target encoding device for execution. Combined with big data technology, this reasonable resource scheduling keeps the distributed system load-balanced at all times and improves the efficiency of resource scheduling.

Description

Method, device and equipment for scheduling coding tasks and storage medium
Technical Field
The present application relates to the field of computing resource allocation, and in particular, to a method, an apparatus, a device, and a storage medium for scheduling an encoding task.
Background
An Intelligent Camera (AIC), referred to as an AI camera for short, is a camera having certain computing, encoding, and machine learning capabilities; it can be used to capture video, identify content in the video, encode the video, and upload the encoded video to a server.
In the related art, when an intelligent camera in a camera cluster generates a coding task, if a coding channel of the intelligent camera is in a busy state, the intelligent camera randomly selects an intelligent camera from the camera cluster as target coding equipment, and sends the coding task to the target coding equipment.
In this technical solution, an intelligent camera is chosen to process the encoding task by random selection. In some cases, encoding tasks may repeatedly be scheduled at random to the encoding task queues of the same intelligent camera or the same few intelligent cameras, so that the loads of the other intelligent cameras are small or even zero, which easily causes load imbalance among the cameras in the camera cluster.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a storage medium for scheduling an encoding task. By comparing the encoding channel occupancy rates of n + 1 encoding devices (n encoding devices in an idle state plus the first encoding device), the encoding task is scheduled to the encoding device with the minimum encoding channel occupancy rate. Encoding tasks are therefore always scheduled to devices with lower channel occupancy, and the encoding device cluster stays load-balanced. The technical solution is as follows.
According to an aspect of the present application, there is provided a scheduling method of an encoding task, the method being applied to a first encoding device in a cluster of encoding devices, the method comprising the following steps.
Selecting n encoding devices in an idle state from the encoding device cluster, wherein the idle state is used for representing that the occupancy rate of the encoding channel of the encoding device is lower than an occupancy rate threshold value, n is more than or equal to 2, and n is an integer;
determining the target coding device with the minimum coding channel occupancy rate from the n coding devices in the idle state and the first coding device;
and scheduling the coding task to the target coding device for execution.
According to another aspect of the present application, there is provided a scheduling apparatus of an encoding task, the apparatus including the following.
The selection module is used for selecting n encoding devices in an idle state from the encoding device cluster, wherein the idle state is used for representing that the occupancy rate of an encoding channel of the encoding device is lower than an occupancy rate threshold value, n is more than or equal to 2, and n is an integer;
the processing module is used for determining the target coding device with the minimum coding channel occupancy rate from the n coding devices in the idle state and the first coding device;
and the scheduling module is used for scheduling the coding task to the target coding equipment for execution.
According to another aspect of the present application, there is provided a computer device comprising: a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a method of scheduling of encoded tasks as described above.
According to another aspect of the present application, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements a method of scheduling an encoded task as described above.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and the processor executes the computer instructions to cause the computer device to perform the scheduling method of the encoding task as described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects.
By comparing the occupancy rates of the coding channels of n +1 coding devices (n coding devices in an idle state and the first coding device), the first coding device can schedule the generated coding tasks to the coding device with the minimum occupancy rate of the coding channel, so that the condition that most coding devices in a coding device cluster schedule the coding tasks to the same coding device due to real-time change of the occupancy rates of the coding channels is avoided; even if the generated coding tasks are all scheduled to n coding devices in an idle state, the first coding device can still add the coding tasks to the coding queue of the first coding device, so that the coding tasks are prevented from waiting for a long time, and the load of the coding device cluster is kept in a balanced state all the time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method for scheduling an encoding task according to an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method for scheduling an encoding task according to another exemplary embodiment of the present application;
FIG. 4 is a flowchart of a method for scheduling an encoding task according to another exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a decoupling process for video frames provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a method for scheduling an encoding task according to another exemplary embodiment of the present application;
FIG. 7 is a block diagram of a scheduling apparatus for encoding tasks according to an exemplary embodiment of the present application;
FIG. 8 is a block diagram of a server provided by an exemplary embodiment of the present application;
FIG. 9 is a block diagram of a computer device provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application will be described.
Traditional camera: cameras that only have the capability to acquire video frames or to acquire images.
Edge computing box: an edge computing device with certain computing, encoding, and machine learning capabilities. It can be connected to a traditional camera to obtain video frames (or images) and to perform content recognition and video frame processing (or image processing). An edge computing box connected to a traditional camera forms an encoding device.
AI camera: a camera with certain computing, encoding, and machine learning capabilities; it can acquire video frames (or images) and perform content recognition and video frame processing (or image processing). An AI camera can serve as an independent encoding device, and the embodiments of the present application take the AI camera as an example of the encoding device.
Hardware encoding: executing encoding tasks with dedicated hardware chips; it is fast, has low power consumption, and is little affected by the load of the Central Processing Unit (CPU).
Software encoding: executing encoding tasks with the CPU; it is slower, has higher power consumption, and is strongly affected by CPU load.
Encoding channel: a channel used by an encoding device to perform an encoding task; in the embodiments of the present application, encoding channels include hardware encoding channels and software encoding channels.
Hardware encoding channel: a channel through which the encoding device executes encoding tasks by hardware encoding. Hardware encoding may be implemented by a Graphics Processing Unit (GPU) or a Field Programmable Gate Array (FPGA) codec chip, and the number of hardware encoding channels is the number of encoding tasks the encoding device can perform simultaneously by hardware encoding.
Software encoding channel: a channel through which the encoding device executes encoding tasks by software encoding; the number of software encoding channels is the number of encoding tasks the device can perform simultaneously (generally, the number of CPU cores).
Encoding task: a task that an encoding device needs to execute, comprising the video frames (or images) to be encoded and the upload address of the video frames (or images); after receiving an encoding task, the encoding device encodes the video frames (or images) and uploads the encoded video frames (or images) to the specified address.
Cloud computing refers to a delivery and use mode of Internet Technology (IT) infrastructure, in which required resources are obtained over a network in an on-demand, easily scalable manner; cloud computing in the broad sense refers to a delivery and use mode of services, in which required services are obtained over a network in an on-demand, easily scalable manner. Such services may be IT and software, Internet related, or other services. Cloud computing is a product of the development and fusion of traditional computing and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization, and load balancing.
With the diversification of the Internet, real-time data streams, and connected devices, and the growing demands of search services, social networks, mobile commerce, open collaboration, and the like, cloud computing has developed rapidly. Unlike earlier parallel and distributed computing, the emergence of cloud computing is expected to drive revolutionary changes in the whole Internet model and in enterprise management models.
Big data refers to data sets that cannot be captured, managed, and processed by conventional software tools within a given time range; it is a massive, fast-growing, and diversified information asset that yields stronger decision-making, insight-discovery, and process-optimization capabilities only with new processing modes. With the advent of the cloud era, big data has attracted more and more attention, and it requires special techniques to process large volumes of data effectively within a tolerable elapsed time. Technologies suited to big data include massively parallel processing databases, data mining, distributed file systems, distributed databases, cloud computing platforms, the Internet, and scalable storage systems. A distributed system built on big data technology can provide convenient services to users by combining more available resources; in the embodiments of the present application, the main encoding device obtains the encoding channel occupancy rates of the other encoding devices in the encoding device cluster and determines the target encoding device to which an encoding task is scheduled, which improves the efficiency of resource scheduling.
The scheduling method for encoding tasks can be applied to computer devices with strong data processing capability. The method provided by the embodiments of the application can be applied to an encoding device cluster (such as a camera cluster comprising at least two intelligent cameras, or a distributed camera system), to a system formed by an ordinary camera and an edge computing device, or to a server connected to an ordinary camera. Schematically, the embodiments of the present application take the case where the scheduling method for encoding tasks is applied to a camera cluster as an illustration: a first camera in the camera cluster allocates a task to be processed to an intelligent camera with a smaller load, so as to achieve load balancing.
Fig. 1 shows a schematic structural diagram of a computer system provided in an exemplary embodiment of the present application, where the computer system 100 includes a first camera 111, a second camera 112, a third camera 113, and a server 120.
The first camera 111, the second camera 112 and the third camera 113 belong to intelligent cameras in the same camera cluster, and all the three are intelligent cameras with computing capability, encoding capability and machine learning capability. The three are located in the same network environment, and data communication is carried out among the three through a communication network. Illustratively, the communication network may be a wired network or a wireless network, and the communication network may be at least one of a local area network, a metropolitan area network, and a wide area network. The first camera 111, the second camera 112, and the third camera 113 are used to collect a video, identify video content, such as a face appearing in the video, and upload an identified result together with the video to the server 120.
The first camera 111, the second camera 112, and the third camera 113 perform data communication with the server 120 through a communication network, respectively. Illustratively, the communication network may be a wired network or a wireless network, and the communication network may be at least one of a local area network, a metropolitan area network, and a wide area network.
The server 120 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. In one possible implementation, the server 120 is a backend server for applications installed in the terminal.
The server determines the candidate scheduling list by acquiring the encoding channel occupancy rates of the intelligent cameras in the camera cluster.
Illustratively, the first camera 111, the second camera 112, and the third camera 113 each send an encoding channel occupancy rate to the server 120, where the encoding channel occupancy rate refers to the ratio of the pixel queue length Q_i to be encoded to the pixel queue length threshold L_i.
The pixel queue length to be encoded is the sum of the function values corresponding to the numbers of pixels to be encoded, where the functional relationship accounts for the time consumed when the encoding channel executes other processing tasks, such as cropping video frames, obtaining memory space, and refreshing the cache.
Illustratively, the first camera 111 of the encoding apparatus corresponds to a first encoding channel occupancy rate, and the first camera 111 obtains a second encoding channel occupancy rate of the second camera 112 and a third encoding channel occupancy rate of the third camera 113, and sends the second encoding channel occupancy rate, the third encoding channel occupancy rate, and the first encoding channel occupancy rate to the server 120.
The server 120 is configured to perform the following steps: step 11: acquiring the occupancy rates of the coding channels of the three; step 12: generating a candidate scheduling list according to the occupancy rate of the coding channel; step 13: and sending the candidate scheduling list to the first encoding device.
The candidate scheduling list includes the intelligent cameras whose encoding channel occupancy rates are lower than the occupancy rate threshold, that is, the intelligent cameras in the idle state. The server 120 sends the candidate scheduling list to a first encoding device (the first camera 111); the first camera 111 randomly selects n cameras (n ≥ 2, n an integer) from the candidate scheduling list and sends a query request to the n cameras, where the query request is used to acquire the encoding channel occupancy rates of the n cameras. Illustratively, the first camera 111 selects the second camera 112 and the third camera 113 from the candidate scheduling list. The first camera 111 acquires the second encoding channel occupancy rate and the third encoding channel occupancy rate, compares the first, second, and third encoding channel occupancy rates, and sends the encoding task to the encoding task queue of the intelligent camera with the minimum encoding channel occupancy rate. In some embodiments, if the first camera 111 has sent query requests to the second camera 112 and the third camera 113 k times without receiving a response, the first camera 111 adds the encoding task to its own encoding task queue.
In some embodiments, the candidate scheduling list is generated by the first camera 111, each camera in the camera cluster sends the encoding channel occupancy rate to the first camera 111 at intervals, and the first camera 111 summarizes the encoding channel occupancy rates of the cameras to obtain the candidate scheduling list.
It can be understood that, since the camera cluster belongs to the distributed computer system, any one camera in the camera cluster can be used as a master node for task scheduling.
It should be noted that, in the present embodiment, only the first camera 111, the second camera 112, and the third camera 113 are taken as examples, and the camera cluster further includes other smart cameras. For convenience of description, the following embodiments are described taking as an example that a scheduling method of an encoding task is applied to a first encoding apparatus (first camera).
The relevant parameters of the encoding channel of the encoding device are explained.
Illustratively, the encoding device is taken as an intelligent camera as an example.
Each intelligent camera has certain computing capability and therefore has software encoding channels, whose number is the number of CPU cores. In some embodiments, the intelligent camera further includes a GPU codec chip, in which case the camera has both software encoding channels and hardware encoding channels corresponding to the GPU codec chip; the number of encoding channels of the camera is then the sum of the number of CPU cores and the number of hardware encoding channels, where the number of hardware encoding channels is the number of encoding tasks the device can perform simultaneously by hardware encoding. In other embodiments, the intelligent camera further includes an FPGA codec chip, in which case the camera has software encoding channels and hardware encoding channels corresponding to the FPGA codec chip, and the number of encoding channels is again the sum of the number of CPU cores and the number of hardware encoding channels.
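For illustration only, the channel counting described above can be sketched as follows in Python; the EncodingDevice structure and its field names are assumptions introduced for this sketch and are not part of the application.

```python
from dataclasses import dataclass

@dataclass
class EncodingDevice:
    cpu_cores: int          # software encoding channels: one per CPU core
    gpu_channels: int = 0   # hardware channels provided by a GPU codec chip, if present
    fpga_channels: int = 0  # hardware channels provided by an FPGA codec chip, if present

    def total_encoding_channels(self) -> int:
        # total = software channels (CPU cores) + hardware channels (GPU/FPGA chips)
        return self.cpu_cores + self.gpu_channels + self.fpga_channels

# Example: a camera with a 4-core CPU and a GPU codec chip that encodes 2 streams at once.
camera = EncodingDevice(cpu_cores=4, gpu_channels=2)
assert camera.total_encoding_channels() == 6
```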
In order to prevent encoding tasks from affecting the normal work of the intelligent cameras, any intelligent camera in the camera cluster is denoted C_i, and a performance threshold P_cpu is set for the CPU of the intelligent camera: when the CPU occupancy rate continuously exceeds P_cpu for longer than a set duration, the corresponding encoding channel is regarded as unavailable for the following period. Illustratively, a performance threshold P_gpu is set for the GPU of the intelligent camera: when the GPU occupancy rate continuously exceeds P_gpu for longer than a set duration, the corresponding encoding channel is regarded as unavailable for the following period. For example, if the CPU usage rate exceeds 80% for 10 consecutive seconds, the encoding channel corresponding to the CPU is considered unavailable within the next 20 seconds. For intelligent cameras C_i and C_j, a request for the encoding channel occupancy rate is initiated to both at the same time, and each intelligent camera returns its encoding channel occupancy rate after receiving the request (denoted O_i and O_j respectively; the returned occupancy rates may not correspond to exactly the same instant, but within one round of queries they can roughly be regarded as referring to the same time). Of O_i and O_j, the intelligent camera with the higher value is said to be "busier".
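For illustration only, a minimal sketch of the availability rule described above is given below; the class and method names, and the default values (80% for 10 seconds, then a 20-second cooldown, taken from the example), are assumptions introduced for this sketch.

```python
import time

class ChannelAvailabilityTracker:
    """Mark a channel unavailable for `cooldown` seconds after its usage has
    exceeded `usage_threshold` continuously for `over_duration` seconds."""

    def __init__(self, usage_threshold=0.8, over_duration=10.0, cooldown=20.0):
        self.usage_threshold = usage_threshold
        self.over_duration = over_duration
        self.cooldown = cooldown
        self._over_since = None        # moment usage first exceeded the threshold
        self._unavailable_until = 0.0  # end of the current cooldown window

    def report_usage(self, usage: float, now: float = None) -> None:
        now = time.time() if now is None else now
        if usage > self.usage_threshold:
            if self._over_since is None:
                self._over_since = now
            elif now - self._over_since >= self.over_duration:
                # Sustained overload: channel is unavailable for the cooldown period.
                self._unavailable_until = now + self.cooldown
                self._over_since = None
        else:
            self._over_since = None

    def is_available(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        return now >= self._unavailable_until
```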
FIG. 2 shows a flowchart of a scheduling method for an encoding task provided by an exemplary embodiment of the present application. The method is applied to a first encoding device in an encoding device cluster; this embodiment is described taking the first camera 111 in the computer system 100 shown in FIG. 1 as an example. The method comprises the following steps.
Step 201, selecting n encoding devices in an idle state from the encoding device cluster, where the idle state is used to represent that the encoding channel occupancy rate of the encoding device is lower than an occupancy rate threshold, n is greater than or equal to 2, and n is an integer.
The encoding device cluster is a distributed computer system including a plurality of encoding devices, and each encoding device in the encoding device cluster can be used as a first encoding device. Illustratively, the first encoding device is fixed or changes in real time according to the network environment, or changes in real time according to the length of the encoding task queue. The first encoding device in the encoding device cluster is taken as the main node encoding device for explanation.
The idle state is measured by the encoding channel occupancy rate and the occupancy rate threshold; the encoding channel occupancy rate represents how heavily the encoding channels are used while the encoding device executes encoding tasks. For a given encoding device, as encoding tasks increase, more encoding channels are needed to perform them, so the encoding channel occupancy rate increases.
Illustratively, the first encoding device is an intelligent camera, and an occupancy rate threshold is set for the encoding channels of its CPU. Because the intelligent camera needs to execute tasks such as video acquisition, video encoding, and face recognition, when the encoding channel occupancy rate of the CPU does not exceed the occupancy rate threshold over a continuous period of time, a large number of encoding channels remain available for executing encoding tasks, and the encoding channels are in an idle state.
It can be seen that when the encoding channel occupancy is above the occupancy threshold, the encoding device is in a busy state.
In the coding device cluster, a part of coding devices may be in an idle state, another part of coding devices may be in a busy state, n coding devices are randomly selected from the coding devices in the idle state, n is greater than or equal to 2, and n is an integer.
Step 202, determining the target coding device with the minimum coding channel occupancy rate from the n coding devices in the idle state and the first coding device.
The first encoding device obtains the encoding channel occupancy rates of the n encoding devices in the idle state and compares them with its own encoding channel occupancy rate. The target encoding device with the minimum encoding channel occupancy rate is then selected from the n + 1 encoding devices.
And step 203, scheduling the coding task to a target coding device for execution.
The encoding task refers to a task that an encoding device needs to execute, comprising the video frames (or images) to be encoded and the upload address of the video frames (or images); after receiving an encoding task, the encoding device encodes the video frames (or images) and uploads the encoded video frames (or images) to the specified address. In some embodiments, scheduling the encoding task further includes scheduling meta-information of the image or video, the meta-information describing the structure, semantics, usage, and purpose of the information delivered by the encoding task.
Illustratively, the target encoding device is a second encoding device, and the first encoding device schedules the encoding task to the second encoding device for execution.
Illustratively, the target encoding device is a first encoding device, and the first encoding device adds the encoding task to its own encoding task queue, where the encoding task queue refers to a queue formed by the encoding tasks to be processed by the encoding device.
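For illustration only, a minimal sketch of steps 201 to 203 follows, assuming each encoding device exposes hypothetical get_occupancy() and enqueue() methods; the interface names are not part of the application.

```python
import random

def schedule_encoding_task(first_device, idle_devices, task, n=2):
    """Steps 201-203: pick n idle devices at random, then dispatch the task to
    whichever of the n + 1 devices (including the scheduling device itself)
    reports the lowest encoding channel occupancy rate."""
    candidates = random.sample(idle_devices, k=min(n, len(idle_devices)))
    candidates.append(first_device)
    target = min(candidates, key=lambda device: device.get_occupancy())
    target.enqueue(task)   # step 203: schedule the task to the target device
    return target
```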
In summary, in the method provided in this embodiment, by comparing the occupancy rates of the encoding channels of n +1 encoding devices (n encoding devices in an idle state and a first encoding device), the first encoding device may schedule the generated encoding task to the encoding device with the minimum occupancy rate of the encoding channel, so as to avoid a situation that most encoding devices in the encoding device cluster schedule the encoding task to the same encoding device due to real-time change of the occupancy rates of the encoding channels; even if the generated coding tasks are all scheduled to n coding devices in an idle state, the first coding device can still add the coding tasks to the coding queue of the first coding device, so that the coding tasks are prevented from waiting for a long time, and the load of the coding device cluster is kept in a balanced state all the time.
Fig. 3 shows a flowchart of a scheduling method for an encoding task, which is provided in an exemplary embodiment of the present application, and is applied to a first encoding device in an encoding device cluster, where this embodiment takes the first camera 111 applied in the computer system 100 shown in fig. 1 as an example for description. The method comprises the following steps.
Step 301, acquiring a use state of an encoding channel of a first encoding device.
The usage states of an encoding channel include an idle state and a busy state. In some embodiments the idle state is also called an available state or a rich state, and the busy state is also called an unavailable state or a non-rich state.
The first encoding device acquires the use state of its own encoding channel. In some embodiments, the usage status of the encoding channels is compared by relative encoding channel occupancy, such that if the encoding channel occupancy of encoding device a is higher than the encoding channel occupancy of encoding device b, the encoding channel occupancy of encoding device a is in a busy state compared to encoding device b.
Step 302: in response to the use state being a busy state, send a list acquisition request to the server, where the list acquisition request is used to obtain a scheduling list from the server, and the busy state represents that the encoding channel occupancy rate of the encoding device is higher than the occupancy rate threshold.
The first encoding device may generate one or more encoding tasks at any time, and it judges whether the use state of its encoding channel is busy according to the length of its queue of tasks to be encoded; step 302 can be replaced with the following steps 3021 to 3023.
Step 3021: obtain the pixel queue length of the encoding channel of the first encoding device, where the pixel queue length is obtained by summing the function values corresponding to the numbers of pixels in the encoding tasks.
The pixel queue is the queue formed by the pixels to be encoded in the videos or images of the tasks to be encoded.
Illustratively, consider intelligent camera C_i at a certain moment performing encoding by software encoding (encoding with the CPU). The intelligent camera defines a function f, and its pixel queue length Q_i is the sum of the function values f(x_j) over each task T_j to be encoded, where x_j is the number of pixels of task T_j, as expressed in formula one.
Formula one: Q_i = Σ_j f(x_j)
where Q_i denotes the pixel queue length of intelligent camera C_i, x_j denotes the number of pixels (in an image or video) of task T_j to be encoded, and f denotes the functional relationship between the number of pixels in a task to be encoded and the pixel queue length.
Illustratively, f(x) = x + c_0, where x is the number of pixels and c_0 is a factor that needs to be considered when executing the encoding task. For an encoding task T_j with x_j pixels to be encoded, the task increases the pixel queue length by x_j + c_0. The factor c_0 may be temporal or spatial: when each encoding task is processed, preprocessing operations such as cropping, feature extraction, noise reduction, and sharpness improvement need to be performed, and memory space needs to be requested or the cache of the encoding device refreshed, which consumes some time; during this process the encoding device continues to generate encoding tasks, so the pixel queue length increases.
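For illustration only, formula one with the illustrative choice f(x) = x + c_0 can be sketched as follows; the value chosen for c_0 is an arbitrary placeholder introduced for this sketch, not taken from the application.

```python
C0 = 5000  # assumed per-task overhead in "pixel equivalents" (cropping, memory, cache refresh)

def f(pixel_count: int, c0: int = C0) -> int:
    # Illustrative functional relationship: f(x) = x + c0
    return pixel_count + c0

def pixel_queue_length(tasks_pixel_counts) -> int:
    """Formula one: Q_i = sum of f(x_j) over all tasks T_j waiting to be encoded."""
    return sum(f(x) for x in tasks_pixel_counts)

# Example: three pending 1080p frames (1920 x 1080 pixels each).
print(pixel_queue_length([1920 * 1080] * 3))
```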
Step 3022: in response to the pixel queue length exceeding a length threshold, determine that the use state of the encoding channel of the first encoding device is a busy state. The length threshold corresponds to the available number of encoding channels of the encoding device, the available number being the number of encoding channels not currently processing encoding tasks.
Illustratively, intelligent camera C_i performs encoding by software encoding (encoding with the CPU), and a length threshold L_i is set for intelligent camera C_i. The length threshold L_i is obtained as follows.
1) Obtain a first available number of first-type encoding channels and a first encoding parameter corresponding to the first-type encoding channels, where the first available number represents the maximum number of encoding tasks the central processing unit can perform simultaneously, and the first encoding parameter represents a performance parameter corresponding to the first-type encoding channels.
2) Derive the length threshold from the first available number and the first encoding parameter.
The encoding device includes first-type encoding channels, which are the encoding channels corresponding to the CPU. The length threshold L_i can be expressed by the following formula two.
Formula two: L_i = α_cpu · N_cpu(t)
where L_i denotes the length threshold, α_cpu denotes the first encoding parameter, and N_cpu(t) denotes the available number of software encoding channels at time t (the first available number).
In some embodiments, when Q_i > L_i, the use state of the encoding channel of intelligent camera C_i is a busy state; in other embodiments, the use state of the encoding channel of intelligent camera C_i is determined to be busy only when Q_i > β · L_i, where β denotes a multiple. By introducing this multiple, the encoding channel of the encoding device is not judged to be in a busy state the moment the pixel queue length merely exceeds the length threshold.
In some embodiments, the encoding device further includes a GPU codec chip, and a length threshold L_i is set for intelligent camera C_i. The length threshold L_i is obtained as follows.
1) Obtain a first available number of first-type encoding channels and a first encoding parameter corresponding to the first-type encoding channels, where the first available number represents the maximum number of encoding tasks the central processing unit can perform simultaneously, and the first encoding parameter represents a performance parameter corresponding to the first-type encoding channels.
2) Obtain a second available number of hardware encoding channels and a second encoding parameter corresponding to the hardware encoding channels, where the second available number represents the maximum number of encoding tasks the chip can perform simultaneously, and the second encoding parameter represents a performance parameter corresponding to the second-type encoding channels.
3) Calculate the length threshold based on the first available number, the first encoding parameter, the second available number, and the second encoding parameter.
Calculate a first product of the first available number and the first encoding parameter; calculate a second product of the second available number and the second encoding parameter; the sum of the first product and the second product is determined as the length threshold.
The encoding device includes first-type encoding channels and second-type encoding channels; the first-type encoding channels are the encoding channels corresponding to the CPU, and the second-type encoding channels are the encoding channels corresponding to the GPU. The length threshold L_i can therefore be expressed by the following formula three.
Formula three: L_i = α_cpu · N_cpu(t) + α_gpu · N_gpu(t)
where L_i denotes the length threshold, α_cpu denotes the first encoding parameter, N_cpu(t) denotes the available number of CPU encoding channels at time t (the first available number), α_gpu denotes the second encoding parameter (the performance parameter of the encoding channels corresponding to the GPU chip), and N_gpu(t) denotes the available number of encoding channels corresponding to the GPU chip at time t (the second available number).
In other embodiments, the encoding device has an FPGA codec chip instead, and the second-type encoding channels are the encoding channels corresponding to the FPGA. The length threshold L_i can be expressed by the following formula four.
Formula four: L_i = α_cpu · N_cpu(t) + α_fpga · N_fpga(t)
where L_i denotes the length threshold, α_cpu denotes the first encoding parameter, N_cpu(t) denotes the available number of CPU encoding channels at time t, α_fpga denotes the second encoding parameter (the performance parameter of the encoding channels corresponding to the FPGA chip), and N_fpga(t) denotes the available number of encoding channels corresponding to the FPGA chip at time t (the second available number).
In other embodiments, the encoding device includes both a GPU codec chip and an FPGA codec chip, and the second-type encoding channels include the encoding channels corresponding to the GPU and the encoding channels corresponding to the FPGA. The length threshold L_i can be expressed by the following formula five.
Formula five: L_i = α_cpu · N_cpu(t) + α_gpu · N_gpu(t) + α_fpga · N_fpga(t)
where L_i denotes the length threshold, α_cpu denotes the first encoding parameter, N_cpu(t) denotes the available number of encoding channels corresponding to the CPU at time t (the first available number), α_gpu denotes the performance parameter of the encoding channels corresponding to the GPU chip, N_gpu(t) denotes the available number of encoding channels corresponding to the GPU chip at time t, α_fpga denotes the performance parameter of the encoding channels corresponding to the FPGA chip, and N_fpga(t) denotes the available number of encoding channels corresponding to the FPGA chip at time t.
In other embodiments, the length threshold L_i further includes a custom constant c_i, and the length threshold L_i is represented by the following formula six.
Formula six: L_i = α_cpu · N_cpu(t) + α_gpu · N_gpu(t) + α_fpga · N_fpga(t) + c_i
where L_i denotes the length threshold, α_cpu denotes the first encoding parameter, N_cpu(t) denotes the available number of encoding channels corresponding to the CPU at time t (the first available number), α_gpu denotes the performance parameter of the encoding channels corresponding to the GPU chip, N_gpu(t) denotes the available number of encoding channels corresponding to the GPU chip at time t, α_fpga denotes the performance parameter of the encoding channels corresponding to the FPGA chip, N_fpga(t) denotes the available number of encoding channels corresponding to the FPGA chip at time t, and c_i denotes a constant defined according to the actual application. c_i can be set to a large positive number so that the length threshold L_i is large and the encoding channel of the encoding device is always regarded as idle; or c_i can be set to a negative number with a large absolute value so that the length threshold L_i is small and the encoding channel of the encoding device is always regarded as busy.
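For illustration only, formulas two to six can be collapsed into a single sketch, since each is the sum of (encoding parameter × available channel count at time t) over the channel types present on the device, plus the optional constant c_i; the parameter names below, and the β multiple from the busy-state test, are notation introduced for this sketch.

```python
def length_threshold(alpha_cpu, n_cpu, alpha_gpu=0.0, n_gpu=0,
                     alpha_fpga=0.0, n_fpga=0, c_i=0.0):
    """Formulas two to six: sum of (encoding parameter x available channel count)
    per channel type, plus an optional application-defined constant c_i.
    Devices without a GPU/FPGA codec chip simply leave those terms at zero."""
    return alpha_cpu * n_cpu + alpha_gpu * n_gpu + alpha_fpga * n_fpga + c_i

def is_busy(pixel_queue_len, threshold, beta=1.0):
    """Busy-state test: the channel is busy once Q_i exceeds beta * L_i, where
    beta >= 1 keeps the device from being flagged busy the instant the basic
    threshold is crossed."""
    return pixel_queue_len > beta * threshold
```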
And step 3023, sending a list acquisition request to the server according to the busy state.
When the coding channel of the first coding device is in a busy state, the first coding device sends a list acquisition request to the server, wherein the list acquisition request is used for acquiring a candidate scheduling list. And the use states of the coding channels of the coding devices in the candidate scheduling list are all idle states.
In some embodiments, the candidate dispatch list is generated by a first encoding device in the encoding device cluster, that is, each encoding device in the encoding device cluster sends the use state of its encoding channel to the first encoding device at intervals, and the first encoding device determines the use state of the encoding channel of each encoding device according to the above formula to obtain the candidate dispatch list. Illustratively, the election mode of the first encoding device may be implemented by referring to an election master node (leader) in the Kafka message system, which is a distributed message publish-subscribe system. The embodiment of the present application does not limit the manner of electing the first encoding device.
In other embodiments, the candidate scheduling list is generated by the server, that is, each encoding device in the encoding device cluster sends the use state of its encoding channel to the server at intervals, and the server determines the use state of the encoding channel of each encoding device according to the above formula to obtain the candidate scheduling list. When a certain coding device generates a coding task and cannot process the coding task by the coding device, a list acquisition request is sent to a server to acquire a candidate scheduling list.
In other embodiments, each encoding device in the encoding device cluster may actively acquire the candidate dispatch list from the first encoding device at regular time (at intervals), or each encoding device in the encoding device cluster may actively acquire the candidate dispatch list from the server at regular time.
In other embodiments, the first encoding device sends the candidate scheduling list to the encoding devices in the encoding device cluster at intervals, or the server sends the candidate scheduling list to the encoding devices in the encoding device cluster at intervals.
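For illustration only, candidate scheduling list generation on the server (or on the elected first encoding device) can be sketched as follows; the occupancy threshold value and the reporting interface are assumptions introduced for this sketch.

```python
OCCUPANCY_THRESHOLD = 0.8  # illustrative value, not specified by the application

reported_occupancy = {}    # device_id -> latest reported encoding channel occupancy rate

def report(device_id: str, occupancy: float) -> None:
    """Called periodically by each encoding device in the cluster."""
    reported_occupancy[device_id] = occupancy

def candidate_schedule_list() -> list:
    """Return the devices whose encoding channels are in the idle state,
    i.e. whose reported occupancy is below the occupancy rate threshold."""
    return [device_id for device_id, occ in reported_occupancy.items()
            if occ < OCCUPANCY_THRESHOLD]
```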
Step 303, receiving a candidate scheduling list sent by the server, where the candidate scheduling list includes m encoding devices in an idle state in the encoding device cluster, m is greater than or equal to n, and m is a positive integer.
Illustratively, the first encoding device acquires a candidate scheduling list from the server, and the server sends the candidate scheduling list to the first encoding device according to the received list acquisition request.
Illustratively, the encoding device cluster includes N encoding devices, some of which are in a busy state and some in an idle state; m encoding devices are in the idle state, and all m of them are on the candidate scheduling list, where n ≤ m ≤ N, n ≥ 2, and n, m, and N are integers.
At step 304, n encoding devices in an idle state are randomly selected from the m encoding devices.
And 305, randomly sending query requests to the n encoding devices in the idle state, wherein the query requests are used for requesting the occupancy rates of the encoding channels from the n encoding devices.
The first encoding device randomly selects n encoding devices from the m encoding devices and sends query requests to the n encoding devices to obtain the encoding channel occupancy rates of the n encoding devices.
And step 306, repeatedly executing the step of randomly sending the query request to the n encoding devices at most k times.
Illustratively, a preset number of times k is set for the first encoding device to randomly select n encoding devices and send query requests.
Illustratively, k is 3, n is 3, the first encoding device is an intelligent camera C1, the intelligent camera C1 randomly selects the intelligent camera C2, the intelligent camera C3 and the intelligent camera C4 for the first time, and sends query requests to the three intelligent cameras; the intelligent camera C1 randomly selects the intelligent camera C4, the intelligent camera C6 and the intelligent camera C7 for the second time, and sends query requests to the three intelligent cameras; the intelligent camera C1 randomly selects the intelligent camera C2, the intelligent camera C3 and the intelligent camera C5 for the third time, and sends query requests to the three intelligent cameras.
Step 307: in response to receiving the encoding channel occupancy rate sent by at least one encoding device during the at most k query rounds, determine the target encoding device with the minimum encoding channel occupancy rate from the at least one encoding device and the first encoding device, where k ≥ 1 and k is an integer.
In the process of sending the query request for three times, when the first encoding device receives the encoding channel occupancy rate sent by one encoding device, the received encoding channel occupancy rate is compared with the encoding channel occupancy rate of the first encoding device, so that the target device with the minimum encoding channel occupancy rate is determined.
And if the first coding device receives the coding channel occupancy rates sent by more than two coding devices, comparing the received multiple coding channel occupancy rates with the coding channel occupancy rate of the first coding device, so as to determine the target device with the minimum coding channel occupancy rate.
And step 308, scheduling the coding task to a target coding device for execution.
Illustratively, in the implementation process of step 306, when the smart camera C1 sends an inquiry request to the smart camera C2, the smart camera C3, and the smart camera C4 for the first time, the encoding channel occupancy rates of the smart camera C2 and the smart camera C3 are received, and after the encoding channel occupancy rates are compared, the encoding channel occupancy rate of the smart camera C2 is the minimum, and the smart camera C1 schedules the target encoding task to the smart camera C2 for execution.
Step 309: in response to not receiving an encoding channel occupancy rate from any encoding device during the k query rounds, add the encoding task to the encoding task queue of the first encoding device.
In the process implemented in step 306, if the first encoding device does not receive the encoding channel occupancy rate sent by any encoding device, the first encoding device determines that the first encoding device is the target encoding device, and adds the encoding task to the encoding task queue of the first encoding device.
It should be noted that there are also two cases where an encoding task is to be added to the encoding task queue of the first encoding device.
And in response to the fact that no coding device in an idle state exists in the candidate scheduling list received by the first coding device, adding the coding task to a coding task queue of the first coding device.
And in response to the encoding channel of the first encoding device being in an idle state, adding the encoding task to the encoding task queue of the first encoding device.
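For illustration only, steps 304 to 309 together with the two fallback cases above can be sketched as follows; the device methods query_occupancy(), get_occupancy(), is_idle(), and enqueue() are hypothetical interfaces introduced for this sketch.

```python
import random

def schedule_with_retries(first_device, candidate_list, task, n=3, k=3):
    """Steps 304-309: up to k rounds of querying n randomly chosen idle devices.
    If any occupancy responses arrive, dispatch to the device with the lowest
    occupancy among the responders and the first device itself; otherwise fall
    back to the first device's own encoding task queue. query_occupancy() is
    assumed to return None when a device does not respond in time."""
    if not candidate_list or first_device.is_idle():
        first_device.enqueue(task)            # the two fallback cases noted above
        return first_device

    for _ in range(k):
        chosen = random.sample(candidate_list, k=min(n, len(candidate_list)))
        responses = {dev: dev.query_occupancy() for dev in chosen}
        responders = {dev: occ for dev, occ in responses.items() if occ is not None}
        if responders:
            responders[first_device] = first_device.get_occupancy()
            target = min(responders, key=responders.get)
            target.enqueue(task)
            return target

    first_device.enqueue(task)                # no responses after k rounds: keep the task locally
    return first_device
```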
In summary, in the method of this embodiment, by comparing the occupancy rates of the encoding channels of n +1 encoding devices (n encoding devices in an idle state and a first encoding device), the first encoding device may schedule the generated encoding task to the encoding device with the minimum occupancy rate of the encoding channel, so as to avoid a situation that most encoding devices in the encoding device cluster schedule the encoding task to the same encoding device due to real-time change of the occupancy rates of the encoding channels; even if the generated coding tasks are all scheduled to n coding devices in an idle state, the first coding device can still add the coding tasks to the coding queue of the first coding device, so that the coding tasks are prevented from waiting for a long time, and the load of the coding device cluster is kept in a balanced state all the time.
The method of the embodiment also obtains the occupancy rates of the coding channels of the n coding devices by randomly sending query requests to the n coding devices in the idle state, and repeats the process of sending the query requests for the preset times at most, if the occupancy rates of the coding channels are received within the preset times, the received occupancy rates of the coding channels are compared with the occupancy rate of the coding channel of the first coding device, so that the generated coding tasks are scheduled to the coding device with the minimum occupancy rate of the coding channel, the generated coding tasks are distributed and scheduled in time, and the backlog of the coding tasks caused by overlong time for the first coding device to wait for the responses of other coding devices is avoided.
According to this method, if no encoding channel occupancy rate is received within the preset number of attempts, the generated encoding task is dispatched to the encoding task queue of the first encoding device. Even if, because the encoding channel occupancy rates change in real time, the n encoding devices randomly selected by the first encoding device are all in a busy state, the first encoding device can still determine itself as the target encoding device with the minimum encoding channel occupancy rate, ensuring that encoding tasks are always scheduled and distributed in time and avoiding a backlog of encoding tasks caused by the first encoding device waiting too long for responses from other encoding devices.
In the method of this embodiment, n encoding devices are randomly selected by obtaining the candidate scheduling list from the server, and the encoding device in the idle state is determined by using the candidate scheduling list generated by the server, so that the first encoding device can accurately determine the encoding device in the idle state.
The method of this embodiment further determines whether the use state of the coding channel of the first coding device is a busy state by determining whether the length of the pixel queue exceeds a length threshold, so that the first coding device can determine when to send a list acquisition request to the server to obtain a candidate scheduling list.
In the method of this embodiment, the length threshold is further calculated according to the available number and the encoding parameters of the different types of encoding channels corresponding to the first encoding device, so that the calculated length threshold is accurate, and it is more accurate to subsequently determine whether the encoding channel of the first encoding device is in a busy state according to the length threshold.
Based on the optional embodiment of FIG. 3, when the encoding device processes an encoding task, the encoding process and the recognition process may be decoupled in order to reduce the waiting time of tasks to be encoded and accelerate the processing of encoding tasks. As shown in FIG. 4, the method includes the following steps.
Step 311, acquiring a video frame sequence in the acquired video, where the video frames in the video frame sequence correspond to frame identifiers.
Take the example that the encoding task is to identify the video containing the pedestrian movement track. The sequence of video frames in the video records the moving track of a pedestrian, and generally includes information of the position, characteristics, attributes and the like of a certain pedestrian at a plurality of time points.
The positions of a pedestrian at multiple time points are generally obtained indirectly from the geographic position of the pedestrian, the position of the camera, the Media Access Control (MAC) address of the camera, and the relative positions between consecutive video frames (or images) captured by the camera. For example, if the pedestrian moves within a shopping mall, the cameras are the cameras in the mall, each with a unique identifier; if the pedestrian moves from the southeast corner of the second floor to the northwest corner, the cameras located at the southeast and northwest corners of the mall can both capture the pedestrian, and the position coordinates of the pedestrian during the movement can be determined from the position coordinates of those two cameras.
The features of the pedestrian include facial features. Illustratively, the face feature vectors are determined in the captured video by identifying face feature points, including face five sense organs, such as eyes, nose, left and right mouth corners, and the like.
The attribute of the pedestrian includes a face attribute. Illustratively, the attributes of the face include, but are not limited to, whether the pedestrian wears sunglasses, whether the pedestrian wears a mask, the pedestrian's hairstyle, the pedestrian's estimated age, and the pedestrian's estimated gender.
The acquired video also comprises other information, such as a snapshot timestamp, a face attribute confidence, the quality of the snapshot image and the like.
The frame identifier is used for uniquely identifying a video frame in the video frame sequence, and the frame identifier may be a character string of a literal type, an alphabetical type, or a numeric type.
Step 312, identify the video content contained in the video frames in the video frame sequence to obtain a content identification result.
The video content identification means identifying video content contained in a video through an encoding device, for example, the video contains pedestrians, and the encoding device identifies information such as face features, face attributes and moving tracks of the pedestrians. As shown in the right diagram of fig. 5, the second sequence of video frames 52 comprises a content recognition result, which is a framed pedestrian.
The encoding apparatus is a computing apparatus having a certain machine learning capability, and illustratively, the encoding apparatus includes a trained machine learning model, and the trained machine learning model is obtained by training the machine learning model through a sample video containing a pedestrian. And identifying the video content through the trained machine learning model to obtain a content identification result.
Step 313, sending the content recognition result and the frame identifier corresponding to the video frame to the server.
The encoding device can be any device in an encoding device cluster, is connected with a server, sends the identified content identification result to the server, and simultaneously sends the frame identification corresponding to the acquired video frame to the server.
Step 314, encoding the video frames in the video frame sequence to obtain encoded video frames.
Illustratively, the encoding device copies the collected video into two copies: one copy is used for identifying the video content, and the other is used for encoding. The two copies are processed in parallel, so the identification process and the encoding process are decoupled. As shown in the left diagram of fig. 5, the first video frame sequence 51 is a partial video frame sequence copied from the original video, and includes timestamp information, pedestrian feature information, attribute information of the pedestrian, and frame identifiers.
And 315, sending the encoded video frame and the frame identifier corresponding to the video frame to a server, wherein the server is used for storing the video frame corresponding to the content identification result and the encoded video frame in an associated manner according to the frame identifier.
The encoding device can be any device in an encoding device cluster, is connected with a server, sends encoded video frames to the server, and simultaneously sends frame identifiers corresponding to the acquired video frames to the server.
After receiving the video frames respectively sent in the identification process and the coding process, the server associates the video frames in the two processes according to the unique frame identifier corresponding to each video frame, so that the content identification result of the complete video is obtained. The server archives the objects identified in the video and stores the video.
It is understood that steps 314 and 315 described above may be performed prior to steps 312 and 313, or simultaneously with steps 312 and 313.
Because delays tend to occur when video frames undergo operations such as extraction, cropping, and encoding, information corresponding to the video content can be uploaded to the server only after the video frames have been processed and encoded. Coupling the recognition process with the processing process therefore delays subsequent operations (for example, comparing facial features against features in a database of fugitive suspects) while the video frame pictures are processed or the video frames are encoded; this is especially obvious when the encoding channel of the encoding apparatus is busy or when the network environment is poor.
In summary, in the method of this embodiment, by decoupling the identification process and the encoding process, the server performs associated storage on the encoded video frame and the video frame containing the content identification result through the frame identifier, so as to obtain a complete video, so that the content identification result and the original video can be processed and uploaded respectively, thereby avoiding delayed uploading of the content identification result due to time consumption of video processing and encoding processing, and improving the real-time performance of the identification information.
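Illustratively, the decoupling of steps 311 to 315 can be sketched as below. The recognize() and encode() functions and the in-memory server store are hypothetical stand-ins, used only to show how the two paths run in parallel and are later associated by frame identifier; they are not interfaces defined by this embodiment.

```python
# Sketch: one thread runs content recognition, another runs encoding, and both
# report results keyed by the frame identifier so the server side can associate
# them (steps 312-315).

import threading
from collections import defaultdict

server_store = defaultdict(dict)   # frame_id -> {"recognition": ..., "encoded": ...}

def recognize(frame):              # placeholder content recognition (step 312)
    return {"pedestrian_box": (10, 20, 50, 120)}

def encode(frame):                 # placeholder video-frame encoding (step 314)
    return b"encoded-bytes"

def recognition_path(frames):
    for frame_id, frame in frames:
        server_store[frame_id]["recognition"] = recognize(frame)   # steps 312-313

def encoding_path(frames):
    for frame_id, frame in frames:
        server_store[frame_id]["encoded"] = encode(frame)          # steps 314-315

frames = [(i, f"frame-{i}") for i in range(5)]
t1 = threading.Thread(target=recognition_path, args=(frames,))
t2 = threading.Thread(target=encoding_path, args=(frames,))
t1.start(); t2.start(); t1.join(); t2.join()

# Server-side association of step 315: both halves of each frame are stored
# together under the same frame identifier.
print(server_store[0])
```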
It is understood that image coding is similar to video coding, and this embodiment takes the video coding process only as an example; the identified track information may contain images, videos, and other binary information, and a distributed image coding task may contain only image frames, or may additionally contain information such as the meta information and upload address of the images.
In one example, a first encoding device randomly selects two encoding devices in an idle state and initiates a query request to the two encoding devices, as shown in fig. 6, and the method includes the following steps.
Step 601, obtaining a video frame to be encoded.
The first encoding apparatus is exemplified by the first camera 111 shown in fig. 1. The first camera 111 starts a video capture operation, and the captured video includes a video frame sequence including a plurality of video frames, where the video frames in the video are to-be-encoded video frames.
Step 602, checking the pixel queue length and each channel availability of the encoding channels of the first encoding device.
The first camera 111 acquires the pixel queue length of its own encoding channel and acquires the use state of each type of its encoding channels. Illustratively, the first camera 111 includes a CPU and a GPU codec chip, and the first camera 111 determines whether its encoding channel is available, that is, whether it is in a busy state, according to the sum of the pixel queue length of the encoding channel corresponding to the CPU and the pixel queue length of the encoding channel corresponding to the GPU.
Step 603, whether the encoding channel is busy.
The first camera 111 determines whether its own encoding channel is in a busy state through step 3021 in the above embodiment, which is not described herein again. If the encoding channel is busy, go to step 604; if the own encoding channel is not busy, go to step 608. The busy state of the coding channel is obtained through the length of the pixel queue, and when the length of the pixel queue exceeds a length threshold value, the coding channel is in the busy state. The pixel queue length refers to the sum of the function values of the number of pixels for each task to be encoded.
Step 604, two devices are randomly selected from the list of devices with abundant coding resources.
The first camera 111 randomly selects two intelligent cameras from a candidate scheduling list, where the candidate scheduling list includes m intelligent cameras with available encoding channels, where m is greater than or equal to 2, and m is an integer. Illustratively, the first camera 111 randomly selects two smart cameras from the m smart cameras, namely a second camera 112 and a third camera 113.
Step 605, inquiring the occupancy rate of the coding channels of the two devices.
The first camera 111 sends query requests to the second camera 112 and the third camera 113 respectively, and the query requests are used for requesting the code channel occupancy rates of the two intelligent cameras. The encoding channel occupancy is a ratio between the length of the pixel queue and the length threshold, and is schematically defined as the following formula seven.
Formula seven:

O_i(t) = Q_i / Q_th

where O_i(t) represents the coding channel occupancy of the i-th coding device at time t, Q_i represents the pixel queue length of that device, and Q_th represents the length threshold.
The pixel queue length is related to the sum of a function of the pixel count of each encoding task. The length threshold is related to the available number of coding channels, which characterizes the maximum number of coding channels in the coding device not currently occupied by coding tasks, and to the coding parameters, which characterize the performance of the coding channels.
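Illustratively, formula seven can be sketched as below. The per-task pixel function f is an assumption, since the embodiment only states that the pixel queue length is a sum of a function of each pending task's pixel count.

```python
# Sketch of formula seven: the coding channel occupancy of a device is its
# pixel queue length divided by its length threshold.

def pixel_queue_length(pending_task_pixels, f=lambda pixels: pixels / 1_000_000):
    """Sum of f(pixel count) over every task waiting in the coding channel."""
    return sum(f(p) for p in pending_task_pixels)

def channel_occupancy(queue_length: float, length_threshold: float) -> float:
    """O_i(t) = Q_i / Q_th."""
    return queue_length / length_threshold

q = pixel_queue_length([1920 * 1080, 1280 * 720, 3840 * 2160])
print(channel_occupancy(q, length_threshold=80.0))
```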
Step 606, whether a response is received within a specified time.
Illustratively, the first camera 111 randomly selects two cameras at most k times, and sends a query request to the randomly selected two cameras, where the query request is used for querying the occupancy rate of the encoding channel. Whether the first camera 111 can acquire the encoding channel occupancy rate is judged by whether the first camera 111 can receive a response within a prescribed time.
If the first camera 111 receives a response of at least one intelligent camera of the second camera 112 and the third camera 113 within a specified time, entering step 607; if the first camera 111 does not receive any smart camera response within the specified time, step 608 is entered.
And step 607, comparing the response result with the occupancy rate of the coding channel of the first coding device, and sending the coding task to the coding device with the minimum occupancy rate of the coding channel.
And the first camera compares the received code channel occupancy rate with the code channel occupancy rate of the first camera, and sends the code task to the coding equipment with the lowest code channel occupancy rate. That is, the first camera 111 compares the second encoding channel occupancy rate (the channel occupancy rate of the second camera 112), the third encoding channel occupancy rate (the channel occupancy rate of the third camera 113) and the encoding channel occupancy rate of itself, and determines the target encoding device having the minimum encoding occupancy rate.
Step 608, sending the encoding task to the encoding task queue of the first encoding device.
If the encoding channel of the first camera 111 is not in a busy state, adding the encoding task into the encoding task queue of the first camera; or, if the first camera 111 does not receive any response result of the intelligent camera within a specified time, adding the encoding task to its own encoding task queue; or, if there is no encoding device in an idle state in the candidate scheduling list received by the first camera 111, adding the encoding task to its own encoding task queue.
And step 609, ending.
By the method of the embodiment, two coding devices are randomly selected, and the coding device with the minimum coding channel occupancy rate is determined, so that the coding task can be scheduled to the coding device with the minimum coding channel occupancy rate, the situation that most coding devices in a coding device cluster schedule the coding task to the same coding device at random is avoided, and the coding device cluster always keeps load balance.
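Illustratively, the decision flow of steps 601 to 609 can be sketched as below (a "power of two choices" style selection). The Device class, the query transport, the timeout handling, and the occupancy values are assumptions; only the control flow mirrors the steps above.

```python
# Sketch: if the local channel is busy, pick two candidates at random from the
# candidate scheduling list, query their occupancy (None models "no reply in
# time"), and enqueue the task on whichever of the three devices is least busy.

import random

class Device:
    def __init__(self, name, occupancy):
        self.name, self._occupancy, self.queue = name, occupancy, []
    def occupancy(self): return self._occupancy
    def is_busy(self): return self._occupancy > 1.0
    def enqueue(self, task): self.queue.append(task)

def schedule(task, self_device, candidates, query_occupancy, max_attempts=3):
    if not self_device.is_busy():                          # steps 602-603
        self_device.enqueue(task); return self_device      # step 608
    for _ in range(max_attempts):                          # at most k query rounds
        picks = random.sample(candidates, 2)               # step 604
        replies = {d: query_occupancy(d) for d in picks}   # step 605
        replies = {d: o for d, o in replies.items() if o is not None}
        if replies:                                        # step 606 -> 607
            replies[self_device] = self_device.occupancy()
            target = min(replies, key=replies.get)
            target.enqueue(task); return target
    self_device.enqueue(task); return self_device          # no reply -> step 608

cluster = [Device(f"cam-{i}", random.uniform(0.1, 0.9)) for i in range(10)]
first = Device("cam-first", occupancy=1.4)                 # busy local channel
print(schedule("task-1", first, cluster, lambda d: d.occupancy()).name)
```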
The following describes effects produced by the scheduling method for encoding tasks provided in the embodiment of the present application.
Illustratively, taking intelligent cameras as the coding devices, the load of a camera cluster consisting of 100 intelligent cameras is simulated when coding tasks are scheduled. The processing capacity of each intelligent camera is set to be the same; that is, the total time per second that the intelligent camera cluster spends executing coding tasks is T_total, so the time per second that a single intelligent camera spends executing coding tasks is T_c = T_total / 100.

To simulate intelligent cameras with different degrees of busyness, among the 100 intelligent cameras, 10 cameras spend 2 T_c per second executing coding tasks, 20 cameras spend 1.5 T_c per second, 40 cameras spend an average of 0.975 T_c per second, 20 cameras spend an average of 0.5 T_c per second, and 10 cameras spend an average of 0.1 T_c per second.

Each coding device generates a coding task every 20 milliseconds (ms), and the time a coding device takes to process a coding task obeys a normal distribution with σ = 0.4. The candidate scheduling list of devices with abundant coding resources is updated every 10 seconds. For a cluster average load 0 ≤ L < 1 (i.e., T_total = 100 × 1000 × L milliseconds), one hour of coding-task generation and processing is simulated, and the maximum waiting time after a coding task is generated is calculated at average loads of 0.95 and 0.975 for the task non-reallocation method, the conventional random algorithm, and the dual random scheduling algorithm, respectively, as shown in Table 1.
Table 1: longest waiting time after a coding task is generated, at cluster average loads of 0.95 and 0.975, for the task non-reallocation method, the conventional random algorithm, and the dual random scheduling algorithm (the table itself is provided as an image in the original document).
Assuming the maximum waiting time allowed for a coding task is 2 seconds (a coding task is discarded once this waiting time is exceeded), the task discard rates are as shown in Table 2 below.
Table 2: coding-task discard rate at cluster average loads of 0.95 and 0.975 for the same three scheduling strategies (the table itself is provided as an image in the original document).
The cluster average load L represents the average fraction of time each coding device spends executing coding tasks, and can be used as an index of the total amount of coding work in the cluster. For example, when L = 0.95, T_total = 100 × 1000 × 0.95 = 95000 milliseconds (ms) = 95 seconds (s); that is, the 100 intelligent cameras together spend 95 seconds of every second executing coding tasks, or 0.95 seconds per camera on average.
The task non-reallocation method means that when a coding device generates a coding task, the device that generated the task executes it itself, i.e., "self-generation, self-consumption". The conventional random algorithm means that when a coding device generates a coding task, it randomly selects one coding device from the coding device cluster and sends the coding task to the selected device. The dual random scheduling algorithm means that when a coding device generates a coding task, it randomly selects two coding devices from the candidate scheduling list, compares the coding channel occupancy rates of those two devices with its own, and sends the coding task to the coding device with the minimum coding channel occupancy rate.
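Illustratively, the three strategies compared in Table 1 and Table 2 can be sketched as below; the device names and occupancy values are simulated placeholders rather than the parameters actually used in the tables.

```python
# Sketch: each function returns the device that receives a newly generated
# coding task under one of the three strategies.

import random

def no_reallocation(origin, cluster, occupancy):
    return origin                                   # "self-generation, self-consumption"

def conventional_random(origin, cluster, occupancy):
    return random.choice(cluster)                   # any device, uniformly at random

def dual_random(origin, cluster, occupancy):
    a, b = random.sample(cluster, 2)                # two random candidates
    # keep whichever of the three channels has the lowest occupancy
    return min((origin, a, b), key=lambda d: occupancy[d])

cluster = [f"cam-{i}" for i in range(100)]
occupancy = {d: random.uniform(0.0, 2.0) for d in cluster}   # simulated channel load
print(dual_random("cam-0", cluster, occupancy))
```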
As can be seen from Table 1, with the dual random scheduling algorithm provided in the embodiment of the present application, under cluster average loads of 95% and 97.5%, the longest waiting time after a coding task is generated is shorter than when the task non-reallocation method or the conventional random algorithm is used.
As can be seen from Table 2, with the dual random scheduling algorithm provided in the embodiment of the present application, under cluster average loads of 95% and 97.5%, the coding-task discard rate is lower than when the task non-reallocation method or the conventional random algorithm is used.
The method for scheduling an encoding task provided in the embodiment of the present application may be applied to an encoding device cluster in which the multiple encoding devices belong to the same blockchain, each encoding device being a node in the blockchain; after processing an image (or a video), the encoding device can store the processed result in the blockchain.
The following are embodiments of an apparatus of the present application that may be used to perform embodiments of the methods of the present application. For details which are not disclosed in the device embodiments of the present application, reference is made to the method embodiments of the present application.
Fig. 7 is a block diagram illustrating a scheduling apparatus for encoding tasks according to an exemplary embodiment of the present application, where the apparatus includes:
a selecting module 710, configured to select n encoding devices in an idle state from the encoding device cluster, where the idle state is used to characterize that an occupancy rate of a coding channel of the encoding device is lower than an occupancy rate threshold, n is greater than or equal to 2, and n is an integer;
the processing module 720 is configured to determine, from the n encoding devices in the idle state and the first encoding device, a target encoding device with the minimum encoding channel occupancy rate;
and a scheduling module 730, configured to schedule the encoding task to a target encoding device for execution.
In an alternative embodiment, the apparatus includes a sending module 740;
the sending module 740 is configured to randomly send a query request to the n encoding devices in the idle state, where the query request is used to request the occupancy rates of the encoding channels from the n encoding devices; repeatedly executing the step of randomly sending the query request to the n encoding devices at most k times;
the processing module 720 is configured to determine, in response to receiving the encoding channel occupancy rate sent by the at least one encoding device in the process of executing k steps, a target encoding device with the smallest encoding channel occupancy rate from the at least one encoding device and the first encoding device, where k is greater than or equal to 1, and k is an integer.
In an optional embodiment, the scheduling module 730 is configured to add the encoding task to the encoding task queue of the first encoding device in response to not receiving the encoding channel occupancy sent by the encoding device in the process of executing the k steps.
In an alternative embodiment, the apparatus includes an acquisition module 750;
the obtaining module 750 is configured to obtain a use state of a coding channel of a first coding device;
the sending module 740 is configured to send, in response to that the usage state is in a busy state, a list obtaining request to the server, where the list obtaining request is used to obtain a candidate scheduling list from the server, and the busy state is used to indicate that the occupancy rate of the coding channel of the coding device is higher than an occupancy rate threshold;
the processing module 720 is configured to receive a candidate scheduling list sent by a server, where the candidate scheduling list includes m encoding devices in an idle state in an encoding device cluster, m is greater than or equal to n, and m is a positive integer;
the selecting module 710 is configured to randomly select n encoding devices in an idle state from the m encoding devices.
In an optional embodiment, the obtaining module 750 is configured to obtain a pixel queue length of a coding channel of the first coding device, where the pixel queue length is obtained by summing function relationships corresponding to the number of pixels in a coding task;
the processing module 720 is configured to determine, in response to that the length of the pixel queue exceeds a length threshold, a use state of the coding channels of the first coding device as a busy state, where the length threshold has a corresponding relationship with an available number of coding channels in the coding device, and the available number is a number of coding channels that have not processed coding tasks;
the sending module 740 is configured to send a list obtaining request to the server according to the busy state.
In an optional embodiment, the encoding device includes a first type encoding channel, where the first type encoding channel is an encoding channel corresponding to the central processing unit, and the encoding device includes a second type encoding channel, where the second type encoding channel is an encoding channel corresponding to the chip;
the obtaining module 750 is configured to obtain a first available number of the first type coding channels and a first coding parameter corresponding to the first type coding channels, where the first available number is used to represent a maximum number of simultaneous coding tasks performed by the central processing unit, and the first coding parameter is used to represent a performance parameter corresponding to the first type coding channels;
the obtaining module 750 is configured to obtain a second available number of the second type coding channels and a second coding parameter corresponding to the second type coding channels, where the second available number is used to represent a maximum number of coding tasks simultaneously performed by the chip, and the second coding parameter is used to represent a performance parameter corresponding to the second type coding channels;
the processing module 720 is configured to calculate the length threshold according to the first available number, the first encoding parameter, the second available number, and the second encoding parameter.
In an alternative embodiment, the processing module 720 is configured to calculate a first product of the first available number and the first encoding parameter; calculating a second product of the second available number and the second encoding parameter; the sum of the first product and the second product is determined as the length threshold.
In an optional embodiment, the obtaining module 750 is configured to obtain a video frame sequence in the captured video, where video frames in the video frame sequence correspond to frame identifiers;
the processing module 720 is configured to identify video content included in a video frame of the video frame sequence to obtain a content identification result;
the sending module 740 is configured to send the content identification result and the frame identifier corresponding to the video frame to the server;
the processing module 720 is configured to perform encoding processing on a video frame in the video frame sequence to obtain an encoded video frame;
the sending module 740 is configured to send the encoded video frame and the frame identifier corresponding to the video frame to a server, where the server is configured to perform association storage on the video frame corresponding to the content identification result and the encoded video frame according to the frame identifier.
In summary, in the apparatus of this embodiment, by comparing the encoding channel occupancy rates of n +1 encoding devices (n encoding devices in an idle state and a first encoding device), the first encoding device may schedule the generated encoding task to the encoding device with the minimum encoding channel occupancy rate, so as to avoid a situation that a majority of encoding devices in an encoding device cluster schedule the encoding task to the same encoding device due to real-time change of the encoding channel occupancy rates; even if the generated coding tasks are all scheduled to n coding devices in an idle state, the first coding device can still add the coding tasks to the coding queue of the first coding device, so that the coding tasks are prevented from waiting for a long time, and the load of the coding device cluster is kept in a balanced state all the time.
The device of the embodiment also obtains the occupancy rates of the coding channels of the n coding devices by randomly sending query requests to the n coding devices in the idle state, and repeats the query request sending process for the preset times at most, if the occupancy rates of the coding channels are received within the preset times, the received occupancy rates of the coding channels are compared with the occupancy rate of the coding channel of the first coding device, so that the generated coding tasks are scheduled to the coding device with the minimum occupancy rate of the coding channels, the generated coding tasks are distributed and scheduled in time, and the backlog of the coding tasks caused by overlong time for the first coding device to wait for the responses of other coding devices is avoided.
According to the apparatus of this embodiment, if no coding channel occupancy rate is received within the preset number of attempts, the generated coding task is scheduled to the coding task queue of the first coding device. Even if the coding channel occupancy rates change in real time and the n coding devices randomly selected by the first coding device are all in a busy state, the first coding device can determine itself as the target coding device with the minimum coding channel occupancy rate. This ensures that coding tasks are always distributed and scheduled in time, and avoids a backlog of coding tasks caused by the first coding device waiting too long for responses from other coding devices.
The apparatus of this embodiment further obtains the candidate scheduling list from the server to randomly select n encoding devices, and determines the encoding device in the idle state by using the candidate scheduling list generated by the server, so that the first encoding device can accurately determine the encoding device in the idle state.
The apparatus of this embodiment further determines whether the usage state of the coding channel of the first coding device is a busy state by determining whether the length of the pixel queue exceeds a length threshold, so that the first coding device can determine when to send a list acquisition request to the server to obtain a candidate scheduling list.
The apparatus of this embodiment further calculates the length threshold according to the available number of different types of coding channels and the coding parameters corresponding to the first coding device, so that the calculated length threshold is accurate, and it is more accurate to subsequently determine whether the coding channel of the first coding device is in a busy state according to the length threshold.
The apparatus of this embodiment further decouples the identification process and the encoding process, and the server stores the encoded video frames and the video frames containing the content identification results in association via the frame identifiers, thereby obtaining the complete video. The content identification results and the original video can thus be processed and uploaded separately, which avoids delaying the upload of the content identification results due to the time consumed by video processing and encoding, and improves the real-time performance of the identification information.
Fig. 8 shows a schematic structural diagram of a server according to an exemplary embodiment of the present application. The server may be the server 120 in the computer system 100 shown in fig. 1. Specifically, the following sections are included.
The server 800 includes a Central Processing Unit (CPU) 801, a system Memory 804 including a Random Access Memory (RAM) 802 and a Read Only Memory (ROM) 803, and a system bus 805 connecting the system Memory 804 and the central processing unit 801. The server 800 also includes a basic input/output system (I/O system) 806, which facilitates transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809 such as a mouse, keyboard, etc. for user input of information. Wherein a display 808 and an input device 809 are connected to the central processing unit 801 through an input output controller 810 connected to the system bus 805. The basic input/output system 806 may also include an input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 810 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or Compact disk Read Only Memory (CD-ROM) drive.
Computer-readable media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other Solid State Memory technology, CD-ROM, Digital Versatile Disks (DVD), or Solid State Drives (SSD), other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 804 and mass storage 807 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 800 may also operate through a network, such as the Internet, connecting to remote computers on the network. That is, the server 800 may be connected to the network 812 through the network interface unit 811 coupled to the system bus 805, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 811.
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU.
Fig. 9 shows a block diagram of a computer device 900 provided in an exemplary embodiment of the present application. The computer device 900 may be: a coding device with a coding function, an intelligent camera, an edge computing device, a notebook computer, or a desktop computer. Computer device 900 may also be referred to by other names such as user device, portable computer device, laptop computer device, desktop computer device, and so forth. For example, the computer device may be the first camera 111, the second camera 112, or the third camera 113 shown in fig. 1.
Generally, computer device 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 9-core processor, an 8-core processor, and so forth. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one instruction for execution by processor 901 to implement a method of scheduling an encoded task provided by method embodiments herein.
In some embodiments, computer device 900 may also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, etc. The radio frequency circuitry 904 may communicate with other computer devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, the display screen 905 also has the ability to capture touch signals on or over the surface of the display screen 905. The touch signal may be input to the processor 901 as a control signal for processing. At this point, the display 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 905 may be one, providing the front panel of the computer device 900; in other embodiments, the number of the display screens 905 may be at least two, and each of the display screens may be disposed on a different surface of the computer device 900 or may be in a foldable design; in other embodiments, the display 905 may be a flexible display, disposed on a curved surface or on a folded surface of the computer device 900. Even more, the display screen 905 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display panel 905 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 906 is used to capture images or video. Optionally, camera assembly 906 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of a computer apparatus, and a rear camera is disposed on a rear surface of the computer apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuit 907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 901 for processing, or inputting the electric signals to the radio frequency circuit 904 for realizing voice communication. The microphones may be multiple and placed at different locations on the computer device 900 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to locate the current geographic location of the computer device 900 for navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 909 is used to supply power to the various components in the computer device 900. The power source 909 may be alternating current, direct current, disposable or rechargeable. When power source 909 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
Those skilled in the art will appreciate that the configuration illustrated in FIG. 9 is not intended to be limiting of the computer device 900 and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components may be employed.
Embodiments of the present application further provide a computer device, including: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the scheduling method of the encoding task in the above embodiments.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the scheduling method of the encoding task in the above embodiments.
Embodiments of the present application also provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the scheduling method of the encoding task as in the embodiments.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A scheduling method for an encoding task is applied to a first encoding device in a cluster of encoding devices, and the method comprises the following steps:
acquiring the use state of a coding channel of the first coding device, responding to the busy state of the use state, and sending a list acquisition request to a server, wherein the list acquisition request is used for acquiring a candidate scheduling list from the server, and the busy state is used for representing that the occupancy rate of the coding channel of the coding device is higher than an occupancy rate threshold value; receiving the candidate scheduling list sent by the server, wherein the candidate scheduling list comprises m encoding devices in an idle state in the encoding device cluster, m is greater than or equal to n, and m is a positive integer; randomly selecting n encoding devices in an idle state from the m encoding devices, wherein the idle state is used for representing that the occupancy rate of an encoding channel of the encoding device is lower than an occupancy rate threshold value, n is more than or equal to 2, and n is an integer;
determining the target coding device with the minimum coding channel occupancy rate from the n coding devices in the idle state and the first coding device;
and scheduling the coding task to the target coding device for execution.
2. The method according to claim 1, wherein the determining the target encoding device with the minimum encoding channel occupancy rate from the n encoding devices in the idle state and the first encoding device comprises:
randomly sending query requests to the n encoding devices in the idle state, wherein the query requests are used for requesting the encoding channel occupancy rates from the n encoding devices;
repeatedly performing the step of randomly transmitting the query request to the n encoding devices at most k times;
and in response to receiving the encoding channel occupancy rate sent by at least one encoding device in the process of executing the k steps, determining a target encoding device with the minimum encoding channel occupancy rate from the at least one encoding device and the first encoding device, wherein k is greater than or equal to 1 and is an integer.
3. The method of claim 2, further comprising:
and in response to the fact that the encoding channel occupancy rate sent by the encoding device is not received in the process of executing the k steps, adding the encoding task to an encoding task queue of the first encoding device.
4. The method of claim 1, wherein sending a list acquisition request to a server in response to the usage status being in a busy state comprises:
acquiring the pixel queue length of a coding channel of the first coding device, wherein the pixel queue length is obtained by the sum of functional relations corresponding to the number of pixels in the coding task;
determining a use status of a coding channel of the first coding device as the busy status in response to the pixel queue length exceeding a length threshold, the length threshold having a corresponding relationship with an available number of coding channels in the coding device, the available number being a number of coding channels that have not processed the coding task;
and sending the list acquisition request to the server according to the busy state.
5. The method of claim 4, wherein the encoding device comprises a first type of encoding channel, the first type of encoding channel being a central processor corresponding encoding channel, the encoding device comprises a second type of encoding channel, the second type of encoding channel being a chip corresponding encoding channel;
the length threshold is obtained by the following method:
acquiring a first available number of the first type coding channels and a first coding parameter corresponding to the first type coding channels, wherein the first available number is used for representing the maximum number of the central processing unit for simultaneously performing the coding tasks, and the first coding parameter is used for representing a performance parameter corresponding to the first type coding channels;
acquiring a second available number of the second type coding channels and second coding parameters corresponding to the second type coding channels, wherein the second available number is used for representing the maximum number of the coding tasks simultaneously performed by the chip, and the second coding parameters are used for representing performance parameters corresponding to the second type coding channels;
calculating the length threshold based on the first available quantity, the first encoding parameter, the second available quantity, and the second encoding parameter.
6. The method of claim 5, wherein calculating the length threshold based on the first available number, the first coding parameter, the second available number, and the second coding parameter comprises:
calculating a first product of the first available number and the first encoding parameter;
calculating a second product of the second available quantity and the second encoding parameter;
determining a sum of the first product and the second product as the length threshold.
7. The method of any of claims 1 to 3, further comprising:
acquiring a video frame sequence in a collected video, wherein video frames in the video frame sequence correspond to frame identifiers;
identifying video content contained in video frames in the video frame sequence to obtain a content identification result;
sending the content identification result and a frame identifier corresponding to the video frame to a server;
coding the video frames in the video frame sequence to obtain coded video frames;
and sending the encoded video frame and a frame identifier corresponding to the video frame to the server, wherein the server is used for storing the video frame corresponding to the content identification result and the encoded video frame in an associated manner according to the frame identifier.
8. The device for scheduling the coding tasks is characterized by comprising an acquisition module, a sending module, a processing module, a selection module and a scheduling module;
the acquisition module is used for acquiring the use state of a coding channel of the first coding device;
a sending module, configured to send a list obtaining request to a server in response to that the usage state is in a busy state, where the list obtaining request is used to obtain a candidate scheduling list from the server, and the busy state is used to characterize that an occupancy rate of a coding channel of the coding device is higher than an occupancy rate threshold;
the processing module is used for receiving the candidate scheduling list sent by the server, wherein the candidate scheduling list comprises m encoding devices in an idle state in an encoding device cluster, m is greater than or equal to n, and m is a positive integer;
the selection module is used for randomly selecting n encoding devices in an idle state from the m encoding devices, wherein the idle state is used for representing that the occupancy rate of the encoding channel of the encoding device is lower than an occupancy rate threshold value, n is more than or equal to 2, and n is an integer;
the processing module is further configured to determine, from the n encoding devices in the idle state and the first encoding device, a target encoding device with the minimum encoding channel occupancy rate;
and the scheduling module is used for scheduling the coding task to the target coding equipment for execution.
9. A computer device, characterized in that the computer device comprises: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement a scheduling method of an encoding task according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, implements the scheduling method of an encoding task according to any one of claims 1 to 7.
CN202110024208.7A 2021-01-08 2021-01-08 Method, device and equipment for scheduling coding tasks and storage medium Active CN112346845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110024208.7A CN112346845B (en) 2021-01-08 2021-01-08 Method, device and equipment for scheduling coding tasks and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110024208.7A CN112346845B (en) 2021-01-08 2021-01-08 Method, device and equipment for scheduling coding tasks and storage medium

Publications (2)

Publication Number Publication Date
CN112346845A CN112346845A (en) 2021-02-09
CN112346845B true CN112346845B (en) 2021-04-16

Family

ID=74427877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110024208.7A Active CN112346845B (en) 2021-01-08 2021-01-08 Method, device and equipment for scheduling coding tasks and storage medium

Country Status (1)

Country Link
CN (1) CN112346845B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238853B (en) * 2021-06-15 2021-11-12 上海交通大学 Server-free computing scheduling system and method based on function intermediate expression
CN114245133A (en) * 2022-02-23 2022-03-25 北京拙河科技有限公司 Video block coding method, coding transmission method, system and equipment
CN115269209B (en) * 2022-09-30 2023-01-10 浙江宇视科技有限公司 GPU cluster scheduling method and server
CN115619614B (en) * 2022-12-19 2023-08-01 北京中昌工程咨询有限公司 Intelligent classification coding method and system for rail transit assets

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1873613A (en) * 2005-05-30 2006-12-06 英业达股份有限公司 Load balanced system and method of preloading files
JP2007328508A (en) * 2006-06-07 2007-12-20 Sony Corp Recording device, method and program
CN105786600A (en) * 2016-02-02 2016-07-20 北京京东尚科信息技术有限公司 Task scheduling method and device
CN105975334A (en) * 2016-04-25 2016-09-28 深圳市永兴元科技有限公司 Distributed scheduling method and system of task
CN109345305A (en) * 2018-09-28 2019-02-15 广州凯风科技有限公司 A kind of elevator electrical screen advertisement improvement analysis method based on face recognition technology
CN110018893A (en) * 2019-03-12 2019-07-16 平安普惠企业管理有限公司 A kind of method for scheduling task and relevant device based on data processing
CN111209110A (en) * 2019-12-31 2020-05-29 浙江明度智控科技有限公司 Task scheduling management method, system and storage medium for realizing load balance
CN111290841A (en) * 2018-12-10 2020-06-16 北京沃东天骏信息技术有限公司 Task scheduling method and device, computing equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2008299666A1 (en) * 2007-09-14 2009-03-19 Bae Systems Plc Real time priority based scheduling for radar tasks
US10802831B2 (en) * 2017-06-30 2020-10-13 Sap Se Managing parallel processing
US11113113B2 (en) * 2017-09-08 2021-09-07 Apple Inc. Systems and methods for scheduling virtual memory compressors
CN109766175A (en) * 2018-12-28 2019-05-17 深圳晶泰科技有限公司 Resource elastic telescopic system and its dispatching method towards high-performance calculation on cloud

Also Published As

Publication number Publication date
CN112346845A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN112346845B (en) Method, device and equipment for scheduling coding tasks and storage medium
US10938725B2 (en) Load balancing multimedia conferencing system, device, and methods
CN111090687B (en) Data processing method, device and system and computer readable storage medium
US10803676B2 (en) 3D scene reconstruction using shared semantic knowledge
US9325930B2 (en) Collectively aggregating digital recordings
US9146940B2 (en) Systems, methods and apparatus for providing content based on a collection of images
US20140293069A1 (en) Real-time image classification and automated image content curation
US11750682B2 (en) Messaging system with circumstance configuration framework for hardware
CN111935663B (en) Sensor data stream processing method, device, medium and electronic equipment
CN114244595A (en) Method and device for acquiring authority information, computer equipment and storage medium
CN111435377A (en) Application recommendation method and device, electronic equipment and storage medium
CN110769050B (en) Data processing method, data processing system, computer device, and storage medium
US11048745B2 (en) Cognitively identifying favorable photograph qualities
CN110727808A (en) Image processing method and device and terminal equipment
CN114065056A (en) Learning scheme recommendation method, server and system
CN113762585A (en) Data processing method, account type identification method and device
CN113392676A (en) Multi-target tracking behavior identification method and device
CN112953993A (en) Resource scheduling method, device, network system and storage medium
CN113469438B (en) Data processing method, device, equipment and storage medium
CN113673427B (en) Video identification method, device, electronic equipment and storage medium
CN113591958B (en) Method, device and equipment for fusing internet of things data and information network data
CN112636993B (en) Information display method and device, terminal and server
CN113705309A (en) Scene type judgment method and device, electronic equipment and storage medium
CN117112087A (en) Ordering method of desktop cards, electronic equipment and medium
CN117834328A (en) Smart home control method, smart home control device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40038798

Country of ref document: HK