CN110856045A - Video processing method, electronic device, and storage medium - Google Patents

Video processing method, electronic device, and storage medium Download PDF

Info

Publication number
CN110856045A
CN110856045A
Authority
CN
China
Prior art keywords
video processing
processing
running time
application
edge node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910942696.2A
Other languages
Chinese (zh)
Other versions
CN110856045B (en)
Inventor
王琦
金晶
潘兴浩
杜欧杰
王斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd filed Critical MIGU Video Technology Co Ltd
Priority to CN201910942696.2A priority Critical patent/CN110856045B/en
Publication of CN110856045A publication Critical patent/CN110856045A/en
Application granted granted Critical
Publication of CN110856045B publication Critical patent/CN110856045B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784Data processing by the network

Abstract

The embodiment of the invention relates to the field of multimedia processing and discloses a video processing method, an electronic device, and a storage medium. In the invention, the video processing capability of the currently accessed edge node is acquired at a preset period, the video processing capability comprising an application list and the running time of each application in the application list; a first processing running time required for processing a target video processing task at the edge node is determined according to the video processing capability of the edge node, and a second processing running time required for processing the target video processing task locally is determined; if the first processing running time is less than the second processing running time, the target video processing task is offloaded to the edge node for processing. The consumption of the mobile terminal's video computing performance by video playback is thereby relieved, terminal devices with lower performance can still provide normal video services, and the performance experience of those terminal users is improved.

Description

Video processing method, electronic device, and storage medium
Technical Field
The present invention relates to the field of multimedia processing, and in particular, to a video processing method, an electronic device, and a storage medium.
Background
In conventional video processing technology, processing is generally performed on the terminal: video processing tasks such as drag-to-play, fast forward and rewind, video frame extraction, video picture compression and uploading, video rendering, and video decoding are all handled on the terminal side.
However, the inventor has found that when a mobile terminal takes on a video computing task, for example generating a picture by extracting frames from a live broadcast picture, the live stream content must be frame-extracted while it is being acquired, so the terminal's energy consumption is significant and the mobile terminal is less able to support other concurrent computing tasks. Moreover, the current video processing mode depends heavily on the computing performance of the mobile terminal, so the processing response of mobile terminals with weak computing power is delayed, which degrades the video processing experience.
Disclosure of Invention
An object of embodiments of the present invention is to provide a video processing method, an electronic device, and a storage medium, so that the consumption of the mobile terminal's video computing performance by video playback is reduced and the video processing speed is increased.
To solve the above technical problem, an embodiment of the present invention provides a video processing method, including: acquiring the video processing capacity of the currently accessed edge node according to a preset period, wherein the video processing capacity comprises an application list and the running time of each application in the application list; determining a first processing running time required by the processing of the target video processing task at the edge node according to the video processing capacity of the edge node, and determining a second processing running time required by the local processing of the target video processing task; and if the first processing running time is less than the second processing running time, unloading the target video processing task to the edge node for processing.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the video processing method.
The embodiment of the invention also provides a storage medium storing a computer program which, when executed by a processor, implements the video processing method.
Compared with the prior art, the embodiments of the invention obtain the video processing capability of the edge node currently accessed by the terminal, compare the time the video processing task would consume at the terminal with the time it would consume at the edge node, and thereby determine whether the video processing task is executed on the edge node. Offloading part of the video processing tasks to the edge node greatly relieves the video processing performance consumption of the mobile terminal and increases the video processing speed, so that terminal devices with lower performance can also provide normal video services, improving the performance experience of those terminal users.
In addition, the running time of an application is calculated from the theoretical running time of the application and the actual available dynamic coefficient of the application, where the actual available dynamic coefficient is calculated from a plurality of actual running times of the application and the actual loads corresponding to those running times. The predicted time of video processing on the edge node is therefore more accurate, the decision on which side the video processing is carried out is more reasonable, and a better user experience is provided.
In addition, if the first processing running time is greater than the second processing running time, the running time CDi required to complete each application called by the target video processing task locally and the running time CSi required to complete that application at the edge node are obtained; applications whose CDi is greater than CSi are executed at the edge node, and applications whose CDi is less than or equal to CSi are executed locally. Finely distributing the video processing subtasks to the local terminal or offloading them to the edge node according to the predicted running times brings the time consumed by the video processing task closer to the theoretical minimum and further reduces the video processing time.
In addition, if the currently accessed edge node changes and the target video processing task is not yet completed, a processing mark of the target video processing task is obtained, the processing mark indicating the processing party of the target video processing task; if the processing party indicated by the processing mark is the edge node, the target video processing task is switched to local processing. When the user terminal moves rapidly, the change of position affects which edge node the terminal is connected to; the execution end of the video processing task is then re-decided according to the video processing capability of the current edge node, achieving an optimal video task processing mode and further improving the user experience.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
Fig. 1 is a flowchart of a video processing method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a video processing method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a video processing method after an access edge node is changed according to a third embodiment of the present invention;
fig. 4 is a block diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. It will be appreciated by those of ordinary skill in the art that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description and should not constitute any limitation on the specific implementation of the present invention, and the embodiments may be combined with and refer to each other where there is no contradiction.
A first embodiment of the present invention relates to a video processing method. The embodiment is applied to a terminal device, such as (but not limited to) a mobile phone or a tablet computer. The video processing method of the embodiment includes: acquiring the video processing capability of the currently accessed edge node at a preset period, wherein the video processing capability comprises an application list and the running time of each application in the application list; determining, according to the video processing capability of the edge node, a first processing running time required for processing a target video processing task at the edge node, and determining a second processing running time required for processing the target video processing task locally; and if the first processing running time is less than the second processing running time, offloading the target video processing task to the edge node for processing.
The implementation details of the video processing method of the present embodiment are described in detail below; they are provided only for ease of understanding and are not necessary for implementing the present embodiment.
The video processing method in this embodiment has a flow as shown in fig. 1, and specifically includes:
step 101, acquiring the video processing capability of the currently accessed edge node. The video processing capability includes an application list and a running time of each application in the application list.
Specifically, in this embodiment a smartphone is used as the terminal device; when the access network of the smartphone is a 5G network, the user operates the terminal to play a video. Since the MEC server in the 5G network, that is, the edge node mentioned above, can provide the required services and cloud computing functions for the end user nearby, the interactive response between the terminal and the edge node can be as low as 1 ms thanks to the high bandwidth and low latency of the 5G environment. Consequently, when a video processing application runs on the edge node, the video processing task can be executed on the edge node, reducing the computing performance consumption of the terminal without affecting the user experience. To this end, after the user operates the terminal to play a video, the terminal needs to acquire the video processing capability of the currently accessed edge node.
In a specific implementation, the terminal runs an application program for video playing. This application connects to the MEC server in the 5G network through the access network, queries the list of service applications running on the MEC server through a RESTful interface, obtains the application list L = {A1, A2, A3, ..., An} running on the current edge node, caches the obtained application list locally, and periodically updates it according to an expiration time set by the user. The expiration time defaults to 1 hour; operation and maintenance personnel can set a more suitable expiration time for a specific use environment to ensure the validity of the locally cached application list. In addition, each application deployed on the MEC server manages the performance of its own capability and provides a corresponding performance query interface, so that while obtaining the application list the terminal also obtains the running time Kn of each application in the list, where Kn measures the performance state of the current service application. The terminal needs to acquire the running time Kn of each application periodically; likewise, on the MEC server, each application service periodically recalculates its performance capability and updates the running time to reflect its current performance state.
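For ease of understanding, the following Python sketch illustrates the terminal-side periodic capability query described above. It is not part of the patent; the endpoint paths, field names, and the EdgeCapabilityCache class are assumptions made purely for illustration.

# Hypothetical sketch of the terminal-side cache of the edge node's application
# list and per-application running times Kn; endpoint paths and field names are
# assumed, not taken from the patent.
import time
import requests  # third-party HTTP client, assumed available

class EdgeCapabilityCache:
    def __init__(self, edge_base_url, expire_seconds=3600):  # default expiration: 1 hour
        self.edge_base_url = edge_base_url
        self.expire_seconds = expire_seconds
        self._runtimes = {}      # application name -> running time Kn (ms)
        self._fetched_at = 0.0

    def _refresh(self):
        # Query the list of service applications L = {A1, ..., An} on the MEC server
        # through its RESTful interface (path assumed).
        apps = requests.get(f"{self.edge_base_url}/apps", timeout=2).json()
        runtimes = {}
        for name in apps:
            # Each application exposes a performance query interface returning Kn.
            perf = requests.get(f"{self.edge_base_url}/apps/{name}/performance",
                                timeout=2).json()
            runtimes[name] = perf["running_time_ms"]
        self._runtimes = runtimes
        self._fetched_at = time.time()

    def capability(self):
        # Re-query the edge node once the locally cached list has expired.
        if time.time() - self._fetched_at > self.expire_seconds:
            self._refresh()
        return dict(self._runtimes)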
The running time of each application in the application list may be calculated from the theoretical running time of the application and the actual available dynamic coefficient of the application. Specifically, the running time of the application is obtained as the product of the theoretical running time f(Nn) of the application and the actual available dynamic coefficient Un of the application, i.e., Kn = f(Nn) × Un. Un can be calculated from a number of actual running times of the application and the actual loads corresponding to those running times, and the specific calculation process is as follows:
First, a test experience static data table Qn of application n is set up; the table records the correspondence between the performance load of application n during testing and the running time measured on the test system. The theoretical running time is then set to f(Nn) = Qn(Nn), where Nn is the current load of application n. The first m actual running times of application n constitute a data set Xn = {X1, X2, ..., Xm}, and the first m actual loads of application n constitute a data set Yn = {Y1, Y2, ..., Ym}. The actual available dynamic coefficient Un of application n is then set, where Un is the average, over the first m tasks of the same type, of the actual running time divided by the theoretical running time at the corresponding load, namely:
Un = (1/m) × Σ_{j=1..m} ( Xj / f(Yj) )
where m is a value that can be set according to requirements, and is 100 by default.
In a specific application, each time a terminal accesses it, the edge node stores the data of the video processing task decisions on the server; that is, the data set Xn of actual running times and the coefficient Un (the average of actual running time over theoretical running time at the corresponding load for the same type of task) are continuously updated. The predicted running time Kn therefore keeps approaching the actual running time, the Kn value becomes more accurate, and the video processing decision becomes more reasonable.
From the above data, the current predicted value of the running time of application n is Kn = f(Nn) × Un.
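As a concrete illustration of this prediction, the coefficient Un and the predicted running time Kn can be computed as in the following Python sketch. It is written under the definitions stated above; the function and variable names are assumptions, not code from the patent.

# Hypothetical sketch of the running-time prediction Kn = f(Nn) * Un described above.
def theoretical_runtime(q_table, load):
    # f(N) = Qn(N): theoretical running time for a given performance load, looked up
    # in the application's test experience static data table Qn.
    return q_table[load]

def dynamic_coefficient(q_table, actual_runtimes, actual_loads):
    # Un: average over the last m tasks of (actual running time / theoretical running
    # time at the corresponding load); the description defaults m to 100.
    ratios = [x / theoretical_runtime(q_table, y)
              for x, y in zip(actual_runtimes, actual_loads)]
    return sum(ratios) / len(ratios)

def predicted_runtime(q_table, current_load, actual_runtimes, actual_loads):
    # Kn = f(Nn) * Un
    un = dynamic_coefficient(q_table, actual_runtimes, actual_loads)
    return theoretical_runtime(q_table, current_load) * un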
Step 102, determining a first processing running time of the target video processing task according to the video processing capability of the edge node, and simultaneously determining a second processing running time.
Specifically, the first processing running time of the target video processing task is the time required by the edge node to process the target video processing task; the second processing runtime of the target video processing task is the time it takes for the terminal to process the target video processing task locally. After the terminal acquires a series of performance parameters from the edge node, which have an influence on the time taken for video processing, the first processing runtime and the second processing runtime are calculated according to the formula, respectively.
In one specific implementation, when the terminal needs to process a video task, the evaluation can be performed one by one according to the following algorithm:
[Formula images in the original: the expressions used to compute the first processing running time CSn and the second processing running time CDn.]
CSn represents the first processing running time; n represents the number of applications called by the target video processing task; εn represents the set of applications called by the target video processing task; di represents the amount of interactive information exchanged between the ith application and the module calling it; B represents the network bandwidth; and Ki represents the predicted running time of the ith application acquired from the edge node in step 101. Ki is the main measure of the edge node's performance for the video processing task: the smaller Ki is, the faster the video processing on the edge node. It will be appreciated that any one video processing task may invoke a number of different applications for processing. The first processing running time is thus influenced by the terminal's network connection bandwidth B and the amount of interactive information di between the module and each application.
CDn denotes the second processing running time; Tn denotes the running time of the module that itself processes the target video processing task; n denotes the number of applications called by the target video processing task; εn denotes the set of applications called by the target video processing task; and Pi denotes whether the edge node has the ith application called by the target video processing task. Before calculating the second processing running time, the terminal also needs to determine, from the list of running applications on the current server acquired in step 101, whether each application the video processing task needs to call is running on the current edge node.
In a specific implementation, if an application i called by the task is an application for rendering and an application capable of rendering exists in the obtained list of running applications, Pi is taken as 1; that is, the currently accessed edge node has the rendering capability, and the rendering part of the task can be allocated to the application on the edge node. If no application in the list of running applications can perform rendering, Pi = ∞; that is, the currently accessed edge node does not have the rendering capability, so CSi, which includes Pi, is inevitably greater than CDi, and the rendering part of the video processing task is performed by the local application.
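Because the formulas themselves appear only as images in the original text, the following Python sketch is one plausible reading of the symbol definitions above: per called application, transfer time di/B plus edge running time Ki, weighted by Pi, on the edge side; the module's own time Tn plus each application's local running time on the terminal side. The data structure and function names are assumptions, and the sketch should not be read as the patent's literal formulas.

# Hypothetical reading of the CSn / CDn comparison; not the patent's literal formulas.
import math
from dataclasses import dataclass

@dataclass
class CalledApp:
    name: str
    data_bytes: int          # interactive information amount di exchanged with the caller
    local_runtime_ms: float  # predicted running time of this application on the terminal

def edge_processing_time(called_apps, bandwidth_bps, edge_runtimes_ms, edge_has_app):
    # CSn: sum over called applications of Pi * (di / B + Ki); Pi is infinite when the
    # edge node does not run application i, which forces that work back to the terminal.
    total = 0.0
    for app in called_apps:
        p_i = 1.0 if edge_has_app.get(app.name, False) else math.inf
        transfer_ms = app.data_bytes * 8 / bandwidth_bps * 1000.0
        total += p_i * (transfer_ms + edge_runtimes_ms.get(app.name, math.inf))
    return total

def local_processing_time(module_runtime_ms, called_apps):
    # CDn: the calling module's own running time Tn plus each application's local time.
    return module_runtime_ms + sum(app.local_runtime_ms for app in called_apps)

def should_offload(cs_n, cd_n):
    # Steps 103-105: offload the whole task to the edge node only when CSn < CDn.
    return cs_n < cd_n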
The next step is executed after the first processing running time and the second processing running time are calculated.
Step 103, determining whether the first processing running time is less than the second processing running time. If the first processing runtime is less than the second processing runtime, step 104 is executed. Otherwise, step 105 is performed.
Step 104, offloading the target video processing task to the edge node for processing.
Step 105, allocating the target video processing task to the local terminal for processing.
Specifically, after the first processing running time and the second processing running time are calculated, they are compared. When the first processing running time is smaller than the second processing running time, the time spent running the video processing task on the edge node is smaller than the time spent running it locally on the terminal, so the video processing task is offloaded to the edge node for processing, providing a better video playing experience for the terminal user. When the first processing running time is not less than the second processing running time, the video processing task is by default allocated locally for processing.
The above examples in the present embodiment are only for convenience of understanding, and do not limit the technical aspects of the present invention.
Compared with the prior art, when the edge node currently connected to the terminal has video processing capability, whether the video processing task is executed on the edge node is determined by comparing the time the task would consume on the terminal with the time it would consume on the edge node. This achieves the goal of offloading part of the video processing tasks to the edge node, greatly relieving the video processing performance consumption of the mobile terminal and increasing the video processing speed.
A second embodiment of the present invention relates to a video processing method, and a specific flow is shown in fig. 2, where the method includes:
step 201 and step 202 are similar to step 101 and step 102 in the first embodiment of the present invention, and are not described herein again.
Step 203, determine whether the first processing running time is less than the second processing running time. If yes, go to step 204; step 204 is similar to step 104 in the first embodiment of the present invention, and is not described herein again.
If the determination result is negative, go to step 205;
and step 205, distributing each processing process to the client self-processing or the edge node processing according to the minimum running time of each application.
Specifically, when the first processing running time is less than the second processing running time, running the video processing task entirely at the edge node takes less time than running it at the local terminal. Conversely, when the first processing running time is greater than the second processing running time, this only means that running the whole video processing task at the edge node takes longer than running it at the local terminal. Since any video processing task can call several different applications to execute the corresponding processing procedures, in this case the time each application would take on the edge node and the time it would take on the local terminal can be calculated separately. Comparing the two running times for each application gives the difference between processing that application on the two ends. Procedures that run faster on the local terminal are then allocated to the client application for processing, and procedures that run faster on the edge node are allocated to the edge node application, so that the time spent on the whole video processing task is minimized.
In a specific implementation, assume that a video processing task needs to call applications A, B, and C, and that the first processing running time of the task is 800 ms and the second processing running time is 700 ms, calculated with the formulas mentioned in the first embodiment of the present invention. With the technical scheme of the first embodiment, the decision would be to execute the video processing task locally at the terminal, taking 700 ms. With the technical scheme of the second embodiment, the running times of applications A, B, and C executed locally on the terminal and executed on the edge node are further calculated. Assume TA1 (running time of application A on the edge node) is 100 ms, TA2 (running time of application A on the local terminal) is 300 ms, TB1 is 100 ms, TB2 is 300 ms, TC1 is 600 ms, and TC2 is 100 ms. Since TA1 < TA2, TB1 < TB2, and TC1 > TC2, the final decision is to run applications A and B on the edge node and application C on the local terminal, with a total time consumption TT = TA1 + TB1 + TC2 = 100 ms + 100 ms + 100 ms = 300 ms. The video processing task therefore runs faster.
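The per-application placement of this embodiment can be sketched as follows. This is a hypothetical Python illustration consistent with the A, B, C example above; the function name and input format are assumptions rather than anything defined by the patent.

# Hypothetical sketch of step 205: place each called application wherever its own
# predicted time is smaller (edge time CSi vs. local time CDi).
def split_assignment(per_app_times_ms):
    # per_app_times_ms: {application name: (cs_i, cd_i)} with times in milliseconds
    plan, total_ms = {}, 0.0
    for name, (cs_i, cd_i) in per_app_times_ms.items():
        if cd_i > cs_i:          # faster on the edge node
            plan[name] = "edge"
            total_ms += cs_i
        else:                    # CDi <= CSi: keep it on the local terminal
            plan[name] = "local"
            total_ms += cd_i
    return plan, total_ms

# With the example above: A -> edge, B -> edge, C -> local, total 300 ms.
plan, total = split_assignment({"A": (100, 300), "B": (100, 300), "C": (600, 100)})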
It can be seen that the second embodiment is an optimization of the scheme of the first embodiment. The main difference is that, after the first processing running time is determined to be not less than the second processing running time, the first embodiment allocates the video processing task to be executed locally by default, whereas in the second embodiment each processing procedure is allocated to the client itself or to the edge node according to the minimum running time of each application.
In addition, those skilled in the art can understand that, by the technical means in the second embodiment of the present invention, the video processing task can be more finely distributed to the local terminal or the edge node, so as to achieve the purpose of minimizing the video processing running time, thereby providing better performance experience for the user.
A third embodiment of the present invention relates to a video processing method, which is further improved based on the first embodiment, and after deciding a processing party of a target video processing task according to a comparison result between a first processing running time and a second processing running time, it is further required to detect whether a currently accessed edge node changes, and if the currently accessed edge node changes and the target video processing task is not completed, a processing flag of the target video processing task is obtained, and the processing flag is used for indicating the processing party of the target video processing task; and if the processing party indicated by the processing mark is the edge node, switching the target video processing task to local processing.
The parts of this embodiment that are the same as those of the first embodiment are not described again. The parts modified with respect to the first embodiment are described in detail below and, as shown in fig. 3, include:
Step 301, after detecting that the accessed edge node has changed, determining whether the current video processing task is completed. If the current video processing task is not completed, execute step 302; otherwise, end the video processing flow.
Specifically, when the mobile terminal accesses the 5G network, the user may perform video playing operations while moving quickly. In this scenario, since the mobile terminal keeps moving and each edge node covers only a fixed area, the edge node accessed by the terminal keeps switching, so the terminal needs to continuously monitor changes of the currently accessed edge node. As described in the first embodiment, the applications currently running on each edge node differ, and the different geographical locations of the edge nodes also mean that the services being carried for the terminals accessing them differ, so the video processing performance a new edge node can provide differs from that of the previously accessed edge node. A processing decision made from the parameters of the previous edge node therefore cannot be applied to the new edge node accessed after the terminal moves, and in this embodiment the processing state of the current video processing task is adjusted according to the actual situation of the current edge node.
In addition, if the target video processing task has been completed, no additional operation is required; only when the target video processing task has not been completed does its processing state need to be changed to maximize the processing speed. A completed target video processing task means that the terminal no longer needs to provide performance resources for video processing and does not need to re-decide where the video processing task runs.
Step 302, judging whether the processing party indicated by the processing mark is an edge node; if yes, directly entering step 303; if the determination result is negative, go to step 304 directly.
Step 303, the target video processing task is switched to local processing.
Specifically, before deciding whether to offload the target video processing task to the edge node for execution, the list of running applications and the video processing performance of the currently accessed edge node must first be acquired. To ensure the continuity of the video playing operation, if the processing party indicated by the processing mark was the edge node when the previous edge node was accessed, the video processing task needs to be switched to the local terminal for execution. If the decision made while the previous edge node was accessed was local execution on the terminal, that is, the processing party indicated by the processing mark is the local terminal, the video processing task simply continues to be executed.
Step 304, acquiring the video processing capability of the currently accessed edge node, and determining the current remaining running time of the unfinished subtask in the target video processing task.
Specifically, the current remaining running time of the unfinished subtasks of the target video processing task includes a first remaining running time and a second remaining running time. These correspond to the first and second processing running times of the first embodiment: since part of the target video processing task was already completed while the previous edge node was accessed, the two remaining running times measure how fast the unfinished part of the task would be executed at the local terminal and at the edge node, respectively. The next step is executed after the two remaining running times are obtained through calculations similar to those in the first embodiment.
In a specific implementation, the terminal first obtains the processing progress of the current target video processing task, that is, it extracts the portion that has not yet been processed, identifies the applications required to execute that portion, queries the currently accessed edge node for its list of running applications, obtains the performance status of the applications on that edge node, and calculates a first processing running time and a second processing running time for the unprocessed portion, that is, the first and second remaining running times, using the same formulas as in the first embodiment.
Step 305, determining whether the first remaining running time is less than the second remaining running time. If the first remaining running time is less than the second remaining running time, go to step 306; otherwise, go to step 307.
This step is similar to step 103 in the first embodiment of the present invention, and is not described herein again.
Step 306, offloading the target video processing task to the currently accessed edge node for processing.
This step is similar to step 104 in the first embodiment of the present invention, and is not described herein again.
Step 307, allocating the target video processing task to the local terminal for processing.
This step is similar to step 105 in the first embodiment of the present invention, and is not described herein again.
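The handover flow of this embodiment (steps 301 to 307) can be summarized by the following hypothetical Python sketch. The task object, its fields, and the way the two remaining running times are obtained are assumptions made for illustration, not structures defined by the patent.

# Hypothetical sketch of re-deciding the processing party after an edge-node change.
from dataclasses import dataclass

@dataclass
class VideoTask:
    processing_flag: str = "local"   # processing party indicated by the processing mark
    completed: bool = False

def on_edge_node_changed(task, cs_remaining_ms, cd_remaining_ms):
    # cs_remaining_ms / cd_remaining_ms: first and second remaining running times of the
    # unfinished subtasks, computed against the newly accessed edge node (e.g. with the
    # formulas of the first embodiment).
    if task.completed:
        return task.processing_flag          # step 301: nothing left to re-decide
    if task.processing_flag == "edge":
        task.processing_flag = "local"       # step 303: switch back to keep playback continuous
    # steps 305-307: offload the remainder only if the new edge node is faster
    task.processing_flag = "edge" if cs_remaining_ms < cd_remaining_ms else "local"
    return task.processing_flag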
The steps of the above methods are divided for clarity; in implementation they may be combined into one step, or a step may be split into multiple steps, and as long as the same logical relationship is included, they are all within the protection scope of this patent. Adding insignificant modifications to the algorithms or processes, or introducing insignificant design changes without changing the core of the algorithms or processes, is also within the scope of the patent.
Compared with the prior art, this embodiment considers that the terminal device may perform video operations while moving quickly, during which changes of position cause the terminal to connect to different edge nodes. In such a scenario, by optimizing the flow, the method achieves seamless switching of the video processing task's processing party and minimizes the consumption of terminal computing performance, so that the terminal can still make rapid decisions on video task processing, further improving the user's video processing experience in fast-moving scenes.
A fourth embodiment of the present invention relates to an electronic device, which is specifically configured as shown in fig. 4, and includes:
at least one processor 401; and a memory 402 communicatively coupled to the at least one processor 401; the memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor to enable the at least one processor to execute the video processing method according to the first, second, and third embodiments.
The memory 402 and the processor 401 are coupled by a bus, which may include any number of interconnected buses and bridges coupling together various circuits of the processor 401 and the memory 402. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 401 may be transmitted over a wireless medium via an antenna, which may also receive data and forward it to the processor 401.
The processor 401 is responsible for managing the bus and general processing and may provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 402 may be used to store data used by processor 401 in performing operations.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, those skilled in the art can understand that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing related hardware; the program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A video processing method, comprising:
acquiring video processing capacity of a currently accessed edge node according to a preset period, wherein the video processing capacity comprises an application list and running time of each application in the application list;
determining a first processing running time required by the processing of a target video processing task at the edge node according to the video processing capacity of the edge node, and determining a second processing running time required by the local processing of the target video processing task;
and if the first processing running time is less than the second processing running time, unloading the target video processing task to the edge node for processing.
2. The video processing method according to claim 1, wherein the running time of the application is calculated from a theoretical running time of the application and an actual available dynamic coefficient of the application;
wherein the actual available dynamic coefficient of the application is calculated from a plurality of actual average run times of the application and an actual load corresponding to the actual average run times.
3. The video processing method according to claim 2, wherein the calculating the running time of the application according to the theoretical running time of the application and the actual available dynamic coefficient of the application comprises:
and calculating the product of the theoretical running time of the application and the actual available dynamic coefficient of the application to obtain the running time of the application.
4. The video processing method according to claim 1, wherein the first processing runtime is calculated according to the following formula:
[Formula image in the original: the expression for the first processing running time CSn.]
wherein CSn represents the first processing running time, n represents the number of applications called by the target video processing task, εn represents the set of applications called by the target video processing task, and di represents the interactive information amount of the ith application and the module calling the ith application; B represents a network bandwidth; Ki represents the running time of the ith application.
5. The video processing method according to any one of claims 1 to 4, further comprising:
if the first processing running time is greater than the second processing running time, obtaining running time CDi required by local completion of each application called by the target video processing task and running time CSi required by completion of the application at the edge node;
and executing, at the edge node, the applications whose CDi is greater than CSi, and executing locally the applications whose CDi is less than or equal to CSi.
6. The video processing method according to claim 5, wherein the second processing runtime is calculated according to the following formula:
wherein CDn represents the second processing running time, Tn represents the running time of a module for processing the target video processing task, n represents the number of applications called by the target video processing task, εn represents the set of applications called by the target video processing task, and Pi represents whether the edge node has the ith application called by the target video processing task.
7. The video processing method according to any one of claims 1 to 4, further comprising:
if the currently accessed edge node changes and the target video processing task is not completed, acquiring a processing mark of the target video processing task, wherein the processing mark is used for indicating a processing party of the target video processing task;
and if the processing party indicated by the processing mark is an edge node, switching the target video processing task to local processing.
8. The video processing method according to claim 7, further comprising, after said switching the target video processing task to local processing:
acquiring the video processing capacity of the currently accessed edge node;
determining a first remaining operation time required by processing the unfinished subtask in the target video processing task in the currently accessed edge node according to the video processing capacity of the currently accessed edge node, and determining a second remaining operation time required by processing the unfinished subtask locally;
and if the first residual running time is less than the second residual running time, unloading the target video processing task to the currently accessed edge node for processing.
9. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing method of any of claims 1 to 8.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the video processing method of any of claims 1 to 8.
CN201910942696.2A 2019-09-30 2019-09-30 Video processing method, electronic device, and storage medium Active CN110856045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910942696.2A CN110856045B (en) 2019-09-30 2019-09-30 Video processing method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910942696.2A CN110856045B (en) 2019-09-30 2019-09-30 Video processing method, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN110856045A true CN110856045A (en) 2020-02-28
CN110856045B CN110856045B (en) 2021-12-07

Family

ID=69597293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910942696.2A Active CN110856045B (en) 2019-09-30 2019-09-30 Video processing method, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN110856045B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259769A (en) * 2021-04-07 2021-08-13 苏州华兴源创科技股份有限公司 Video source switching method and device, electronic equipment and computer readable storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015018146A (en) * 2013-07-12 2015-01-29 株式会社Nttドコモ Function management system and function management method
US20150156276A1 (en) * 2012-09-19 2015-06-04 Tencent Technology (Shenzhen) Company Limited Distributed data-based concurrent processing method and system, and computer storage medium
CN107295110A (en) * 2017-08-16 2017-10-24 网宿科技股份有限公司 Processing method, fringe node, service server and the system of calculating task
US20180178127A1 (en) * 2016-12-22 2018-06-28 Nintendo Co., Ltd. Game development system
CN108933815A (en) * 2018-06-15 2018-12-04 燕山大学 A kind of control method of the Edge Server of mobile edge calculations unloading
CN109558240A (en) * 2018-10-31 2019-04-02 东南大学 A kind of mobile terminal applies the lower task discharging method based on support vector machines
CN109710336A (en) * 2019-01-11 2019-05-03 中南林业科技大学 The mobile edge calculations method for scheduling task of joint energy and delay optimization
CN109767117A (en) * 2019-01-11 2019-05-17 中南林业科技大学 The power distribution method of Joint Task scheduling in mobile edge calculations
CN109905470A (en) * 2019-02-18 2019-06-18 南京邮电大学 A kind of expense optimization method for scheduling task based on Border Gateway system
CN110012039A (en) * 2018-01-04 2019-07-12 华北电力大学 Task distribution and power control scheme in a kind of car networking based on ADMM
CN110109745A (en) * 2019-05-15 2019-08-09 华南理工大学 A kind of task cooperation on-line scheduling method for edge calculations environment
CN110290011A (en) * 2019-07-03 2019-09-27 中山大学 Dynamic Service laying method based on Lyapunov control optimization in edge calculations

Also Published As

Publication number Publication date
CN110856045B (en) 2021-12-07

Similar Documents

Publication Publication Date Title
US11146502B2 (en) Method and apparatus for allocating resource
EP3456032B1 (en) Method and server for controlling relocation of a mec application
Khoda et al. Efficient computation offloading decision in mobile cloud computing over 5G network
US20210377097A1 (en) Communication method and apparatus, entity, and computer-readable storage medium
CN111314741B (en) Video super-resolution processing method and device, electronic equipment and storage medium
US10101910B1 (en) Adaptive maximum limit for out-of-memory-protected web browser processes on systems using a low memory manager
CN109889576A (en) A kind of mobile cloud game method for optimizing resources based on game theory
CN111708642B (en) Processor performance optimization method and device in VR system and VR equipment
CN110673948A (en) Cloud game resource scheduling method, server and storage medium
US20180375739A1 (en) Cache based on dynamic device clustering
CN110633143A (en) Cloud game resource scheduling method, server and storage medium
CN112202829A (en) Social robot scheduling system and scheduling method based on micro-service
US10248321B1 (en) Simulating multiple lower importance levels by actively feeding processes to a low-memory manager
CN110856045B (en) Video processing method, electronic device, and storage medium
CN114205361B (en) Load balancing method and server
CN112463293A (en) Container-based expandable distributed double-queue dynamic allocation method in edge scene
US20230379268A1 (en) Resource scheduling method and system, electronic device, computer readable storage medium
CN112087646B (en) Video playing method and device, computer equipment and storage medium
CN112040332A (en) Method and system for obtaining video content with smooth CDN bandwidth
US20200183747A1 (en) User Presence Prediction Driven Device Management
US20160041791A1 (en) Electronic device, on-chip memory and method of operating the on-chip memory
US20090183172A1 (en) Middleware Bridge System And Method
CN112163985B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111968190B (en) Compression method and device for game map and electronic equipment
US20230195527A1 (en) Workload distribution by utilizing unused central processing unit capacity in a distributed computing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant