WO2023131076A2 - Video processing method, apparatus and system - Google Patents

Video processing method, apparatus and system

Info

Publication number
WO2023131076A2
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
video
super-resolution
processed
Application number
PCT/CN2022/144030
Other languages
English (en)
French (fr)
Other versions
WO2023131076A3 (zh)
Inventor
汤然
蔡尚志
郑龙
Original Assignee
上海哔哩哔哩科技有限公司
Application filed by 上海哔哩哔哩科技有限公司
Publication of WO2023131076A2 publication Critical patent/WO2023131076A2/zh
Publication of WO2023131076A3 publication Critical patent/WO2023131076A3/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments

Definitions

  • the present application relates to the technical field of the Internet, in particular to a video processing method.
  • the present application also relates to a video processing device, a computing device, and a computer-readable storage medium.
  • Current online video websites provide a wealth of video content: users can watch movies, TV dramas, variety shows, live broadcasts, recorded broadcasts, and so on, which greatly enriches users' lives. For some video content, the resolution of the captured video is low because of limitations of the shooting equipment. For example, during a live broadcast the resolution of the anchor's mobile phone is low, while the users watching the live broadcast would like to watch a video with higher resolution and better quality. Video super-resolution technology has developed accordingly.
  • However, video super-resolution requires a higher bit rate for network transmission, which in turn consumes more network bandwidth, and it also requires a great deal of computing power, which consumes more computing resources and increases the operating costs of the website. The inventors therefore realized that how to effectively save network bandwidth and reduce website operating costs, while still ensuring that users can watch higher-definition videos, has become an urgent problem for those skilled in the art.
  • the embodiment of the present application provides a video processing method.
  • The present application also relates to a video processing device, a computing device, and a computer-readable storage medium, so as to solve the problem in the prior art that video super-resolution tasks occupy large amounts of network resources and computing resources.
  • a video processing method which is applied to a first terminal in a first terminal set, and the method includes:
  • the video super-resolution task carries a set of video frames to be processed and video parameter information to be processed
  • a video processing method applied to a third terminal in a third terminal set, comprising: receiving a super-resolution video frame sent by each second terminal in the second terminal set, wherein the super-resolution video frame carries a super-resolution video frame identifier; splicing each super-resolution video frame according to each super-resolution video frame identifier to obtain an initial super-resolution video frame set; and performing temporal smoothing and encoding on the super-resolution video frames in the initial super-resolution video frame set to obtain a target video stream.
  • a video processing system including:
  • the first terminal in the first terminal set is configured to receive a video super-resolution task, obtain a list of terminals to be allocated in response to the video super-resolution task, determine a second terminal set and a third terminal set in the list of terminals to be allocated according to the video parameter information to be processed, determine the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal in the second terminal set, generate a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and send each video super-resolution instruction to the corresponding second terminal according to the corresponding relationship;
  • the second terminal in the second terminal set is configured to determine the target video frame to be processed according to the video super-resolution instruction, and perform video super-resolution processing on the target video frame to be processed to obtain the corresponding target super-resolution video frame, and sending the target super-resolution video frame to each third terminal in the third terminal set;
  • the third terminal in the third terminal set is configured to receive the super-resolution video frame sent by each second terminal in the second terminal set, splice each super-resolution video frame to obtain an initial super-resolution video frame set, and perform temporal smoothing and encoding on the super-resolution video frames in the initial super-resolution video frame set to obtain a target video stream.
  • a video processing apparatus which is applied to a first terminal in a first terminal set, and the apparatus includes:
  • the receiving module is configured to receive a video super-resolution task, wherein the video super-resolution task carries a set of video frames to be processed and video parameter information to be processed;
  • the obtaining module is configured to obtain a terminal list to be allocated in response to the video super-resolution task, and determine a second terminal set in the terminal list to be allocated according to the video parameter information to be processed;
  • a determination module configured to determine a correspondence between each video frame to be processed in the set of video frames to be processed and each second terminal in the set of second terminals;
  • the sending module is configured to generate a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and send each video super-resolution instruction to a corresponding second terminal according to the corresponding relationship.
  • a video processing apparatus which is applied to a third terminal in a third terminal set, and the apparatus includes:
  • the receiving module is configured to receive a super-resolution video frame sent by each second terminal in the second terminal set, wherein the super-resolution video frame carries a super-resolution video frame identifier;
  • the splicing module is configured to splice each super-resolution video frame according to each super-resolution video frame identifier to obtain an initial super-resolution video frame set;
  • the smooth coding module is configured to perform temporal smoothing processing on the super-resolution video frames in the initial super-resolution video frame set and encode to obtain the target video stream.
  • a computing device, including a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor implements the steps of the video processing method when executing the computer instructions.
  • a computer-readable storage medium which stores computer instructions, and implements the steps of the video processing method when the computer instructions are executed by a processor.
  • the video processing method provided by this application is applied to a first terminal in a first terminal set, and includes: receiving a video super-resolution task, wherein the video super-resolution task carries a set of video frames to be processed and video parameter information to be processed; obtaining a terminal list to be allocated in response to the video super-resolution task, and determining a second terminal set in the terminal list to be allocated according to the video parameter information to be processed; determining the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal in the second terminal set; and generating a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and sending each video super-resolution instruction to the corresponding second terminal according to the corresponding relationship.
  • An embodiment of the present application determines the second terminal set in the terminal list to be allocated according to the video parameter information to be processed and assigns each video frame to be processed in the set of video frames to be processed to a second terminal in the second terminal set for video super-resolution processing. Super-resolution is performed on each video frame using the computing power of the individual terminals, and the users' bandwidth is used to distribute traffic, which reduces the bandwidth consumption of the website. At the same time, using the computing power of the second terminals for super-resolution saves operating costs for the website while ensuring that users can watch the super-resolved video.
  • FIG. 1 is a flowchart of a video processing method provided by an embodiment of the present application
  • FIG. 2 is a flow chart of a video processing method provided in the second embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a video processing system provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a video processing device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a video processing device provided by another embodiment of the present application.
  • FIG. 6 is a structural block diagram of a computing device provided by an embodiment of the present application.
  • Although the terms first, second, etc. may be used to describe various information in one or more embodiments of the present application, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, "first" may also be referred to as "second", and similarly, "second" may also be referred to as "first", without departing from the scope of one or more embodiments of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
  • CDN: Content Delivery Network, i.e. a content distribution network.
  • A CDN is an intelligent virtual network built on top of the existing network. Relying on edge servers deployed in various places, and on the load balancing, content distribution, and scheduling modules of a central platform, it enables users to obtain the required content from nearby nodes and reduces network congestion.
  • The key technologies of a CDN mainly include content storage and distribution technology.
  • Super-resolution technology refers to reconstructing a corresponding high-resolution image from an observed low-resolution image, and has important application value in surveillance equipment, satellite imagery, medical imaging, and other fields.
  • Video encoding: the process of re-encoding audio and video.
  • a video processing method is provided.
  • the present application also relates to a video processing device, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments.
  • FIG. 1 shows a flow chart of a video processing method provided according to an embodiment of the present application, which is applied to a first terminal in a first terminal set, and specifically includes the following steps:
  • Step 102 Receive a video super-resolution task, wherein the video super-resolution task carries a set of video frames to be processed and video parameter information to be processed.
  • Video websites provide a wealth of video services, such as movies, TV dramas, and variety shows; users can also watch live broadcasts, recorded broadcasts, and so on, on video websites. In some scenarios the resolution of the source video is low. For example, the resolution of the anchor's mobile phone is only 1080P, while the terminal of the user watching the live broadcast supports 2K or even 4K resolution, and the user hopes to see a higher-definition video. Video super-resolution technology therefore came into being.
  • Video super-resolution refers to improving the resolution of the original image through software or hardware, processing a low-resolution image into a high-resolution image, for example enlarging an image with a resolution of 1920*1080 into an image with a resolution of 4096*2160.
  • However, video super-resolution requires a higher bit rate for network transmission, which consumes more bandwidth, and it requires the terminal to have very strong computing power and consumes more of the terminal's computing resources, which greatly increases the costs of website operators.
  • CDN (Content Delivery Network): a CDN is an intelligent virtual network built on top of the existing network. Relying on edge servers deployed in various places, and on the load balancing, content distribution, and scheduling modules of a central platform, it enables users to obtain the required content from nearby nodes, reduces network congestion, and improves user access response speed and hit rate. The key technologies of a CDN mainly include content storage and distribution technology.
  • In view of this, the video processing method provided by this application splits the video super-resolution task into several subtasks, which are processed by multiple terminals.
  • The first terminal set in this application specifically refers to the set of terminals that perform overall scheduling and management of the video super-resolution task.
  • Each first terminal in the first terminal set is a terminal participating in the video service. For example, in a live broadcast scenario, if there are 30 viewers in a live broadcast room, the first terminal set is composed of some of those 30 viewers' terminals; if there are 3000 viewers in a live broadcast room, the first terminal set is composed of some of those 3000 viewers' terminals.
  • Terminals with average performance can be selected for the first terminal set.
  • Specifically, the performance of the terminals corresponding to the target video task for which permission to obtain terminal attribute information has been granted can be counted and sorted from low to high, and a preset number of terminals can be selected as first terminals. For example, if there are 30 viewers in a live broadcast room, the 30 terminals are sorted in order of terminal performance from low to high, and the top 10 terminals are selected as the first terminal set, that is, there are 10 first terminals in the first terminal set.
  • The video super-resolution task specifically refers to a task, issued by the upstream service, of performing super-resolution processing on video frames; that is, users want to increase the resolution of the original video.
  • The terminal performance of the first terminal is relatively weak, so it is responsible for the overall management of the video super-resolution task and, according to the task, distributes the video frames to be processed to second terminals with stronger performance; the actual video super-resolution operation is performed by the second terminals.
  • Specifically, the first terminal receives the video super-resolution task issued by the upstream video service, and the video super-resolution task carries the set of video frames to be processed and the video parameter information to be processed. The set of video frames to be processed refers to the set of video frames that a given first terminal needs to schedule for processing, and the video parameter information to be processed refers to the parameter information needed to process the video, including video frame rate information, original resolution information, target resolution information, and so on.
  • For example, if the original resolution of the video to be processed is 1080P and it needs to be super-resolved to a target resolution of 2K, the video parameter information to be processed includes both the original resolution and the target resolution.
  • Following this example, the first terminal 1 receives video super-resolution task 1, which carries a set of video frames to be processed (video frames 1-10); the first terminal 2 receives video super-resolution task 2, which carries a set of video frames to be processed (video frames 11-20); and the first terminal 3 receives video super-resolution task 3, which carries a set of video frames to be processed (video frames 21-30).
  • The video parameter information to be processed indicates that each video frame should be super-resolved from the original resolution of 1280*720 to the target resolution of 3840*2160. A sketch of such a task payload is given below.
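  • For illustration only, the following Python sketch shows one possible shape of the task payload described above; the type and field names (VideoParams, SuperResolutionTask, frames, params, and so on) are assumptions for the example and are not part of the claimed method.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VideoParams:
    """Video parameter information to be processed (illustrative field names)."""
    frame_rate: int                  # e.g. 30 frames per second
    src_resolution: Tuple[int, int]  # original resolution, e.g. (1280, 720)
    dst_resolution: Tuple[int, int]  # target resolution, e.g. (3840, 2160)

@dataclass
class SuperResolutionTask:
    """A video super-resolution task as received by a first terminal."""
    frames: List[bytes]   # set of video frames to be processed (encoded frame data)
    params: VideoParams   # video parameter information to be processed

# Example: the task that first terminal 1 might receive in the scenario above.
task = SuperResolutionTask(
    frames=[b"frame-%d" % i for i in range(1, 11)],  # placeholder payloads for frames 1-10
    params=VideoParams(frame_rate=30,
                       src_resolution=(1280, 720),
                       dst_resolution=(3840, 2160)),
)
```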
  • Step 104 Obtain a terminal list to be allocated in response to the video super-resolution task, and determine a second terminal set in the terminal list to be allocated according to the video parameter information to be processed.
  • The list of terminals to be allocated specifically refers to the list of terminals that correspond to the target video task, excluding the first terminal set.
  • For example, if there are 30 viewers in a live broadcast room and 10 of their terminals form the first terminal set, the list of the other 20 terminals is the list of terminals to be allocated; if there are 500 viewers in a live broadcast room, among which 30 terminals form the first terminal set, the list of the other 470 terminals is the list of terminals to be allocated.
  • a second set of terminals is determined in the list of terminals to be allocated.
  • the second set of terminals refers to a set of terminals used to perform super-resolution processing on video frames to be processed.
  • The terminal that super-resolves the video frames to be processed needs to have strong computing power, that is, strong terminal performance. In practical applications, the video parameter information to be processed must also be taken into account when determining the second terminal set. For example, for the same terminal, the computing power required to super-resolve a 720P video frame to 2K resolution is higher than the computing power required to super-resolve a 720P video frame to 1080P resolution.
  • That is, the same terminal may qualify as a second terminal when super-resolving a 720P video frame to a 1080P video frame, but not when super-resolving a 720P video frame to a 4K video frame.
  • determining the second terminal set in the terminal list to be allocated according to the video parameter information to be processed includes:
  • the second terminal set is determined according to the terminal performance weight of each terminal.
  • the second terminal set is determined according to the terminal performance weight of each terminal, including:
  • a preset number of terminals are selected as second terminals, or terminals whose performance weight exceeds a preset threshold are selected as second terminals.
  • the terminal attribute information of each terminal in the to-be-allocated list is obtained.
  • the terminal attribute information includes the terminal's CPU model, memory size, available resource information, and so on.
  • the terminal performance weight of each terminal is calculated according to the video parameter information to be processed and the terminal attribute information of each terminal.
  • Optionally, the terminal performance weight of each terminal can be determined by calculating a terminal computing power score for each terminal. After the terminal performance weight of each terminal is determined, a preset number of terminals are selected as second terminals in order of terminal performance weight from high to low, forming the second terminal set.
  • The number of second terminals can be preset, or it can be determined according to the frame rate of the video. For example, the number of second terminals in the second terminal set can be preset to 60; or, if the frame rate of the acquired video is 30 frames per second, the number of second terminals can be determined to be 30; alternatively, if the frame rate of the video is 30 frames per second and there are 100 terminals in the terminal list to be allocated, the top 60 terminals with the highest terminal performance can be selected as the second terminal set.
  • This application places no limitation on the specific manner of determining the second terminals.
  • For example, the computing power value of each terminal is calculated according to the terminal attribute information of each terminal and the video parameter information to be processed, and the terminals are sorted in descending order of computing power value; after sorting, the top 30 terminals are selected as second terminals.
  • Alternatively, the computing power value of each terminal is calculated according to the terminal attribute information of each terminal and the video parameter information to be processed, and the values are sorted in descending order; after sorting, the terminals whose computing power value exceeds a preset threshold are selected as second terminals. A sketch of both selection variants is given below.
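  • The following Python sketch illustrates, under assumptions, how a performance weight could be computed from terminal attribute information and the video parameter information to be processed (reusing the VideoParams shape from the earlier sketch), and how second terminals could then be selected either by top-N or by threshold; the scoring formula and attribute names (cpu_score, memory_gb, free_ratio) are illustrative assumptions, not the claimed algorithm.

```python
from typing import Dict, List, Optional

def performance_weight(attrs: Dict[str, float], params) -> float:
    """Illustrative score combining terminal attributes with the demands of the task."""
    # Higher CPU score, more memory and more free resources -> higher base weight.
    base = attrs["cpu_score"] * 0.5 + attrs["memory_gb"] * 0.3 + attrs["free_ratio"] * 0.2
    # Scale down by how demanding the super-resolution is (target pixels / source pixels).
    sw, sh = params.src_resolution
    tw, th = params.dst_resolution
    return base / ((tw * th) / (sw * sh))

def select_second_terminals(candidates: Dict[str, Dict[str, float]], params,
                            top_n: Optional[int] = None,
                            threshold: Optional[float] = None) -> List[str]:
    """Pick second terminals as the top-N by weight, or all terminals above a threshold.
    Exactly one of top_n / threshold is expected to be given."""
    weights = {tid: performance_weight(a, params) for tid, a in candidates.items()}
    ordered = sorted(weights, key=weights.get, reverse=True)  # high to low
    if top_n is not None:
        return ordered[:top_n]
    return [tid for tid in ordered if weights[tid] >= threshold]
```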
  • the second terminal in the second terminal set is the terminal that performs the super-resolution operation on the video frames to be processed.
  • However, the individually super-resolved video frames cannot by themselves form a smooth and coherent video; a terminal is also needed to splice the video frames that have completed super-resolution processing. Therefore, the video processing method provided by this application further includes: determining a third terminal set in the list of terminals to be allocated.
  • the third terminal set specifically refers to the terminals used to stitch the video frames that complete the super-resolution task into a video.
  • The third terminal set is also composed of terminals corresponding to the target video task; that is, the first terminal set, the second terminal set, and the third terminal set mentioned in this application are all composed of terminals corresponding to the same target video task.
  • For example, if 80 users correspond to the target video task, the terminals used by these 80 users are sorted according to their terminal attribute information: terminals with poor performance weights are used as the first terminal set, terminals with better performance weights are used as the second terminal set, and terminals with medium performance weights are used as the third terminal set.
  • Here, "poor", "medium", and "better" terminal performance weights are all relative. The terminals corresponding to the target video service are sorted according to terminal performance weight, and preset numbers (or proportions) of terminals are selected to form the first terminal set, the second terminal set, and the third terminal set.
  • Following the above example, there are 3 first terminals in the first terminal set, and 30 second terminals are determined to form the second terminal set according to the video super-resolution task, wherein second terminals 1-10 correspond to first terminal 1, second terminals 11-20 correspond to first terminal 2, and second terminals 21-30 correspond to first terminal 3.
  • In addition, five third terminals are determined in the third terminal set, namely third terminals 1-5.
  • Step 106 Determine the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal in the set of second terminals.
  • each video frame to be processed in the video frame set to be processed needs to be sent to the corresponding second terminal set for super-resolution processing.
  • In this application, a video super-resolution task is divided into multiple subtasks, and multiple terminals perform super-resolution processing on the video frames respectively; therefore, it is necessary to determine which second terminal performs super-resolution processing on each video frame to be processed.
  • determining the corresponding relationship between each video frame to be processed in the video frame set to be processed and each second terminal in the second terminal set includes:
  • each first terminal can know the number of second terminals corresponding to itself.
  • Specifically, the first terminal determines the number of video frames to be processed in the received set of video frames to be processed, and the number of second terminals is related to that number. For example, if there are n video frames to be processed in the set, the second terminal set corresponding to the first terminal usually includes n second terminals; in some cases, the second terminal set may instead include n/2 terminals.
  • For example, if the set of video frames to be processed received by the first terminal contains 15 video frames to be processed, the second terminal set corresponding to the first terminal usually includes 15 second terminals.
  • The video frames to be processed are labeled; for example, if there are n video frames to be processed, they are labeled 1 to n. Correspondingly, the second terminal set includes n second terminals, each of which is also labeled 1 to n. Video frame 1 to be processed then corresponds to second terminal 1, video frame 2 to be processed corresponds to second terminal 2, ..., and video frame n to be processed corresponds to second terminal n.
  • Following the above example, the first terminal 1 receives video frames 1-10 to be processed and corresponds to second terminals 1-10; the first terminal 2 receives video frames 11-20 to be processed and corresponds to second terminals 11-20; the first terminal 3 receives video frames 21-30 to be processed and corresponds to second terminals 21-30. Video frame 1 to be processed therefore corresponds to second terminal 1, video frame 2 to be processed corresponds to second terminal 2, ..., and video frame 30 to be processed corresponds to second terminal 30. A sketch of this assignment is given below.
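  • As an illustration only, the following Python sketch assigns labeled frames to labeled second terminals in the one-to-one manner described above, wrapping around when there are fewer terminals than frames (e.g. the n/2 case); the function name and wrap-around policy are assumptions for the example.

```python
from typing import Dict, List

def assign_frames(frame_ids: List[int], second_terminals: List[str]) -> Dict[int, str]:
    """Map to-be-processed frame labels onto second terminals by index.

    With n frames and n terminals this is the one-to-one correspondence from the
    text; with fewer terminals (e.g. n/2) assignment wraps around so each second
    terminal handles more than one frame.
    """
    return {fid: second_terminals[i % len(second_terminals)]
            for i, fid in enumerate(frame_ids)}

# Example: first terminal 1 handles frames 1-10 with second terminals 1-10.
mapping = assign_frames(list(range(1, 11)),
                        [f"second-terminal-{k}" for k in range(1, 11)])
print(mapping[1])   # second-terminal-1
print(mapping[10])  # second-terminal-10
```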
  • Step 108 Generate a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and send each video super-resolution instruction to a corresponding second terminal according to the corresponding relationship.
  • After the corresponding relationship between each video frame to be processed and each second terminal is determined, a video super-resolution instruction is generated from each video frame to be processed and the video parameter information to be processed, and the video super-resolution instruction is sent to the second terminal corresponding to that video frame.
  • the video super-resolution instruction specifically refers to the instruction for super-resolution processing of the video frame.
  • The video super-resolution instruction usually carries the video frame to be processed, the video parameter information to be processed, and a second terminal identifier; that is, the video super-resolution instruction is sent to the second terminal identified by the second terminal identifier, so that the second terminal obtains the video frame to be processed and the video parameter information to be processed carried in the instruction and, in response to the instruction, performs super-resolution processing on the video frame according to the original resolution information and target resolution information in the video parameter information to be processed.
  • In practical applications, the third terminal set is also determined. The third terminal set is used to receive each super-resolved video frame and splice the video frames to generate the target video. Therefore, generating a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed includes:
  • a video super-resolution instruction is generated according to each video frame to be processed, the video parameter information to be processed, and the third terminal set.
  • In this way, after the second terminal corresponding to a video frame to be processed completes the video super-resolution operation according to the video frame and the video parameter information to be processed, it can send the super-resolved video frame to the third terminals for video splicing.
  • Following the above example, take video frame 1 to be processed as an example.
  • Video frame 1, the video parameter information to be processed ("super-resolve the video frame from the original resolution of 1280*720 to the target resolution of 3840*2160"), and the identifiers of third terminals 1-5 form video super-resolution instruction 1, which is sent to second terminal 1 so that second terminal 1 performs the video super-resolution operation on video frame 1, obtains target video frame 1, and sends target video frame 1 to third terminals 1-5 respectively. A sketch of this dispatch step is given below.
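  • Purely as an illustration, the following Python sketch builds one instruction per frame and hands it to a transport callback; the instruction fields and the send callback are assumptions, reusing the VideoParams/SuperResolutionTask shapes from the earlier sketch, and are not the claimed message format.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SuperResolutionInstruction:
    """Illustrative contents of one video super-resolution instruction."""
    frame_id: int               # label of the video frame to be processed
    frame: bytes                # the video frame data itself
    params: "VideoParams"       # original/target resolution and frame rate
    third_terminals: List[str]  # where the super-resolved frame should be sent

def dispatch(task: "SuperResolutionTask",
             mapping: Dict[int, str],
             third_terminals: List[str],
             send: Callable[[str, SuperResolutionInstruction], None]) -> None:
    """Generate one instruction per to-be-processed frame and send it to the
    second terminal chosen by the frame-to-terminal mapping."""
    for frame_id, frame in enumerate(task.frames, start=1):
        instruction = SuperResolutionInstruction(frame_id, frame, task.params, third_terminals)
        send(mapping[frame_id], instruction)
```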
  • The video processing method provided in the embodiment of the present application is applied to the first terminal in the first terminal set and includes: receiving a video super-resolution task, wherein the video super-resolution task carries a set of video frames to be processed and video parameter information to be processed; obtaining a terminal list to be allocated in response to the video super-resolution task, and determining a second terminal set in the terminal list to be allocated according to the video parameter information to be processed; determining the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal in the second terminal set; and generating a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and sending each video super-resolution instruction to the corresponding second terminal according to the corresponding relationship.
  • The second terminal set is determined in the terminal list to be allocated according to the video parameter information to be processed, so that the second terminals perform super-resolution processing on the video frames. Super-resolution is performed on each video frame using the computing power of the individual terminals, and the users' bandwidth is used to distribute traffic, which reduces the bandwidth consumption of the website. At the same time, using the computing power of the second terminals for super-resolution saves operating costs for the website while ensuring that users can watch the super-resolved video.
  • FIG. 2 shows the video processing method provided by the second embodiment of the present application.
  • the video processing method provided by this embodiment is applied to the third terminal in the third terminal set, specifically including steps 202 to 206:
  • Step 202 Receive a super-resolution video frame sent by each second terminal in the second terminal set, wherein the super-resolution video frame carries a super-resolution video frame identifier.
  • the third terminal set specifically refers to the terminal used to splice the video frames that complete the super-resolution task to generate the target video.
  • Specifically, each third terminal in the third terminal set receives the super-resolved video frames sent by the second terminals, and each super-resolution video frame carries a super-resolution video frame identifier corresponding to that video frame.
  • Following the above example, the second terminal set contains a total of 30 second terminals, so third terminal 1 receives the 30 super-resolution video frames sent by those 30 second terminals.
  • Step 204 Splicing each super-resolution video frame according to each super-resolution video frame identifier to obtain an initial super-resolution video frame set.
  • For each third terminal, the super-resolution video frames sent by the second terminals are received, and the received super-resolution video frames are then sorted and spliced according to the identifier of each super-resolution video frame to obtain the initial super-resolution video frame set.
  • That is, the initial super-resolution video frame set stores the video frames that have been super-resolved by the individual second terminals.
  • Following the above example, third terminal 1 receives 30 super-resolution video frames and sorts and splices them in the order 1-30 to obtain the initial super-resolution video frame set (super-resolution video frame 1, super-resolution video frame 2, ..., super-resolution video frame 30). A sketch of this splicing step is given below.
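  • A minimal sketch of splicing by identifier, assuming the frames arrive as an identifier-to-frame mapping; the container choice is an assumption for the example.

```python
from typing import Dict, List

def splice_super_resolution_frames(received: Dict[int, bytes]) -> List[bytes]:
    """Order received super-resolution frames by their frame identifier to form
    the initial super-resolution video frame set."""
    return [received[fid] for fid in sorted(received)]

# Example: five frames arriving out of order from different second terminals.
received = {fid: b"sr-frame-%d" % fid for fid in [3, 1, 2, 5, 4]}
initial_set = splice_super_resolution_frames(received)  # ordered frames 1..5
```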
  • Step 206 Perform time-domain smoothing processing on the super-resolution video frames in the initial super-resolution video frame set and encode to obtain a target video stream.
  • Because each super-resolution video frame is processed by a different second terminal, the second terminal does not refer to adjacent video frames when performing super-resolution, so the picture will be incoherent after multiple super-resolution video frames are spliced. Therefore, a smoothing operation also needs to be performed on the super-resolution video frames of the initial super-resolution video frame set as a whole, so that the super-resolution video frames are more coherent and smooth. Afterwards, encoding is performed to obtain a target video stream that can be played directly, and the target video stream is distributed to the terminals of other users.
  • In practical applications, each third terminal performs the operation of merging the super-resolution video frames to generate the target video stream. One of the multiple third terminals can be selected as the target third terminal, which sends the target video stream to other users, while the remaining third terminals serve as backup third terminals; if the target third terminal cannot send the stream in time, a backup third terminal sends the target video stream to the other users, ensuring the timeliness of video super-resolution.
  • Each third terminal can also upload the target video stream to a corresponding CDN node, changing from a single CDN node to multiple CDN nodes. This reduces the bandwidth consumption of the video website, improves the timeliness with which viewers watch super-resolution videos, improves user experience, and reduces the operating costs of the video website.
  • the time-domain smoothing process is performed on each super-resolution video frame of the initial super-resolution video frame set, including:
  • time-domain smoothing processing is also required for the initial super-resolution video frame set.
  • Specifically, a target smoothing processing strategy is determined in a smoothing processing strategy library.
  • The smoothing processing strategy library specifically refers to a database of smoothing strategies; it stores a variety of strategies for smoothing video processing, such as an optical flow processing strategy, a video frame smoothing model strategy, a video smoothing filter strategy, and so on.
  • The optical flow method uses the changes of pixels in the image sequence over the time domain and the correlation between adjacent frames to find the corresponding relationship between the previous frame and the current frame, so as to calculate the motion information of objects between adjacent frames and thereby achieve a smoothing effect between video frames.
  • The video frame smoothing model strategy inputs two adjacent video frames into an intelligent AI model, and the model eliminates obvious differences between the two adjacent video frames, making the transition between them smoother.
  • the video smoothing filter strategy specifically refers to eliminating obvious differences between adjacent video frames by means of a smoothing filter, so as to make the transition between adjacent video frames smoother.
  • The target smoothing strategy can be selected from the smoothing strategy library according to the performance of the third terminal. For example, when the performance weight of the third terminal is relatively high, the video frame smoothing model strategy can be selected for video smoothing; when the performance weight of the third terminal is low, the video smoothing filter strategy or the optical flow processing strategy can be used to smooth the video.
  • Specifically, the super-resolution video frames in the initial super-resolution video frame set can be smoothed in the time domain according to the target smoothing strategy, so that the transitions between super-resolution video frames are smoother, more natural, and more coherent. While improving frame quality, this also ensures the smoothness of the video, so that users can experience the high-definition picture quality after super-resolution, improving user experience. A sketch of strategy selection and a simple temporal filter is given below.
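  • As a rough sketch only: strategy selection by performance weight, plus a trivial exponential blend standing in for the video smoothing filter strategy; the threshold, blend factor, and use of numpy arrays are assumptions, and a real implementation would use one of the strategies named above.

```python
import numpy as np

def choose_smoothing_strategy(performance_weight: float, high_threshold: float = 0.8) -> str:
    """Pick the target smoothing strategy from the strategy library by terminal performance."""
    if performance_weight >= high_threshold:
        return "frame_smoothing_model"   # AI model for high-performance third terminals
    return "smoothing_filter"            # cheaper filter (or optical flow) otherwise

def temporal_smooth(frames: list, alpha: float = 0.7) -> list:
    """Blend each frame with the previous smoothed frame so transitions between
    independently super-resolved frames become less abrupt (frames are numpy arrays)."""
    smoothed = [frames[0].astype(np.float32)]
    for frame in frames[1:]:
        smoothed.append(alpha * frame.astype(np.float32) + (1 - alpha) * smoothed[-1])
    return smoothed
```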
  • The video processing method provided by the embodiment of the present application is applied to the third terminal in the third terminal set and includes: receiving a super-resolution video frame sent by each second terminal in the second terminal set, wherein the super-resolution video frame carries a super-resolution video frame identifier; splicing each super-resolution video frame according to each super-resolution video frame identifier to obtain an initial super-resolution video frame set; and performing temporal smoothing and encoding on the super-resolution video frames in the initial super-resolution video frame set to obtain the target video stream.
  • The super-resolution video frames are spliced and distributed to the clients of the viewers, which reduces the bandwidth consumption of the video website, improves the timeliness with which viewers watch super-resolution videos, improves user experience, and reduces the operating costs of the video website.
  • The time-domain smoothing of the super-resolution video frames makes the transitions between super-resolution video frames smoother, more natural, and more coherent, so that users can experience the high-definition picture quality after super-resolution, improving user experience.
  • FIG. 3 provides a schematic diagram of the architecture of a video processing system provided by an embodiment of the present application.
  • The video processing system provided by the present application includes a first terminal set 302, a second terminal set 304, and a third terminal set 306, wherein:
  • the first terminal in the first terminal set 302 is configured to receive a video super-resolution task, obtain a list of terminals to be allocated in response to the video super-resolution task, determine a second terminal set and a third terminal set in the list of terminals to be allocated according to the video parameter information to be processed, determine the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal in the second terminal set, generate a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and send each video super-resolution instruction to the corresponding second terminal according to the corresponding relationship;
  • the second terminal in the second terminal set 304 is configured to determine the target video frame to be processed according to the video super-resolution instruction, and perform video super-resolution processing on the target video frame to be processed to obtain the corresponding target super-resolution video frame, and sending the target super-resolution video frame to each third terminal in the third terminal set;
  • the third terminal in the third terminal set 306 is configured to receive the super-resolution video frame sent by each second terminal in the second terminal set, splice each super-resolution video frame to obtain an initial super-resolution video frame set, and perform temporal smoothing and encoding on the super-resolution video frames in the initial super-resolution video frame set to obtain a target video stream.
  • In the video processing system provided by this application, the terminal corresponding to each user can be either a resource acquirer or a resource provider.
  • Specifically, it is first determined which users have granted permission to obtain their terminal information. After the terminal information of those users is collected, the terminals are sorted according to performance and divided, in order of performance from low to high, into the first terminal set, the third terminal set, and the second terminal set, wherein the terminal performance of the second terminals in the second terminal set is higher than that of the third terminals in the third terminal set, and the terminal performance of the third terminals in the third terminal set is higher than that of the first terminals in the first terminal set. The first terminal in the first terminal set is used for the overall management of the super-resolution task; the second terminal in the second terminal set is used to super-resolve the video frames to be processed to obtain super-resolution video frames; and the third terminal in the third terminal set is used to receive the super-resolution video frames sent by the second terminals, splice them to generate a target video stream, and distribute the target video stream to other users. A sketch of this partitioning is given below.
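  • The following Python sketch partitions consenting terminals into the three sets by sorted performance; the equal-thirds split is an assumption for illustration — the text only requires that second terminals outperform third terminals, which in turn outperform first terminals.

```python
from typing import Dict, List, Tuple

def partition_terminals(terminal_perf: Dict[str, float]) -> Tuple[List[str], List[str], List[str]]:
    """Split consenting terminals into first/second/third sets by performance."""
    ordered = sorted(terminal_perf, key=terminal_perf.get)  # low to high
    n = len(ordered)
    first_set = ordered[: n // 3]             # weakest: overall task management
    third_set = ordered[n // 3: 2 * n // 3]   # medium: splicing, smoothing, encoding
    second_set = ordered[2 * n // 3:]         # strongest: per-frame super-resolution
    return first_set, second_set, third_set
```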
  • Correspondingly, the first terminal determines the second terminal set and the third terminal set according to the video super-resolution task and, according to the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal, generates a video super-resolution instruction and sends each video frame to be processed to the corresponding second terminal.
  • For example, if the video to be processed consists of video frames 1 to t, the second terminals are divided into t groups numbered 1 to t, and the video frames are sent to the correspondingly numbered groups: video frame 1 to be processed is sent to the first group, video frame 2 to be processed is sent to the second group, and so on.
  • After each second terminal in the second terminal set receives its video frame to be processed, it performs the super-resolution task on that single video frame, obtains a super-resolution video frame, marks the super-resolution video frame, and sends it to the third terminals in the third terminal set.
  • For example, if the frame rate of the video is 30 frames per second, a single terminal cannot complete the super-resolution of 30 frames within 1 second and therefore cannot meet the requirement of real-time super-resolution. In that case, 30 terminals that each process one frame in less than 1 second can be selected, so that the super-resolution processing of 30 video frames to be processed is completed within 1 second and the real-time requirement is met; a capacity sketch follows below. Each second terminal obtains its super-resolution video frame and sends it to each third terminal in the third terminal set.
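  • A back-of-the-envelope sketch of that capacity argument, assuming every second terminal takes the same per-frame processing time; the helper name is illustrative.

```python
import math

def second_terminals_needed(frame_rate_fps: int, per_frame_seconds: float) -> int:
    """Number of second terminals needed so that one second of video
    (frame_rate_fps frames) is super-resolved within one second."""
    return math.ceil(frame_rate_fps * per_frame_seconds)

print(second_terminals_needed(30, 1.0))  # 30 terminals, as in the example above
print(second_terminals_needed(30, 0.9))  # 27 terminals if each frame takes 0.9 s
```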
  • Each third terminal in the third terminal set receives the super-resolution video frames sent by the second terminals, splices them according to the identifier of each super-resolution video frame to obtain the initial super-resolution video frame set, performs time-domain smoothing and encoding on the super-resolution video frames in the set to obtain a target video stream, and then sends the target video stream to the terminals of other users.
  • In practical applications, the first terminal, the second terminal, and the third terminal may share the same attribute information; for example, they may belong to the same operator, or be located in the same region, and so on.
  • The video processing system determines the second terminal set in the terminal list to be allocated according to the video parameter information to be processed and allocates each video frame to be processed in the set of video frames to be processed to a second terminal in the second terminal set for video super-resolution processing. Super-resolution is performed on each video frame using the computing power of the individual terminals, and the users' bandwidth is used to distribute traffic, which reduces the bandwidth consumption of the website. At the same time, using the computing power of the second terminals for super-resolution saves operating costs for the website while ensuring that users can watch the super-resolved video.
  • The super-resolution processing of the video frames is performed on multiple second terminals: a whole task is divided into multiple subtasks that are processed in parallel by multiple terminals and completed at the same time, achieving real-time super-resolution and improving the user experience, while the high requirements on a single terminal are averaged across multiple second terminals, reducing the investment cost of video websites.
  • the third terminal splices the super-resolution video frames and distributes them to the clients of each viewer, which reduces the bandwidth consumption of video websites, improves the timeliness of viewers watching super-resolution videos, improves user experience, and reduces operating costs of video sites.
  • The time-domain smoothing of the super-resolution video frames makes the transitions between super-resolution video frames smoother, more natural, and more coherent, so that users can experience the high-definition picture quality after super-resolution, improving user experience.
  • Corresponding to the above method embodiment, the present application also provides an embodiment of a video processing device applied to the first terminal in the first terminal set. FIG. 4 shows a schematic structural diagram of a video processing device provided by an embodiment of the present application. As shown in FIG. 4, the device includes:
  • the receiving module 402 is configured to receive a video super-resolution task, wherein the video super-resolution task carries a set of video frames to be processed and video parameter information to be processed;
  • the obtaining module 404 is configured to obtain a terminal list to be allocated in response to the video super-resolution task, and determine a second terminal set in the terminal list to be allocated according to the video parameter information to be processed;
  • the determination module 406 is configured to determine the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal in the set of second terminals;
  • the sending module 408 is configured to generate a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and send each video super-resolution instruction to a corresponding second terminal according to the corresponding relationship.
  • the acquisition module 404 is further configured to:
  • the second terminal set is determined according to the terminal performance weight of each terminal.
  • the acquisition module 404 is further configured to:
  • a preset number of terminals is selected as the second terminal or a terminal whose performance weight exceeds a preset threshold is selected as the second terminal.
  • the determining module 406 is further configured to:
  • the device also includes:
  • a terminal determining module configured to determine a third set of terminals in the list of terminals to be allocated.
  • the sending module 408 is further configured to:
  • a video super-resolution instruction is generated according to each video frame to be processed, the video parameter information to be processed, and the third terminal set.
  • the video parameter information to be processed includes video frame rate information, original resolution information, and target resolution information.
  • The video processing device provided in the embodiment of the present application is applied to the first terminal in the first terminal set and is configured to: receive a video super-resolution task, wherein the video super-resolution task carries a set of video frames to be processed and video parameter information to be processed; obtain a terminal list to be allocated in response to the video super-resolution task, and determine a second terminal set in the terminal list to be allocated according to the video parameter information to be processed; determine the corresponding relationship between each video frame to be processed in the set of video frames to be processed and each second terminal in the second terminal set; and generate a video super-resolution instruction according to each video frame to be processed and the video parameter information to be processed, and send each video super-resolution instruction to the corresponding second terminal according to the corresponding relationship.
  • The second terminal set is determined in the terminal list to be allocated according to the video parameter information to be processed, so that the second terminals perform super-resolution processing on the video frames. Super-resolution is performed on each video frame using the computing power of the individual terminals, and the users' bandwidth is used to distribute traffic, which reduces the bandwidth consumption of the website. At the same time, using the computing power of the second terminals for super-resolution saves operating costs for the website while ensuring that users can watch the super-resolved video.
  • the foregoing is a schematic solution of a video processing apparatus applied to the first terminal in the first terminal set in this embodiment.
  • The technical solution of the video processing device belongs to the same concept as the technical solution of the above video processing method applied to the first terminal in the first terminal set. For details not described in detail in the technical solution of the device, refer to the above description of the technical solution of the video processing method applied to the first terminal in the first terminal set.
  • Corresponding to the above method embodiment, the present application also provides an embodiment of a video processing device applied to the third terminal in the third terminal set. FIG. 5 shows a schematic structural diagram of another video processing device provided by an embodiment of the present application. As shown in FIG. 5, the device includes:
  • the receiving module 502 is configured to receive a super-resolution video frame sent by each second terminal in the second terminal set, wherein the super-resolution video frame carries a super-resolution video frame identifier;
  • the splicing module 504 is configured to splice each super-resolution video frame according to each super-resolution video frame identifier to obtain an initial super-resolution video frame set;
  • the smoothing encoding module 506 is configured to perform temporal smoothing processing on the super-resolution video frames in the initial super-resolution video frame set and encode them to obtain a target video stream.
  • Optionally, the smoothing encoding module 506 is further configured to determine a target smoothing strategy in a smoothing processing strategy library, wherein the smoothing processing strategy library includes an optical flow processing strategy, a video frame smoothing model strategy, and a video smoothing filter strategy.
  • The video processing device provided in this embodiment is applied to the third terminal in the third terminal set and is configured to: receive a super-resolution video frame sent by each second terminal in the second terminal set, wherein the super-resolution video frame carries a super-resolution video frame identifier; splice each super-resolution video frame according to each super-resolution video frame identifier to obtain an initial super-resolution video frame set; and perform temporal smoothing and encoding on the super-resolution video frames in the initial super-resolution video frame set to obtain the target video stream.
  • The super-resolution video frames are spliced and distributed to the clients of the viewers, which reduces the bandwidth consumption of the video website, improves the timeliness with which viewers watch super-resolution videos, improves user experience, and reduces the operating costs of the video website.
  • The time-domain smoothing of the super-resolution video frames makes the transitions between super-resolution video frames smoother, more natural, and more coherent, so that users can experience the high-definition picture quality after super-resolution, improving user experience.
  • the foregoing is a schematic solution of a video processing apparatus applied to a third terminal in the third terminal set in this embodiment.
  • the technical solution of this video processing device is based on the same concept as the technical solution of the video processing method applied to the third terminal in the third terminal set described above; for details not described in the technical solution of the device, refer to the description of the technical solution of that video processing method.
  • FIG. 6 shows a structural block diagram of a computing device 600 provided according to an embodiment of the present application.
  • Components of the computing device 600 include, but are not limited to, memory 610 and processor 620 .
  • the processor 620 is connected to the memory 610 through the bus 630, and the database 650 is used for saving data.
  • Computing device 600 also includes an access device 640 that enables computing device 600 to communicate via one or more networks 660 .
  • Examples of these networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet.
  • Access device 640 may include one or more of any type of wired or wireless network interface (e.g., a network interface card (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (Wi-MAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so on.
  • the above-mentioned components of the computing device 600 and other components not shown in FIG. 6 may also be connected to each other, for example, through a bus. It should be understood that the structural block diagram of the computing device shown in FIG. 6 is only for the purpose of illustration, rather than limiting the scope of the application. Those skilled in the art can add or replace other components as needed.
  • Computing device 600 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., a tablet computer, personal digital assistant, laptop computer, notebook computer, netbook, etc.), a mobile telephone (e.g., a smartphone), a wearable computing device (e.g., a smart watch, smart glasses, etc.), or another type of mobile device, or a stationary computing device such as a desktop computer or PC.
  • Computing device 600 may also be a mobile or stationary server.
  • the processor 620 implements the steps of the foregoing video processing method when executing the computer instructions.
  • An embodiment of the present application also provides a computer-readable storage medium, which stores computer instructions that, when executed by a processor, implement the steps of the aforementioned video processing method.
  • the computer instructions include computer program code, which may be in source code form, object code form, an executable file, or some intermediate form.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application provides a video processing method and apparatus. The video processing method is applied to a first terminal in a first terminal set and includes: receiving a video super-resolution task, wherein the video super-resolution task carries a set of video frames to be processed and parameter information of the video to be processed; obtaining a list of terminals to be allocated in response to the video super-resolution task, and determining a second terminal set from the list of terminals to be allocated according to the parameter information of the video to be processed; determining a correspondence between each video frame to be processed in the set of video frames to be processed and each second terminal in the second terminal set; and generating a video super-resolution instruction according to each video frame to be processed and the parameter information of the video to be processed, and sending each video super-resolution instruction to the corresponding second terminal according to the correspondence. With this method, each video frame is super-resolved by the computing power of each second terminal, which saves the operating cost of the website while ensuring that users can watch the super-resolved video.

Description

视频处理方法、装置及系统
本申请申明2022年01月04日递交的申请号为202210006283.5、名称为“视频处理方法、装置及系统”的中国专利申请的优先权,该中国专利申请的整体内容以参考的方式结合在本申请中。
技术领域
本申请涉及互联网技术领域,特别涉及视频处理方法。本申请同时涉及视频处理装置,一种计算设备,以及一种计算机可读存储介质。
背景技术
目前的在线视频网站,提供了丰富的视频业务内容,用户可以观看电影、电视剧、综艺,也可以观看直播、录播等等,极大丰富了用户的业务生活,有一些视频业务内容由于视频拍摄设备的原因,拍摄的视频分辨率较低,例如在直播过程中,主播的手机分辨率较低,而观看直播的用户希望观看到分辨率更高、质量更好的视频,基于此,视频超分技术随之得到了发展。
但是,视频超分技术会导致需要更高的码率来进行网络传输,进而会导致消耗更多的网络带宽,而且视频超分技术需要很大的算力,需要消耗更多的计算资源,提高网站的运营成本。因此,本发明人意识到,如何在可以保证用户观看更高清的视频的同时,又能有效节省网络带宽,降低网站的运营成本,就成为技术人员亟待解决的问题。
发明内容
有鉴于此,本申请实施例提供了视频处理方法。本申请同时涉及视频处理装置,一种计算设备,以及一种计算机可读存储介质,以解决现有技术中存在的视频超分任务占用网络资源大、计算消耗大的问题。
根据本申请实施例的第一方面,提供了一种视频处理方法,应用于第一终端集合中的第一终端,所述方法包括:
接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息;
响应于所述视频超分任务获取待分配终端列表,并根据所述待处理视频参数信息在所述待分配终端列表中确定第二终端集合;
确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;
根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对 应关系将每个视频超分指令发送至对应的第二终端。
根据本申请实施例的第二方面,提供了一种视频处理方法,应用于第三终端集合中的第三终端,所述方法包括:
接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识;
根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合;
对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
根据本申请实施例的第三方面,提供了一种视频处理系统,包括:
第一终端集合中的第一终端,被配置为接收视频超分任务,响应于所述视频超分任务获取待分配终端列表,并根据待处理视频参数信息在所述待分配终端列表中确定第二终端集合和第三终端集合,确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系,根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端;
所述第二终端集合中的第二终端,被配置为根据视频超分指令确定目标待处理视频帧,并对目标待处理视频帧执行视频超分处理,获得对应的目标超分视频帧,并将目标超分视频帧发送至所述第三终端集合中的每个第三终端;
所述第三终端集合中的第三终端,被配置为接收所述第二终端集合中每个第二终端发送的超分视频帧,将每个超分视频帧进行拼接获得初始超分视频帧集合,对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
根据本申请实施例的第四方面,提供了一种视频处理装置,应用于第一终端集合中的第一终端,所述装置包括:
接收模块,被配置为接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息;
获取模块,被配置为响应于所述视频超分任务获取待分配终端列表,并根据所述待处理视频参数信息在所述待分配终端列表中确定第二终端集合;
确定模块,被配置为确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;
发送模块,被配置为根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。
根据本申请实施例的第五方面,提供了一种视频处理装置,应用于第三终端集合中的第三终端,所述装置包括:
接收模块,被配置为接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识;
拼接模块,被配置为根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合;
平滑编码模块,被配置为对所述初始超分视频帧集合中的超分视频帧做时域平滑处理 并编码获得目标视频流。
根据本申请实施例的第六方面,提供了一种计算设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机指令,所述处理器执行所述计算机指令时实现所述视频处理方法的步骤。
根据本申请实施例的第七方面,提供了一种计算机可读存储介质,其存储有计算机指令,该计算机指令被处理器执行时实现所述视频处理方法的步骤。
本申请提供的视频处理方法,应用于第一终端集合中的第一终端,所述方法包括:接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息;响应于所述视频超分任务获取待分配终端列表,并根据待处理视频参数信息在所述待分配终端列表中确定第二终端集合;确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。
本申请一实施例实现了通过处理视频参数信息在待分配终端列表中确定第二终端集合,并将待处理视频帧集合中的每个待处理视频帧分配给第二终端集合中的第二终端进行视频超分处理,通过每个终端的算力对每个视频帧进行超分,利用了用户的带宽来进行流量的分发,减少了网站的带宽消耗,同时利用的第二终端的算力进行超分,节省了网站的运营成本,同时还保证了用户可以看到超分后的视频。
附图说明
图1是本申请一实施例提供的一种视频处理方法的流程图;
图2是本申请第二实施例提供的一种视频处理方法的流程图;
图3是本申请一实施例提供的一种视频处理处理系统的架构示意图;
图4是本申请一实施例提供的一种视频处理装置的结构示意图;
图5是本申请另一实施例提供的一种视频处理装置的结构示意图;
图6是本申请一实施例提供的一种计算设备的结构框图。
具体实施方式
在下面的描述中阐述了很多具体细节以便于充分理解本申请。但是本申请能够以很多不同于在此描述的其它方式来实施,本领域技术人员可以在不违背本申请内涵的情况下做类似推广,因此本申请不受下面公开的具体实施的限制。
在本申请一个或多个实施例中使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本申请一个或多个实施例。在本申请一个或多个实施例和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本申请一个或多个实施例中使用的术语“和/或”是指并包含一个或 多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本申请一个或多个实施例中可能采用术语第一、第二等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本申请一个或多个实施例范围的情况下,第一也可以被称为第二,类似地,第二也可以被称为第一。取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。
首先,对本申请一个或多个实施例涉及的名词术语进行解释。
CDN:Content Delivery Network,即内容分发网络。CDN是构建在现有网络基础之上的智能虚拟网络,依靠部署在各地的边缘服务器,通过中心平台的负载均衡、内容分发、调度等功能模块,使用户就近获取所需内容,降低网络拥塞,提高用户访问响应速度和命中率,CDN的关键技术主要有内容存储和分发技术。
视频超分:超分辨率技术(Super-Resolution,SR)是指从观测到的低分辨率图像重建出相应的高分辨率图像,在监控设备、卫星图像和医学影像等领域都有重要的应用价值。
视频编码:对音视频进行重新编码的过程。
在本申请中,提供了视频处理方法,本申请同时涉及视频处理装置,一种计算设备,以及一种计算机可读存储介质,在下面的实施例中逐一进行详细说明。
图1示出了根据本申请一实施例提供的一种视频处理方法的流程图,应用于第一终端集合中的第一终端,具体包括以下步骤:
步骤102:接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息。
随着互联网技术的发展,视频网站提供了丰富的视频业务服务,如电影、电视剧、综艺,用户还可以在视频网站观看直播、录播等等,但是有一些视频业务服务由于拍摄设备的原因,导致视频的分辨率较低,例如在直播场景下,主播的手机分辨率只有1080P,而观看直播的用户的终端支持2K,甚至支持4K的分辨率,则希望可以看到更高清晰的视频,因此,视频超分技术应运而生。
视频超分是指通过软件或硬件的方式提高原有图像的分辨率,将低分辨率的图像处理为高分辨率的图像,如将分辨率为1920*1080的图像放大为分辨率为4096*2160的图像。
但是视频超分技术会导致需要更高的码率来进行网络传输,会消耗更多的带宽,且视频超分技术需要终端具有很强大的算力,需要消耗终端较多的计算资源,这些都极大的提高了网站运营商的成本。
对于视频业务,尤其是在直播场景下,目前通常会使用CDN来为用户提供媒体内容分发服务器,CDN(Content Delivery Network,即内容分发网络),CDN是构建在现有网络基础之上的智能虚拟网络,依靠部署在各地的边缘服务器,通过中心平台的负载均衡、内容分发、调度等功能模块,使用户就近获取所需内容,降低网络拥塞,提高用户访问响应速度和命中率,CDN的关键技术主要有内容存储和分发技术。
基于此,如果需要对某个视频资源进行超分,需要一台终端具备强大的算力,根据视 频的帧率对视频进行超分处理,以提供更清晰的画质,例如,对于一个30帧率的视频而言,终端需要具备在1秒内对30帧画面进行超分的能力,对终端的算力和资源都是一种极大的考验,因此,在本申请中提供一种视频处理方法,将视频超分任务拆分为若干个子任务,由多个终端来进行处理。
具体的，本申请中的第一终端集合具体是指对视频超分任务进行统筹处理的终端的集合，在实际应用中，为了保证第一终端集合的可用性，第一终端集合中的每个第一终端均为参与到视频业务中的终端，例如，在直播场景中，某个直播间的观众有30人，则第一终端集合是由这30人中的部分终端组成；某个直播间的观众有3000人，则第一终端集合是由这3000人中的部分终端组成。
更进一步的,由于第一终端集合是用来对视频超分任务进行统筹处理的终端,因此,第一终端集合可以选择性能一般的终端,在实际应用中,在用户同意获取隐私权限的情况下,可以将目标视频任务对应的允许获取终端属性信息的终端的性能进行统计,并按照性能从低到高的顺序进行排序,选取预设数量的终端作为第一终端,例如,某直播间的观众有30人,将30个终端按照终端性能从低到高的顺序进行排序,并选择排名前10的终端作为第一终端集合,即第一终端集合中有10个第一终端。
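The paragraph above selects the weakest terminals as coordinators. The following is a minimal Python sketch of that idea, not taken from the original disclosure: the terminal identifiers and the single performance score per terminal are assumptions made purely for illustration.

```python
def select_first_terminals(terminals_with_scores, count=10):
    """Pick the `count` lowest-performance terminals as coordinators,
    matching the example of choosing 10 coordinators out of 30 viewers."""
    ranked = sorted(terminals_with_scores, key=lambda item: item[1])  # ascending performance
    return [terminal_id for terminal_id, _ in ranked[:count]]

# Hypothetical viewers with arbitrary performance scores.
viewers = [("viewer-%d" % i, float(score)) for i, score in enumerate(range(30))]
first_set = select_first_terminals(viewers, count=10)
```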
视频超分任务具体是指由业务上游下发的对视频帧进行超分处理的任务,在实际的视频业务场景下,用户希望提高视频的分辨率,以达到更好的观看效果,因此就会有了视频超分任务,即提高原视频的分辨率,第一终端的终端性能较弱,在视频超分任务中负责统筹管理的工作,后续根据视频超分任务将待处理视频帧分发给性能较好的第二终端,由第二终端执行真正的视频超分操作。
例如在直播场景下,需要对主播的视频流进行超分处理,则第一终端接收到上游视频业务下发的视频超分任务,在视频超分任务中携带有待处理视频帧集合以及待处理视频参数信息,具体的,待处理视频帧集合是指针对某个第一终端需要进行调度的待处理视频帧集合,待处理视频参数信息具体是指待处理视频需要进行处理的参数信息,包括视频帧率信息、原始分辨率信息、目标分辨率信息等等,例如待处理视频的原始分辨率为1080P,需要将其超分至目标分辨率为2K,待处理视频参数信息中即包括原始分辨率,又包括有目标分辨率。
在本申请提供的一具体实施方式中,以第一终端集合中包括有3个第一终端为例,第一终端1接收视频超分任务1,视频超分任务1中携带有待处理视频帧集合(待处理视频帧1-10);第一终端2接收视频超分任务2,视频超分任务2中携带有待处理视频帧集合(待处理视频帧11-20);第一终端3接收视频超分任务3,视频超分任务3中携带有待处理视频帧集合(待处理视频每个帧21-30)。待处理视频参数信息均为将待处理视频帧由原始分辨率1280*720,超分至目标分辨率3840*2160。
步骤104:响应于所述视频超分任务获取待分配终端列表,并根据所述待处理视频参数信息在所述待分配终端列表中确定第二终端集合。
其中,待分配终端列表具体是指目标视频任务对应的终端中,除去第一终端集合外的 其他终端列表,例如,在直播场景中,某个直播间由30个观众,其中,10个终端为第一终端集合,则其他20个终端组成的列表即为待分配终端列表;若某个直播间由500个观众,其中,30个终端为第一终端集合,则其他470个终端组成的列表即为待分配终端列表。
在获取待分配终端列表后,即在待分配终端列表中确定第二终端集合,具体的,第二终端集合具体是指用于对待处理视频帧进行超分处理的终端的集合。对待处理视频帧进行超分的终端需要具备较强的算力,即具备较强的终端性能,在实际应用中,还需要结合待处理视频参数信息来进一步确定第二终端集合,例如对于同样一个终端,将720P的待处理视频帧超分至2K的分辨率所需要的算力要高于将720P的待处理视频帧超分至1080P的分辨率的算力。因此,同样一个终端,在执行将720P的视频帧超分至1080P的视频帧时,可以作为第二终端,在执行将720P的视频帧超分至4K的视频帧时,则无法作为第二终端。
基于此,更进一步的,根据待处理视频参数信息在所述待分配终端列表中确定第二终端集合,包括:
获取所述待分配终端列表中每个终端的终端属性信息;
根据所述待处理视频参数信息和每个终端的终端属性信息确定每个终端的终端性能权重;
根据每个终端的终端性能权重确定第二终端集合。
具体的,根据每个终端的终端性能权重确定第二终端集合,包括:
将每个终端按照终端性能权重从高到低的顺序进行排序;
在排序结果中选取预设数量的终端作为第二终端或选取终端性能权重超过预设阈值的终端作为第二终端。
在实际应用中，在用户同意获取隐私权限的情况下，获取待分配终端列表中每个终端的终端属性信息，具体的，终端属性信息包括终端的CPU型号、内存大小、可用资源信息等等。再根据待处理视频参数信息和每个终端的终端属性信息来计算每个终端的终端性能权重，具体的，可以通过确定每个终端的终端算力分值来确定每个终端的终端性能权重，在确定好每个终端的终端性能权重后，按照终端性能权重从高到低的顺序进行排序，并选取预设数量的终端作为第二终端，并组成第二终端集合。第二终端的数量可以是预先设定的，也可以根据视频的帧率确定，例如，可以确定第二终端集合中第二终端的数量为60个；或者获取视频的帧率为30帧/秒，进而可以确定第二终端的数量为30个；更进一步的，视频的帧率为30帧/秒，待分配终端列表中有100个终端，还可以选取终端性能排名靠前的前60个终端作为第二终端集合。在本申请中，对如何确定第二终端的具体方式不做限定。
例如,待分配终端列表中有50个终端,根据每个终端的终端属性信息和待处理视频参数信息计算每个终端的算力值,并按照算力值从高到低的顺序进行排序,在排序完成后,选取前30个终端作为第二终端。
又例如,待分配终端列表中有50个终端,根据每个终端的终端属性信息和待处理视频参数信息计算每个终端的算力值,按照算力值从高到低的顺序进行排序,在排序完成后,选取算力值超过预设阈值的终端作为第二终端。
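The two examples above amount to: score each candidate terminal, sort by score, then keep either a preset number of terminals or all terminals above a threshold. The sketch below illustrates this under stated assumptions; the `Terminal` record, the `performance_weight` formula, and the `scale_factor` heuristic are hypothetical and are not defined in the original text.

```python
from dataclasses import dataclass

@dataclass
class Terminal:
    terminal_id: str
    cpu_score: float       # hypothetical normalized CPU capability score
    free_memory_mb: int    # available memory reported by the terminal
    free_cpu_ratio: float  # fraction of CPU currently idle

def performance_weight(t: Terminal, scale_factor: float) -> float:
    """Toy performance weight: capability discounted by the difficulty of the
    requested upscaling (scale_factor = target pixel count / source pixel count)."""
    raw = t.cpu_score * t.free_cpu_ratio + t.free_memory_mb / 1024.0
    return raw / scale_factor

def select_second_terminals(candidates, scale_factor, preset_count=None, threshold=None):
    """Sort candidates by weight (high to low) and keep either the top
    `preset_count` terminals or all terminals whose weight exceeds `threshold`."""
    ranked = sorted(candidates, key=lambda t: performance_weight(t, scale_factor), reverse=True)
    if preset_count is not None:
        return ranked[:preset_count]
    return [t for t in ranked if performance_weight(t, scale_factor) >= threshold]

# Example: upscale 1280x720 -> 3840x2160 and pick the 30 strongest of 50 terminals.
pool = [Terminal(f"t{i}", cpu_score=1.0 + i * 0.1, free_memory_mb=2048, free_cpu_ratio=0.5)
        for i in range(50)]
second_set = select_second_terminals(pool, scale_factor=(3840 * 2160) / (1280 * 720), preset_count=30)
```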
在实际应用中，第二终端集合中的第二终端是执行对待处理视频帧进行超分操作的终端，仅将待处理视频帧进行超分，还无法组成流畅连贯的视频，因此，还需要有终端将完成超分处理的视频帧进行拼接，因此，本申请提供的视频处理方法还包括：
在所述待分配终端列表中确定第三终端集合。
其中,第三终端集合具体是指用于将完成超分任务的视频帧拼接为视频的终端,在实际应用中,第三终端集合也是由目标视频任务对应的终端来组成的,即本申请中提及的第一终端集合、第二终端集合、第三终端集合均为同一个目标视频任务对应的终端。例如,对于直播场景下,某直播间由100个用户,这100个用户中有80个用户同意获取隐私权限,则对这80个用户所使用的终端根据终端属性信息进行排序,将性能权重较差的终端作为第一终端集合,将性能权重较好的终端作为第二终端集合,将性能权重中等的终端作为第三终端集合,在实际应用中,终端性能权重的较差、中等、较好均是由相对而言的,需要将目标视频业务对应的终端按照终端性能权重进行排序,选取预设数量(或比例)的终端组成第一终端集合、第二终端集合和第三终端集合。
在本申请提供的一具体实施方式中,沿用上例,第一终端集合中有3个第一终端,根据视频超分任务确定30个第二终端组成第二终端集合,其中第二终端1-10对应第一终端1,第二终端11-20对应第一终端2,第二终端21-30对应第一终端3,同时确定第三终端集合中的5个第三终端,分别为第三终端1-5。
步骤106:确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系。
在确定第二终端集合后,需要将待处理视频帧集合中每个待处理视频帧分别发送至对应的第二终端集合中进行超分处理,在本申请中,是将一个视频超分任务划分为多个子任务,由多个终端分别执行视频帧超分处理,因此,要确定每个待处理视频帧由哪个第二终端进行超分处理。
具体的,确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系,包括:
确定所述待处理视频帧集合中待处理视频帧数量和所述第二终端集合中第二终端的终端数量;
基于所述待处理视频帧数量和所述终端数量确定每个待处理视频帧与每个第二终端的对应关系。
在实际应用中,每个第一终端可以知道自己对应第二终端的数量,通常情况下,第一终端会根据接收到的待处理视频帧集合中待处理视频帧的数量来确定第二终端中第二终端的终端数量,例如,待处理视频帧集合中有n个待处理视频帧,第一终端对应的第二终端集合中通常会包括n个第二终端,当第二终端集合中的第二终端算力较强时,第二终端集合中还可以包括n/2个终端,例如,第一终端接收的待处理视频帧集合有15个待处理视频帧,则第一终端对应的第二终端集合可以有15个第二终端;若第一终端接收的待处理视频帧集合有60个待处理视频帧,则第一终端对应的第二终端集合中可以有60个第二终端, 也可以有30个第二终端。
在确定待处理视频帧数量和第二终端的终端数量后,将待处理视频帧进行标号,例如有n个待处理视频帧,标号为1-n;相应的,第二终端包括n个第二终端,为每个第二终端进行标号,也标记为1-n,则可以将待处理视频帧1与第二终端1对应,待处理视频帧2与第二终端2对应……待处理视频帧n与第二终端n对应。
在本申请提供的一具体实施方式中,沿用上例,第一终端1接收到待处理视频帧1-10,第一终端1对应第二终端1-10;第一终端2接收到待处理视频帧11-20,第一终端2对应第二终端11-20;第一终端3接收到待处理视频帧21-30,第一终端3对应第二终端21-30,则待处理视频帧1对应第二终端1,待处理视频帧2对应第二终端2,……待处理视频帧30对应第二终端30。
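The numbering scheme above can be illustrated with a small sketch, assuming one frame per second terminal (or two frames per terminal when the terminals are strong enough); the helper name and the error handling are illustrative only.

```python
def assign_frames_to_terminals(frame_ids, terminal_ids):
    """Map pending video frames to second terminals by position.

    If there are as many terminals as frames, frame i goes to terminal i;
    if there are half as many terminals, each terminal receives two frames.
    """
    if len(terminal_ids) == len(frame_ids):
        return {f: t for f, t in zip(frame_ids, terminal_ids)}
    if len(frame_ids) == 2 * len(terminal_ids):
        return {f: terminal_ids[i // 2] for i, f in enumerate(frame_ids)}
    raise ValueError("unsupported frame/terminal ratio in this sketch")

# Frames 1-10 handled by second terminals 1-10, as in the example above.
mapping = assign_frames_to_terminals(list(range(1, 11)), [f"second-{i}" for i in range(1, 11)])
```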
步骤108:根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。
在确定好每个待处理视频帧与每个第二终端的对应关系后,将每个待处理视频帧和待处理视频参数信息生成视频超分指令,并将视频超分指令将发送至每个待处理视频帧对应的第二终端。
视频超分指令具体是指对视频帧进行超分处理的指令,在视频超分指令中通常会携带待处理视频帧、待处理视频参数信息,第二终端标识,即将视频超分指令发送至第二终端标识对应的第二终端,以使第二终端获取视频超分指令中携带的待处理视频帧和待处理视频参数信息,并响应于视频超分指令根据待处理视频参数信息中的原始分辨率信息和目标分辨率信息将待处理视频帧进行超分处理。
在实际应用中,除了确定第二终端集合之外,还会确定第三终端集合,第三终端集合用于接收每个超分好的视频帧,并对视频帧进行拼接,生成目标视频,因此,根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,包括:
根据每个待处理视频帧、所述待处理视频参数信息和所述第三终端集合生成视频超分指令。
具体的，在生成视频超分指令时，还需要将第三终端集合中每个第三终端的终端标识与待处理视频帧、待处理视频参数信息一起生成视频超分指令，使得待处理视频帧对应的第二终端在根据待处理视频帧和待处理视频参数信息完成视频超分操作之后，可以将完成超分操作的视频帧发送至第三终端中进行视频的拼接。
在本申请提供的一具体实施方式中,沿用上例,并以待处理视频帧1为例,根据待处理视频帧1、待处理视频参数信息“待处理视频帧由原始分辨率1280*720,超分至目标分辨率3840*2160”和第三终端1-5组成视频超分指令1,并发送至第二终端1中对待处理视频帧1进行视频超分操作,以使第二终端1在根据待处理视频参数信息将待处理视频帧1进行超分处理后,获得目标视频帧1,并将目标视频帧1分别发送至第三终端1-5。
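The original text does not fix a wire format for the video super-resolution instruction; the sketch below simply bundles the fields it mentions (the frame, frame-rate, original and target resolution, and the third-terminal identifiers) into a hypothetical JSON message and hands it to a caller-supplied send function.

```python
import json

def build_sr_instruction(frame_id, frame_data_b64, params, third_terminal_ids):
    """Assemble one super-resolution instruction for a single pending frame.

    `params` carries frame-rate, original-resolution and target-resolution
    information, mirroring the parameter info described above."""
    return json.dumps({
        "frame_id": frame_id,
        "frame_data": frame_data_b64,           # the frame itself (base64 in this sketch)
        "original_resolution": params["original_resolution"],
        "target_resolution": params["target_resolution"],
        "frame_rate": params["frame_rate"],
        "third_terminals": third_terminal_ids,  # where the super-resolved frame should be sent
    })

def dispatch(instruction_json, second_terminal_id, send):
    """Send the instruction to its second terminal via a caller-supplied transport."""
    send(second_terminal_id, instruction_json)

params = {"original_resolution": "1280x720", "target_resolution": "3840x2160", "frame_rate": 30}
msg = build_sr_instruction(1, "<base64 frame bytes>", params, ["third-1", "third-2", "third-3"])
dispatch(msg, "second-1", send=lambda terminal, payload: print(terminal, len(payload)))
```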
本申请实施例提供的视频处理方法,应用于第一终端集合中的第一终端,包括接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息; 响应于所述视频超分任务获取待分配终端列表,并根据待处理视频参数信息在所述待分配终端列表中确定第二终端集合;确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。通过本申请提供的视频处理方法,实现了通过处理视频参数信息在待分配终端列表中确定第二终端集合,以使第二终端对视频帧进行超分处理,通过每个终端的算力对每个视频帧进行超分,利用了用户的带宽来进行流量的分发,减少了网站的带宽消耗,同时利用的第二终端的算力进行超分,节省了网站的运营成本,同时还保证了用户可以看到超分后的视频。
参见图2,图2示出了本申请第二实施例提供的视频处理方法,该实施例提供的视频处理方法应用于第三终端集合中的第三终端,具体包括步骤202-步骤206:
步骤202:接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识。
第三终端集合具体是指用于将完成超分任务的视频帧进行拼接,生成目标视频的终端,在本申请提供的视频处理方法中,第三终端集合中的每个第三终端均会接收每个第二终端发送的完成超分的超分视频帧,并且每个超分视频帧均携带有该视频帧对应的超分视频帧标识。
在本申请提供的一具体实施方式中,以第三终端集合中包括3个第三终端为例,第二终端集合一共有30个第二终端,对于第三终端1来说,接收30个第二终端发送的超分视频帧,其中,每个超分视频帧均携带有对应的超分视频帧标识,如超分视频帧1、超分视频帧2、……超分视频帧30。
步骤204:根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合。
在每个第三终端中,均会接收到每个第二终端发送的超分视频帧,再根据每个超分视频帧的标识对接收到的超分视频帧进行排序并拼接,从而获得初始超分视频帧集合。初始超分视频帧集合中保存有在每个第二终端中完成超分后的视频帧。
在本申请提供的一具体实施方式中,沿用上例,依然以第三终端1为例,第三终端1接收到30个超分视频帧,并按照1-30的顺序将30个超分视频帧进行排序拼接,获得初始超分视频帧集合(超分视频帧1、超分视频帧2、……超分视频帧30)。
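As a simple illustration of the splicing step, the following sketch assumes each received super-resolution frame is tagged with an integer identifier and restores playback order by sorting on that identifier; the data layout is an assumption, not part of the original description.

```python
def splice_frames(received):
    """`received` maps a super-resolution frame identifier to the frame payload,
    in whatever order the second terminals happened to deliver them.
    Returns the frames ordered by identifier, i.e. the initial super-resolution frame set."""
    return [frame for _, frame in sorted(received.items())]

# Frames may arrive out of order; splicing restores playback order.
received = {3: "frame-3", 1: "frame-1", 2: "frame-2"}
initial_set = splice_frames(received)  # ["frame-1", "frame-2", "frame-3"]
```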
步骤206:对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
由于每个超分视频帧是分别在不同的第二终端中进行超分处理的，第二终端在进行视频超分时，并没有参考前后相邻的视频帧，因此在将多个超分视频帧进行拼接后，会出现画面不连贯的情况，因此，还需要对初始超分视频帧集合的超分视频帧的整体在时域上进行平滑操作，使得超分视频帧之间更加连贯平顺，之后再进行编码，获得可以直接播放的目标视频流，并将目标视频流分发到其他用户的终端中。
在实际应用中,第三终端集合中通常会有多个第三终端,每个第三终端均会进行合并超分视频帧生成目标视频流的操作,可以在多个第三终端中选择一个第三终端做为目标第三终端,由目标第三终端将目标视频流发送给其他用户,其他第三终端作为备用第三终端,当目标第三终端出现故障时,由备用第三终端将目标视频流发送给其他用户,保证了视频超分的时效性。
由于第三终端可能会分布在全国各个位置,还可以由每个第三终端将目标视频流上传至对应的CDN节点,减少视频网站的带宽消耗,由单一的CDN节点变为多CDN节点,提升了观众观看超分视频的时效性,提升了用户体验,也减少了视频网站的运营成本。
在本申请提供的具体实施方式中,对所述初始超分视频帧集合的每个超分视频帧做时域平滑处理,包括:
在平滑处理策略库中确定目标平滑处理策略;
基于目标平滑处理策略对所述初始超分视频帧集合的每个超分视频帧做时域平滑处理。
在实际应用中,对初始超分视频帧集合还需要进行时域平滑处理,具体的,会在平滑处理策略中确定目标平滑处理策略,其中,平滑处理策略库具体是指用于保存视频平滑处理策略的数据库,平滑处理策略库中保存有多种平滑处理视频的策略,如光流法处理策略、视频帧平滑模型策略、视频平滑滤镜策略等等。
其中,光流法是利用图像序列中像素在时间阈上的变化以及相邻帧之间的相关性来找到上一帧跟当前帧之间存在的对应关系,从而计算出相邻帧之间物体的运动信息,从而达到视频帧之间视频平滑的效果。
视频帧平滑模型策略是通过智能AI模型的方式,将相邻的两个视频帧输入到智能模型中,智能模型消除两个相邻视频帧的明显差异,使得两个视频帧之间的过渡更加平滑。
视频平滑滤镜策略具体是指通过平滑滤镜的方式,消除相邻视频帧之间的明显差异,使得相邻视频帧之间的过渡更加平滑。
在平滑处理策略库中确定目标平滑处理策略，可以根据第三终端的性能来确定，例如当第三终端的性能权重较高时，可以选择视频帧平滑模型来进行视频平滑处理，当第三终端的性能权重较低时，可以使用视频平滑滤镜策略或光流法处理策略来对视频做平滑处理。
在确定目标平滑处理策略后,即可根据该目标平滑处理策略对初始视频帧集合中的超分视频帧做时域平滑处理,使得超分视频帧的连接更加平顺、自然、连贯,在提升视频帧画质的同时,还可以保证视频的流畅性,使得用户体验到超分后的高清画质,提升用户的使用体验。
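Purely as an illustration of the video-smoothing-filter strategy mentioned above (the optical-flow and model-based strategies would need heavier dependencies), here is a small NumPy sketch that blends each super-resolved frame with its neighbours to soften frame-to-frame jumps; the blend weight `alpha` and the array shapes are assumptions rather than values from the original text.

```python
import numpy as np

def temporal_smooth(frames, alpha=0.2):
    """Tiny temporal smoothing filter: each frame is mixed with the average of
    its previous and next frames. `frames` is a list of HxWx3 uint8 arrays;
    `alpha` (assumed here) controls how much neighbour information bleeds in."""
    smoothed = []
    for i, frame in enumerate(frames):
        prev_f = frames[i - 1] if i > 0 else frame
        next_f = frames[i + 1] if i < len(frames) - 1 else frame
        neighbour_mean = (prev_f.astype(np.float32) + next_f.astype(np.float32)) / 2.0
        blended = (1.0 - alpha) * frame.astype(np.float32) + alpha * neighbour_mean
        smoothed.append(np.clip(blended, 0, 255).astype(np.uint8))
    return smoothed

# Small arrays for brevity; encoding the smoothed frames into the target stream
# would then be handed to an external encoder, which is outside this sketch.
frames = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(3)]
target_frames = temporal_smooth(frames)
```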
本申请实施例提供的视频处理方法,应用于第三终端集合中的第三终端,包括接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识;根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合;对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。通过本申请提供的视频处理方法,将超分后的视频帧进行拼接,并分发到各个观众的客户端,减少视频网站的带宽消耗,提升了观众观看超分视频的时效性,提升了用户体验,也减少了视频网 站的运营成本。同时对超分视频帧的时域平滑处理,使得超分视频帧的连接更加平顺、自然、连贯,在提升视频帧画质的同时,还可以保证视频的流畅性,使得用户体验到超分后的高清画质,提升用户的使用体验。
图3提供了本申请一实施例提供的视频处理系统的架构示意图,如图3所示,本申请提供的视频处理系统包括第一终端集合302、第二终端集合304和第三终端集合306,其中:
第一终端集合302中的第一终端,被配置为接收视频超分任务,响应于所述视频超分任务获取待分配终端列表,并根据待处理视频参数信息在所述待分配终端列表中确定第二终端集合和第三终端集合,确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系,根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端;
所述第二终端集合304中的第二终端,被配置为根据视频超分指令确定目标待处理视频帧,并对目标待处理视频帧执行视频超分处理,获得对应的目标超分视频帧,并将目标超分视频帧发送至所述第三终端集合中的每个第三终端;
所述第三终端集合306中的第三终端,被配置为接收所述第二终端集合中每个第二终端发送的超分视频帧,将每个超分视频帧进行拼接获得初始超分视频帧集合,对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
在本申请提供的视频处理系统中,针对某个目标视频业务,其对应的每个用户的终端,既可以是资源的获取者,也可以是资源的提供者,在实际应用中,首选要先确定哪些用户同意了获取用户终端信息的权限,并将同意了获取用户终端信息权限的用户的终端信息采集后,根据性能进行排序,并按照性能从低到高划分为第一终端集合、第三终端集合和第二终端集合,其中,第二终端集合中的第二终端的终端性能高于第三终端集合中的第三终端,第三终端集合中的第三终端的终端性能高于第一终端集合中的第一终端,第一终端集合中的第一终端用于对超分任务进行统筹管理;第二终端集合中的第二终端用于对待处理视频帧进行超分,获得超分视频帧;第三终端集合中的第三终端用于接收第二终端发送的超分视频帧,并进行拼接,生成目标视频流,并将目标视频流分发给其他用户。
基于此,第一终端在接收到视频超分任务后,根据视频超分任务确定第二终端集合和第三终端集合,并根据待处理视频帧集合中每个待处理视频帧与每个第二终端的对应关系,生成视频超分指令,将待处理视频帧发送至对应的第二终端。例如待处理视频帧由1-t个视频帧,将第二终端划分为1-t共t个小组,分别将视频帧发送至对应编号的小组中,如待处理视频帧1发送至第1小组,待处理视频帧2发送至第二小组……。
第二终端集合中的每个第二终端在接收到待处理视频后，分别对单个待处理视频帧执行超分任务，获得超分视频帧，并标注每个超分视频帧的标识，将完成超分的超分视频帧发送至第三终端集合中的第三终端。例如视频的帧率为30帧/秒，单个终端无法在1秒内完成30帧画面的超分任务，无法做到实时超分的要求，则可以选择30个处理一帧画面时间小于1秒的终端，分别对30个待处理视频帧进行超分即可，用30个终端分别处理待处理视频帧1-30，则可以在1秒内完成30个待处理视频帧的超分处理，达到实时超分的要求。在完成待处理视频帧超分后，获得超分视频帧，并将超分视频帧发送至第三终端集合中的每个第三终端。
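To illustrate the real-time constraint discussed above (a 30 fps segment handled by 30 second terminals, each finishing one frame in under a second), the toy sketch below runs simulated per-frame work in parallel; the 0.8-second per-frame timing is an assumption used only to show that wall-clock time stays near the single-frame latency.

```python
import concurrent.futures
import time

def super_resolve(frame_id, per_frame_seconds=0.8):
    """Stand-in for one second terminal super-resolving one frame."""
    time.sleep(per_frame_seconds)   # simulated processing time, under 1 second
    return frame_id, f"sr-frame-{frame_id}"

# 30 frames processed in parallel by 30 workers: the whole batch finishes in
# roughly the single-frame latency, so one second of 30 fps video keeps up.
with concurrent.futures.ThreadPoolExecutor(max_workers=30) as pool:
    results = dict(pool.map(super_resolve, range(1, 31)))
```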
第三终端集合中的每个第三终端均可以接收每个第二终端发送的超分视频帧,并根据每个超分视频帧的标识对进行拼接,获得初始超分视频帧,再对初始超分视频帧集合中的超分视频帧做时域平滑处理并编码,获得目标视频流,再由第三终端将目标视频流发送至其他用户的终端。
在实际应用中,第一终端、第二终端、第三终端可以具有相同的属性信息,例如第一终端、第二终端、第三终端同属于一个运营商,或第一终端、第二终端、第三终端同属于一个地区等等。
本申请提供的视频处理系统,通过处理视频参数信息在待分配终端列表中确定第二终端集合,并将待处理视频帧集合中的每个待处理视频帧分配给第二终端集合中的第二终端进行视频超分处理,通过每个终端的算力对每个视频帧进行超分,利用了用户的带宽来进行流量的分发,减少了网站的带宽消耗,同时利用的第二终端的算力进行超分,节省了网站的运营成本,同时还保证了用户可以看到超分后的视频。
其次,在多个第二终端中分别进行视频帧的超分处理,将一整个任务划分为多个子任务,由多个终端并行处理同时完成,达到了实时超分的效果,提升了用户的使用体验,并将对单一终端的高要求平均到多个第二终端中完成,减少了视频网站的投入成本。
最后,由第三终端将超分后的视频帧进行拼接,并分发到各个观众的客户端,减少视频网站的带宽消耗,提升了观众观看超分视频的时效性,提升了用户体验,也减少了视频网站的运营成本。同时对超分视频帧的时域平滑处理,使得超分视频帧的连接更加平顺、自然、连贯,在提升视频帧画质的同时,还可以保证视频的流畅性,使得用户体验到超分后的高清画质,提升用户的使用体验。
与上述应用于第一终端集合中的第一终端的视频处理方法实施例相对应,本申请还提供了应用于第一终端集合中的第一终端的视频处理装置实施例,图4示出了本申请一实施例提供的一种视频处理装置的结构示意图。该装置应用于第一终端集合中的第一终端,如图4所示,该装置包括:
接收模块402,被配置为接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息;
获取模块404,被配置为响应于所述视频超分任务获取待分配终端列表,并根据所述待处理视频参数信息在所述待分配终端列表中确定第二终端集合;
确定模块406,被配置为确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;
发送模块408,被配置为根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。
可选的,所述获取模块404,进一步被配置为:
获取所述待分配终端列表中每个终端的终端属性信息;
根据所述待处理视频参数信息和每个终端的终端属性信息确定每个终端的终端性能权重;
根据每个终端的终端性能权重确定第二终端集合。
可选的,所述获取模块404,进一步被配置为:
将每个终端按照终端性能权重从高到低的顺序进行排序;
在排序结果中选取预设数量的终端作为第二终端或选取终端性能权重超过预设阈值的终端作为第二终端。
可选的,所述确定模块406,进一步被配置为:
确定所述待处理视频帧集合中待处理视频帧数量和所述第二终端集合中第二终端的终端数量;
基于所述待处理视频帧数量和所述终端数量确定每个待处理视频帧与每个第二终端的对应关系。
可选的,所述装置还包括:
终端确定模块,被配置为在所述待分配终端列表中确定第三终端集合。
可选的,所述发送模块408,进一步被配置为:
根据每个待处理视频帧、所述待处理视频参数信息和所述第三终端集合生成视频超分指令。
可选的,所述待处理视频参数信息包括视频帧率信息、原始分辨率信息、目标分辨率信息。
本申请实施例提供的视频处理装置,应用于第一终端集合中的第一终端,包括接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息;响应于所述视频超分任务获取待分配终端列表,并根据待处理视频参数信息在所述待分配终端列表中确定第二终端集合;确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。通过本申请提供的视频处理装置,实现了通过处理视频参数信息在待分配终端列表中确定第二终端集合,以使第二终端对视频帧进行超分处理,通过每个终端的算力对每个视频帧进行超分,利用了用户的带宽来进行流量的分发,减少了网站的带宽消耗,同时利用的第二终端的算力进行超分,节省了网站的运营成本,同时还保证了用户可以看到超分后的视频。
上述为本实施例的一种应用于第一终端集合中的第一终端的视频处理装置的示意性方案。需要说明的是,该视频处理装置的技术方案与上述的应用于第一终端集合中的第一终端的视频处理方法的技术方案属于同一构思,应用于第一终端集合中的第一终端的视频处理装置的技术方案未详细描述的细节内容,均可以参见上述应用于第一终端集合中的第一终端的视频处理方法的技术方案的描述。
与上述应用于第三终端集合中的第三终端的视频处理方法实施例相对应,本申请还提 供了应用于第三终端集合中的第三终端的视频处理装置实施例,图5示出了本申请一实施例提供的另一种视频处理装置的结构示意图。如图5所示,该装置包括:
接收模块502,被配置为接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识;
拼接模块504,被配置为根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合;
平滑编码模块506,被配置为对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
可选的,所述平滑编码模块506,进一步被配置为:
在平滑处理策略库中确定目标平滑处理策略;
基于目标平滑处理策略对所述初始超分视频帧集合的每个超分视频帧做时域平滑处理。
可选的,所述平滑处理策略库包括光流法处理策略、视频帧平滑模型策略、视频平滑滤镜策略。
本申请实施例提供的视频处理装置,应用于第三终端集合中的第三终端,包括接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识;根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合;对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。通过本申请提供的视频处理装置,将超分后的视频帧进行拼接,并分发到各个观众的客户端,减少视频网站的带宽消耗,提升了观众观看超分视频的时效性,提升了用户体验,也减少了视频网站的运营成本。同时对超分视频帧的时域平滑处理,使得超分视频帧的连接更加平顺、自然、连贯,在提升视频帧画质的同时,还可以保证视频的流畅性,使得用户体验到超分后的高清画质,提升用户的使用体验。
上述为本实施例的一种应用于第三终端集合中的第三终端的视频处理装置的示意性方案。需要说明的是,该视频处理装置的技术方案与上述的应用于第三终端集合中的第三终端的视频处理方法的技术方案属于同一构思,应用于第三终端集合中的第三终端的视频处理装置的技术方案未详细描述的细节内容,均可以参见上述应用于第三终端集合中的第三终端的视频处理方法的技术方案的描述。
图6示出了根据本申请一实施例提供的一种计算设备600的结构框图。该计算设备600的部件包括但不限于存储器610和处理器620。处理器620与存储器610通过总线630相连接,数据库650用于保存数据。
计算设备600还包括接入设备640,接入设备640使得计算设备600能够经由一个或多个网络660通信。这些网络的示例包括公用交换电话网(PSTN)、局域网(LAN)、广域网(WAN)、个域网(PAN)或诸如因特网的通信网络的组合。接入设备640可以包括有线或无线的任何类型的网络接口(例如,网络接口卡(NIC))中的一个或多个,诸如IEEE802.11无线局域网(WLAN)无线接口、全球微波互联接入(Wi-MAX)接口、以太网接口、通用串行总线(USB)接口、蜂窝网络接口、蓝牙接口、近场通信(NFC)接口,等等。
在本申请的一个实施例中,计算设备600的上述部件以及图6中未示出的其他部件也可以彼此相连接,例如通过总线。应当理解,图6所示的计算设备结构框图仅仅是出于示例的目的,而不是对本申请范围的限制。本领域技术人员可以根据需要,增添或替换其他部件。
计算设备600可以是任何类型的静止或移动计算设备,包括移动计算机或移动计算设备(例如,平板计算机、个人数字助理、膝上型计算机、笔记本计算机、上网本等)、移动电话(例如,智能手机)、可佩戴的计算设备(例如,智能手表、智能眼镜等)或其他类型的移动设备,或者诸如台式计算机或PC的静止计算设备。计算设备600还可以是移动式或静止式的服务器。
其中,处理器620执行所述计算机指令时实现所述的视频处理方法的步骤。
上述为本实施例的一种计算设备的示意性方案。需要说明的是,该计算设备的技术方案与上述的视频处理方法的技术方案属于同一构思,计算设备的技术方案未详细描述的细节内容,均可以参见上述视频处理方法的技术方案的描述。
本申请一实施例还提供一种计算机可读存储介质,其存储有计算机指令,该计算机指令被处理器执行时实现如前所述视频处理方法的步骤。
上述为本实施例的一种计算机可读存储介质的示意性方案。需要说明的是,该存储介质的技术方案与上述的视频处理方法的技术方案属于同一构思,存储介质的技术方案未详细描述的细节内容,均可以参见上述视频处理方法的技术方案的描述。
上述对本申请特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。
所述计算机指令包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
需要说明的是,对于前述的各方法实施例,为了简便描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其它顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定都是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分, 可以参见其它实施例的相关描述。
以上公开的本申请优选实施例只是用于帮助阐述本申请。可选实施例并没有详尽叙述所有的细节,也不限制该发明仅为所述的具体实施方式。显然,根据本申请的内容,可作很多的修改和变化。本申请选取并具体描述这些实施例,是为了更好地解释本申请的原理和实际应用,从而使所属技术领域技术人员能很好地理解和利用本申请。本申请仅受权利要求书及其全部范围和等效物的限制。

Claims (20)

  1. 一种视频处理方法,应用于第一终端集合中的第一终端,所述方法包括:
    接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息;
    响应于所述视频超分任务获取待分配终端列表,并根据所述待处理视频参数信息在所述待分配终端列表中确定第二终端集合;
    确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;
    根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。
  2. 如权利要求1所述的视频处理方法,其中,根据所述待处理视频参数信息在所述待分配终端列表中确定第二终端集合,包括:
    获取所述待分配终端列表中每个终端的终端属性信息;
    根据所述待处理视频参数信息和每个终端的终端属性信息确定每个终端的终端性能权重;
    根据每个终端的终端性能权重确定第二终端集合。
  3. 如权利要求2所述的视频处理方法,其中,根据每个终端的终端性能权重确定第二终端集合,包括:
    将每个终端按照终端性能权重从高到低的顺序进行排序;
    在排序结果中选取预设数量的终端作为第二终端或选取终端性能权重超过预设阈值的终端作为第二终端。
  4. 如权利要求1所述的视频处理方法,其中,确定所述待处理视频帧集合中每个待处 理视频帧与所述第二终端集合中每个第二终端的对应关系,包括:
    确定所述待处理视频帧集合中待处理视频帧数量和所述第二终端集合中第二终端的终端数量;
    基于所述待处理视频帧数量和所述终端数量确定每个待处理视频帧与每个第二终端的对应关系。
  5. 如权利要求1所述的视频处理方法,其中,所述方法还包括:
    在所述待分配终端列表中确定第三终端集合。
  6. 如权利要求5所述的视频处理方法,其中,根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,包括:
    根据每个待处理视频帧、所述待处理视频参数信息和所述第三终端集合生成视频超分指令。
  7. 如权利要求1所述的视频处理方法,其中,所述待处理视频参数信息包括视频帧率信息、原始分辨率信息、目标分辨率信息。
  8. 一种视频处理方法,应用于第三终端集合中的第三终端,所述方法包括:
    接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识;
    根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合;
    对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
  9. 如权利要求8所述的视频处理方法,其中,对所述初始超分视频帧集合的每个超分视频帧做时域平滑处理,包括:
    在平滑处理策略库中确定目标平滑处理策略;
    基于目标平滑处理策略对所述初始超分视频帧集合的每个超分视频帧做时域平滑处理。
  10. 如权利要求9所述的视频处理方法,其中,所述平滑处理策略库包括光流法处理策 略、视频帧平滑模型策略、视频平滑滤镜策略。
  11. 一种视频处理系统,所述系统包括:
    第一终端集合中的第一终端,被配置为接收视频超分任务,响应于所述视频超分任务获取待分配终端列表,并根据待处理视频参数信息在所述待分配终端列表中确定第二终端集合和第三终端集合,确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系,根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端;
    所述第二终端集合中的第二终端,被配置为根据视频超分指令确定目标待处理视频帧,并对目标待处理视频帧执行视频超分处理,获得对应的目标超分视频帧,并将目标超分视频帧发送至所述第三终端集合中的每个第三终端;
    所述第三终端集合中的第三终端,被配置为接收所述第二终端集合中每个第二终端发送的超分视频帧,将每个超分视频帧进行拼接获得初始超分视频帧集合,对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
  12. 一种视频处理装置,应用于第一终端集合中的第一终端,所述装置包括:
    接收模块,被配置为接收视频超分任务,其中,所述视频超分任务中携带有待处理视频帧集合和待处理视频参数信息;
    获取模块,被配置为响应于所述视频超分任务获取待分配终端列表,并根据所述待处理视频参数信息在所述待分配终端列表中确定第二终端集合;
    确定模块,被配置为确定所述待处理视频帧集合中每个待处理视频帧与所述第二终端集合中每个第二终端的对应关系;
    发送模块,被配置为根据每个待处理视频帧和所述待处理视频参数信息生成视频超分指令,并根据所述对应关系将每个视频超分指令发送至对应的第二终端。
  13. 如权利要求12所述的视频处理装置,其中,所述获取模块还被配置为:
    获取所述待分配终端列表中每个终端的终端属性信息;
    根据所述待处理视频参数信息和每个终端的终端属性信息确定每个终端的终端性能权重;
    根据每个终端的终端性能权重确定第二终端集合。
  14. 如权利要求13所述的视频处理装置,其中,所述获取模块还被配置为:
    将每个终端按照终端性能权重从高到低的顺序进行排序;
    在排序结果中选取预设数量的终端作为第二终端或选取终端性能权重超过预设阈值的终端作为第二终端。
  15. 如权利要求12所述的视频处理装置,其中,所述确定模块还被配置为:
    确定所述待处理视频帧集合中待处理视频帧数量和所述第二终端集合中第二终端的终端数量;
    基于所述待处理视频帧数量和所述终端数量确定每个待处理视频帧与每个第二终端的对应关系。
  16. 如权利要求12所述的视频处理装置,其中,所述装置还包括:
    终端确定模块,被配置为在所述待分配终端列表中确定第三终端集合。
  17. 如权利要求16所述的视频处理装置,其中,所述发送模块还被配置为:
    根据每个待处理视频帧、所述待处理视频参数信息和所述第三终端集合生成视频超分指令。
  18. 一种视频处理装置,应用于第三终端集合中的第三终端,所述装置包括:
    接收模块,被配置为接收第二终端集合中每个第二终端发送的超分视频帧,其中,超分视频帧携带有超分视频帧标识;
    拼接模块,被配置为根据每个超分视频帧标识对每个超分视频帧进行拼接获得初始超分视频帧集合;
    平滑编码模块,被配置为对所述初始超分视频帧集合中的超分视频帧做时域平滑处理并编码获得目标视频流。
  19. 一种计算设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机指令,所述处理器执行所述计算机指令时实现权利要求1-7或者8-10任意一项所述方法的步骤。
  20. 一种计算机可读存储介质,其存储有计算机指令,该计算机指令被处理器执行时实现权利要求1-7或者8-10任意一项所述方法的步骤。
PCT/CN2022/144030 2022-01-04 2022-12-30 视频处理方法、装置及系统 WO2023131076A2 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210006283.5 2022-01-04
CN202210006283.5A CN114363703B (zh) 2022-01-04 2022-01-04 视频处理方法、装置及系统

Publications (2)

Publication Number Publication Date
WO2023131076A2 true WO2023131076A2 (zh) 2023-07-13
WO2023131076A3 WO2023131076A3 (zh) 2023-08-31

Family

ID=81107791

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/144030 WO2023131076A2 (zh) 2022-01-04 2022-12-30 视频处理方法、装置及系统

Country Status (2)

Country Link
CN (1) CN114363703B (zh)
WO (1) WO2023131076A2 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291810A (zh) * 2023-11-27 2023-12-26 腾讯科技(深圳)有限公司 视频帧的处理方法、装置、设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363703B (zh) * 2022-01-04 2024-01-23 上海哔哩哔哩科技有限公司 视频处理方法、装置及系统

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5921469B2 (ja) * 2013-03-11 2016-05-24 株式会社東芝 情報処理装置、クラウドプラットフォーム、情報処理方法およびそのプログラム
US10268901B2 (en) * 2015-12-04 2019-04-23 Texas Instruments Incorporated Quasi-parametric optical flow estimation
CN111045795A (zh) * 2018-10-11 2020-04-21 浙江宇视科技有限公司 资源调度方法及装置
CN111614965B (zh) * 2020-05-07 2022-02-01 武汉大学 基于图像网格光流滤波的无人机视频稳像方法及系统
CN111314741B (zh) * 2020-05-15 2021-01-05 腾讯科技(深圳)有限公司 视频超分处理方法、装置、电子设备及存储介质
CN114363703B (zh) * 2022-01-04 2024-01-23 上海哔哩哔哩科技有限公司 视频处理方法、装置及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291810A (zh) * 2023-11-27 2023-12-26 腾讯科技(深圳)有限公司 视频帧的处理方法、装置、设备及存储介质
CN117291810B (zh) * 2023-11-27 2024-03-12 腾讯科技(深圳)有限公司 视频帧的处理方法、装置、设备及存储介质

Also Published As

Publication number Publication date
WO2023131076A3 (zh) 2023-08-31
CN114363703A (zh) 2022-04-15
CN114363703B (zh) 2024-01-23

Similar Documents

Publication Publication Date Title
WO2023131076A2 (zh) 视频处理方法、装置及系统
Petrangeli et al. An http/2-based adaptive streaming framework for 360 virtual reality videos
Joseph et al. NOVA: QoE-driven optimization of DASH-based video delivery in networks
CN111314741B (zh) 视频超分处理方法、装置、电子设备及存储介质
Liu et al. Vues: Practical mobile volumetric video streaming through multiview transcoding
Reddy et al. Qos-Aware Video Streaming Based Admission Control And Scheduling For Video Transcoding In Cloud Computing
CN102882829A (zh) 一种转码方法及系统
US20160029079A1 (en) Method and Device for Playing and Processing a Video Based on a Virtual Desktop
Sun et al. VU: Edge computing-enabled video usefulness detection and its application in large-scale video surveillance systems
EP3624453A1 (en) A transcoding task allocation method, scheduling device and transcoding device
Paglierani et al. Techno‐economic analysis of 5G immersive media services in cloud‐enabled small cell networks: The neutral host business model: Providing techno‐economic guidelines for the successful provision of 5G innovative services in small cell networks
CN108156459A (zh) 可伸缩视频传输方法及系统
CN111818383B (zh) 视频数据的生成方法、系统、装置、电子设备及存储介质
CN110784731B (zh) 一种数据流转码方法、装置、设备及介质
CN114363651A (zh) 直播流处理方法及装置
Laghari et al. The state of art and review on video streaming
CN114173160A (zh) 直播推流方法及装置
Nguyen et al. Scalable multicast for live 360-degree video streaming over mobile networks
Zhu et al. When cloud meets uncertain crowd: An auction approach for crowdsourced livecast transcoding
US11431770B2 (en) Method, system, apparatus, and electronic device for managing data streams in a multi-user instant messaging system
Wu et al. Mobile live video streaming optimization via crowdsourcing brokerage
Liu et al. QoE-driven HAS live video channel placement in the media cloud
US10462248B2 (en) Digital content sharing cloud service system, digital content sharing cloud service device, and method using the same
Chao et al. 5G Edge Computing Experiments with Intelligent Resource Allocation for Multi-Application Video Analytics
US20240184632A1 (en) A method and apparatus for enhanced task grouping

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22918517

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE