WO2019119950A1 - Video encoding processing method and apparatus, and application having video encoding function - Google Patents

Video encoding processing method and apparatus, and application having video encoding function

Info

Publication number
WO2019119950A1
Authority
WO
WIPO (PCT)
Prior art keywords
encoder
encoding
parameter
coding
frame data
Prior art date
Application number
PCT/CN2018/110816
Other languages
English (en)
French (fr)
Inventor
时永方
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2019119950A1
Priority to US16/670,842 (US10931953B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/127: Prioritisation of hardware or computational resources
    • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/146: Data rate or code amount at the encoder output
    • H04N 19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/15: Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
    • H04N 19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N 19/164: Feedback from the receiver or from the transmission channel
    • H04N 19/166: Feedback from the receiver or from the transmission channel concerning the amount of transmission errors, e.g. bit error rate [BER]
    • H04N 19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: the unit being an image region, e.g. an object
    • H04N 19/172: the region being a picture, frame or field
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/238: Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N 21/2383: Channel coding or modulation of digital bit-stream, e.g. QPSK modulation
    • H04N 21/24: Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a video encoding processing method and apparatus, and an application having a video encoding function.
  • the present application example provides an application having a video encoding function, including the video encoding processing apparatus as described above.
  • FIG. 1A is a schematic structural diagram of a system involved in an example of the present application.
  • FIG. 1B is a schematic structural diagram of a system involved in an example of the present application.
  • FIG. 1C is a schematic flow chart of a video encoding processing method according to an exemplary embodiment of the present application.
  • FIG. 1D is a schematic diagram of an IPPP reference structure shown in accordance with an illustrative example of the present application.
  • FIG. 3 is a schematic flowchart diagram of a video encoding processing method according to an exemplary embodiment of the present application.
  • FIG. 4 is a flow chart showing a video encoding processing method according to an exemplary embodiment of the present application.
  • FIG. 5 is a structural block diagram of a video encoding processing apparatus according to an illustrative example of the present application.
  • FIG. 6 is a structural block diagram of a computing device shown in accordance with an illustrative example of the present application.
  • When a software scheme is used for video encoding, CPU occupancy is high, power consumption is large, the battery drains quickly, and the real-time performance of encoding is limited by coding complexity, so the real-time encoding requirements of high-definition video cannot be met. When a hardware scheme is used for video encoding, although CPU occupancy is low and power consumption is small, customizability is poor: video characteristics suitable for network transmission cannot be selected flexibly, and resistance to packet loss is weak. To address these problems, the example of the present application provides a video encoding processing method.
  • the video coding processing method provided by the example of the present application can be applied to a scenario in which video coding is required, including but not limited to video call, video conference, live video, live game, video surveillance, and the like.
  • the application scenario may be as shown in FIG. 1A.
  • the user uses the terminal 101 to perform a video call, a video conference, a live video, a live game, and the like.
  • The video encoding processing method provided by the example of the present application is applied to an application having a video encoding function, for example, an application such as QQ, WeChat, or Now Live, as shown by 102 in FIG. 1A.
  • After acquiring the coding state parameter and load information of the terminal in the current processing period and the first encoding parameter of the first encoder used in the current processing period, the application may determine the second encoding parameter according to the coding state parameter and load information of the terminal.
  • the first encoder is adjusted according to the second encoding parameter, or the second encoder is configured according to the second encoding parameter to perform encoding processing on the frame data in the next processing period;
  • The second encoder and the first encoder are encoders of different types. Therefore, by adjusting the encoding parameters and the encoder according to the network state and load information, when network packet loss is small and bandwidth is sufficient, a hardware encoder can be used to encode high-resolution video, improving video resolution; when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
  • The application scenario of the example of the present application may also be as shown in FIG. 1B, that is, the application 103 includes an encoding module 104, where the encoding module 104 includes a CODEC (COder-DECoder) control module 105. A CODEC is a program or device that supports video and audio compression (CO) and decompression (DEC); it can compress and encode an original video signal into a binary data file of a specific format and can decode that data file.
  • CODEC control module 105 adjusts the encoding parameters of the encoder or selects an appropriate encoder according to the network status and the load information, and encodes the video data to obtain the encoded output data.
  • the video encoding processing method provided by the present application is described in detail, and the method is applied to an application having a video encoding function for illustration.
  • FIG. 1C is a schematic flow chart of a video encoding processing method according to an illustrative example of the present application.
  • the video encoding processing method provided by the example of the present application may be performed by an application having a video encoding function to perform encoding processing on a video.
  • the application is set in the terminal, and the type of the terminal may be many, such as a mobile phone or a computer.
  • the processing cycle refers to an encoding processing cycle that is set in advance according to needs.
  • The length of the processing period can be set according to the number of frames of data, for example, one frame of data corresponds to one processing period; or it can be set according to time, for example, 1 second (s) or 2 s corresponds to one processing period, and so on; the present application does not restrict this.
  • the first coding parameter may include a code rate, a resolution, and/or a frame rate of the first encoder.
  • The code rate, also called bit rate, may be the amount of data produced by the encoder per second, in kbps.
  • For example, 800 kbps means that the encoder produces 800 kb of data per second.
  • The frame rate may be the number of frames displayed per second (Frames per Second, FPS).
  • The resolution may be the number of pixel points contained in a unit inch.
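To make the parameter sets discussed above concrete, the following is a minimal sketch (not part of the patent) of how the encoding parameters, the coding state parameters, and the load information could be represented in an implementation. All names (EncodingParams, CodingState, LoadInfo) are illustrative assumptions, not identifiers from the source.

```python
from dataclasses import dataclass

@dataclass
class EncodingParams:
    """Encoder configuration for one processing period."""
    bitrate_kbps: int        # e.g. 800 kbps: the encoder emits about 800 kb of data per second
    resolution: str          # a discrete level such as "low" / "medium" / "high"
    frame_rate_fps: int      # frames per second

@dataclass
class CodingState:
    """Coding state parameters averaged over the current processing period."""
    avg_loss: float              # average packet loss rate, 0.0 to 1.0
    avg_psnr_db: float           # average peak signal-to-noise ratio
    avg_send_bitrate_kbps: float
    avg_bandwidth_kbps: float

@dataclass
class LoadInfo:
    """Load information of the terminal."""
    remaining_battery_min: float   # estimated remaining battery life
    avg_cpu_usage: float           # average CPU occupancy during the period, 0.0 to 1.0
```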
  • the first encoder may be a hardware encoder or a software encoder.
  • The coding state parameters may include an average packet loss rate, an average peak signal-to-noise ratio (Peak Signal to Noise Ratio, PSNR), an average transmission bit rate, and an average network bandwidth.
  • the load information may include the remaining battery life of the terminal and the average CPU usage.
  • The average CPU usage is the occupancy of the central processing unit by the load during the current processing period.
  • Step S102 The application determines the second encoding parameter according to the encoding state parameter and the load information of the terminal.
  • the second encoding parameter includes an encoder's code rate, resolution, and/or frame rate.
  • The correspondence between the terminal's coding state parameters and load information and the encoding parameters may be preset, so that after acquiring the coding state parameter and load information of the terminal in the current processing period, the application may determine, according to the preset correspondence, the second encoding parameter corresponding to them.
  • For example, assume that the average network packet loss rate AvgLoss, the average CPU usage AvgCPU, the average coding efficiency AvgEfficiency (calculated from the average peak signal-to-noise ratio AvgPSNR and the average transmission bit rate AvgBr), and the average network bandwidth BW (Bandwidth) are each divided in advance into three levels: small, normal, and large.
  • the remaining life time T of the terminal is divided into three levels: short, normal and long.
  • the resolution of the coding parameters is divided into three levels: low, medium and high.
  • Assume the encoding parameter of the terminal in the current processing period is resolution level i (low). After the current processing period ends, if the coding state parameters of the current processing period are determined as: the average network packet loss rate AvgLoss is small, and the network bandwidth BW is sufficient, that is, it meets the minimum bandwidth requirement of resolution level i+1, the resolution level can be raised to i+1, and the resolution in the second encoding parameter is determined to be the level-(i+1) resolution. If the level-(i+1) resolution corresponds to medium or high resolution, an attempt can be made to switch to the hardware encoder for encoding in the next processing period.
  • If the coding state parameters of the current processing period are: the network packet loss rate AvgLoss is large, or the bandwidth BW falls below the minimum bandwidth requirement of level i, the resolution level can be lowered to i-1, and the resolution in the second encoding parameter is determined to be the level-(i-1) resolution. If the level-(i-1) resolution corresponds to the low resolution level, the software encoding mode can be used for encoding in the next processing period.
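A hedged sketch of the level up/down decision described above: if loss is small and the measured bandwidth reaches the minimum requirement of level i+1, raise the level; if loss is large or bandwidth falls below level i's minimum, lower it. The threshold values and the MIN_BW_FOR_LEVEL table are placeholders, not figures from the patent.

```python
# Hypothetical minimum-bandwidth table per resolution level (kbps); values are illustrative only.
MIN_BW_FOR_LEVEL = {0: 300, 1: 800, 2: 1500}   # 0 = low, 1 = medium, 2 = high
LOSS_SMALL, LOSS_LARGE = 0.02, 0.10            # assumed thresholds for "small" / "large" loss

def next_resolution_level(level: int, avg_loss: float, avg_bw_kbps: float) -> int:
    """Raise or lower the resolution level for the next processing period."""
    if (avg_loss <= LOSS_SMALL
            and level + 1 in MIN_BW_FOR_LEVEL
            and avg_bw_kbps >= MIN_BW_FOR_LEVEL[level + 1]):
        return level + 1            # bandwidth sufficient for level i+1: go up
    if avg_loss >= LOSS_LARGE or avg_bw_kbps < MIN_BW_FOR_LEVEL[level]:
        return max(level - 1, 0)    # loss large or bandwidth below level i: go down
    return level                    # otherwise keep the current level
```

A raised level that lands on medium or high resolution would then prompt an attempt to switch to the hardware encoder, while a drop to the low level would suggest software encoding, as the bullets above describe.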
  • the correspondence between the coding state parameter of the terminal and the load information and the coding parameter may be set according to the following principles.
  • The hardware encoder can only be configured with the IPPP reference structure shown in FIG. 1D, where "I" (Intra-Prediction) denotes an I frame: an intra-prediction frame that does not depend on other frames for decoding, serves as a random access entry point, and is the reference frame for decoding.
  • "P" denotes a P frame: a P frame (Predictive-Frame) is a forward prediction frame, and each P frame references the adjacent previous frame. In FIG. 1D, only P frames are shown, and Pi denotes the frame image at time i.
  • The software encoder, by contrast, can be configured with the Hierarchical P-frame Prediction (HPP) reference structure shown in FIG. 1E; in FIG. 1E, only P frames are shown, and Pi denotes the frame image at time i.
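To illustrate the difference between the two reference structures, the sketch below computes which earlier frame a P-frame at position i references: in IPPP each frame references the immediately preceding frame, while in an HPP structure a frame may reference a frame further back, determined by its temporal layer. The layering rule shown (a period-4 hierarchy) is an assumed, common arrangement, not the exact structure of FIG. 1D or FIG. 1E.

```python
def ippp_reference(i: int) -> int:
    """IPPP: every P-frame references the adjacent previous frame."""
    return i - 1

def hpp_reference(i: int, period: int = 4) -> int:
    """Assumed HPP layering with a hierarchy period of 4 frames: frames on lower
    layers reference frames further back, so losing a top-layer frame does not
    break the whole chain (less impact from burst packet loss)."""
    if i % period == 0:
        return i - period      # base layer references the previous base-layer frame
    if i % period == 2:
        return i - 2           # middle layer references the previous even frame
    return i - 1               # top layer references the adjacent previous frame

# Example: if frame 6 is lost, the IPPP chain corrupts every later frame until the next
# I frame, whereas in this HPP sketch only frame 7 references frame 6; frame 8 references
# frame 4, so decoding recovers.
```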
  • Video coding using a software encoder can reduce the impact of burst packet loss. Therefore, when the terminal's network packet loss rate is large, the software encoder should be used for video encoding as far as possible, and the encoding parameters corresponding to the terminal's coding state parameters and load information may be set according to the encoding parameters applicable to the software encoder.
  • For example, assuming the encoding parameters applicable to the software encoder are a low resolution level and a low code rate level, then when the terminal's network packet loss rate is large, the corresponding encoding parameters can be set to those applicable to the software encoder, that is, the low resolution level and the low code rate level.
  • Since the hardware encoder does not occupy CPU resources, it can better save the terminal's battery power. Therefore, when the average CPU usage of the terminal's load is large, the hardware encoder may be preferred for video encoding, and the encoding parameters corresponding to the terminal's coding state parameters and load information may be set according to the encoding parameters applicable to the hardware encoder.
  • For example, if in the current processing period the network packet loss rate AvgLoss is large, or the bandwidth BW falls below the minimum bandwidth requirement of level i, it may be determined that the resolution level needs to be lowered to i-1, and the level-(i-1) resolution corresponds to the low resolution level, that is, software encoding could be used for the next processing period.
  • However, if, according to the terminal's load information, the terminal's load AvgCPU is high, or the current remaining battery life T is short, switching to the software encoder would make the AvgCPU of the terminal's load even higher, so the hardware encoder should be used preferentially.
  • Further, after the encoder switches to the hardware encoder, if the average coding efficiency is found to drop by more than 20%, the encoder needs to switch back to the software encoder. In some examples, after the encoder has switched back to the software encoder, it may further be prohibited from switching to the hardware encoder again, to prevent the average coding efficiency from continuing to decrease.
  • From the above analysis, when the application determines the second encoding parameter according to the terminal's coding state parameters and load information, it also determines whether the encoder corresponding to the second encoding parameter is a software encoder or a hardware encoder.
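The bullets above combine several selection rules: prefer the software encoder when packet loss is high, prefer the hardware encoder when CPU usage is high or battery life is short, fall back to the software encoder if the hardware encoder's average coding efficiency drops by more than 20%, and optionally refuse to switch back afterwards. A hedged sketch of that policy is shown below; the class and threshold names are assumptions for illustration.

```python
class EncoderTypeSelector:
    """Keeps the small amount of state needed by the switching rules above."""

    def __init__(self):
        self.fell_back_to_software = False   # latch: once set, stay on software

    def select(self, loss_level: str, cpu_level: str, battery_level: str,
               current: str, hw_efficiency_drop: float) -> str:
        """Return 'hardware' or 'software' for the next processing period."""
        # Rule: after a hardware-to-software fallback, do not switch back to hardware.
        if self.fell_back_to_software:
            return "software"
        # Rule: if the hardware encoder lost more than 20% average efficiency, fall back.
        if current == "hardware" and hw_efficiency_drop > 0.20:
            self.fell_back_to_software = True
            return "software"
        # Rule: high CPU usage or short battery life favours the hardware encoder.
        if cpu_level == "large" or battery_level == "short":
            return "hardware"
        # Rule: large packet loss favours the software encoder (HPP reference structure).
        if loss_level == "large":
            return "software"
        return current
```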
  • Step S103: when determining that the second encoding parameter is different from the first encoding parameter, the application adjusts the first encoder according to the second encoding parameter, or configures the second encoder according to the second encoding parameter, so as to encode the frame data in the next processing period.
  • the second encoder and the first encoder belong to different types of encoders, and the second encoder may also be a hardware encoder or a software encoder.
  • The first encoder may be a software encoder and the second encoder a hardware encoder, or the first encoder may be a hardware encoder and the second encoder a software encoder; no restriction is imposed here.
  • The hardware encoder may be a hardware encoder module in a hardware chip, such as a Qualcomm chip, or the H.264 hardware encoder module in Apple's A10 chip.
  • the software encoder can be a piece of code contained in the application itself.
  • the code rate, resolution, and/or frame rate of the first encoder may be adjusted.
  • the code rate, resolution, and/or frame rate of the second encoder can be configured.
  • After the application determines the second encoding parameter, it may either adjust the first encoder according to the second encoding parameter and use the adjusted first encoder to encode the frame data in the next processing period, or configure the second encoder according to the second encoding parameter and use the second encoder to encode the frame data in the next processing period.
  • When determining that the second encoding parameter is different from the first encoding parameter, the application may determine, according to the load information of the terminal in the current processing period, whether to adjust the first encoder or to configure the second encoder, so as to encode the frame data in the next processing period.
  • Specifically, if it is determined from the load information of the terminal in the current processing period that the type of the target encoder to be used in the next processing period is the same as the type of the first encoder used in the current processing period, the first encoder may be adjusted according to the second encoding parameter.
  • If it is determined that the type of the target encoder to be used in the next processing period is different from the type of the first encoder used in the current processing period, the second encoder may be configured according to the second encoding parameter.
  • Correspondingly, before configuring the second encoder according to the second encoding parameter in step S103, the method further includes:
  • determining, according to the load information of the terminal in the current processing period, that the type of the target encoder is different from the type of the first encoder used in the current processing period.
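The fragments above describe the branch taken in step S103: compare the target encoder type for the next period with the type currently in use, then either adjust the current (first) encoder or configure the other-type (second) encoder with the second encoding parameter. A minimal sketch, with hypothetical helper names:

```python
def apply_second_params(current_encoder, other_encoder, target_type: str, second_params):
    """Adjust the current encoder in place, or configure the other-type encoder,
    depending on whether the target encoder type matches the current one."""
    if target_type == current_encoder.kind:
        # Same type as the encoder already in use: just re-tune it.
        current_encoder.adjust(bitrate=second_params.bitrate_kbps,
                               resolution=second_params.resolution,
                               frame_rate=second_params.frame_rate_fps)
        return current_encoder
    # Different type: bring up the second encoder with the new parameters.
    other_encoder.configure(bitrate=second_params.bitrate_kbps,
                            resolution=second_params.resolution,
                            frame_rate=second_params.frame_rate_fps)
    return other_encoder
```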
  • For example, assume that the average network packet loss rate AvgLoss, the average CPU usage AvgCPU, the average coding efficiency AvgEfficiency (calculated from the average peak signal-to-noise ratio AvgPSNR and the average transmission bit rate AvgBr), and the average network bandwidth BW are each divided in advance into three levels: small, normal, and large; the remaining battery life T of the terminal is divided into three levels: short, normal, and long; and the resolution in the encoding parameters is divided into three levels: low, medium, and high.
  • In specific use, suppose the terminal uses a software encoder in the current processing period and the corresponding encoding parameter is resolution level i (low).
  • After the current processing period ends, if the coding state parameters of the current processing period are determined as: the average network packet loss rate AvgLoss is small, and the network bandwidth BW is sufficient to meet the minimum bandwidth requirement of resolution level i+1, the resolution level can be raised to i+1, that is, the resolution in the encoding parameter determined according to the above coding state parameters is the level-(i+1) resolution.
  • Further, if AvgCPU is high in the current processing period, it can be determined that the encoder type to be used in the next processing period is a hardware encoder, so the hardware encoder can be configured according to the level-(i+1) resolution, and an attempt is then made to switch to the hardware encoder for encoding in the next processing period.
  • If AvgCPU is low in the current processing period, since the average coding efficiency of the software encoder is high, the software encoder can be adjusted according to the level-(i+1) resolution and then continue to be used for encoding in the next processing period.
  • In specific implementation, when the application adjusts the first encoder or configures the second encoder according to the second encoding parameter, it may adjust or configure the encoder in different ways according to the type of the encoder.
  • If the encoder is a hardware encoder, the application may send the second encoding parameter to the hardware encoder through the interface of the hardware encoder, so that the hardware encoder configures its encoding circuit according to the second encoding parameter.
  • If the encoder is a software encoder (code contained in the application itself), the application can determine the parameters of the calling function according to the second encoding parameter when the software encoder is called, thereby adjusting or configuring the software encoder.
  • When the hardware encoder is used to encode the frame data in the next processing period, the application may send the frame data in the next processing period to the hardware encoder through the interface of the hardware encoder, and the hardware encoder then encodes that frame data; after encoding is completed, the encoded code stream is sent back to the application through the interface, so that the application can send the received code stream to applications on other terminals.
  • When the software encoder is used to encode the frame data in the next processing period, the application can directly call its own code (i.e., the software encoder) to encode that frame data; after encoding is completed, the encoded code stream can be sent to applications on other terminals.
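The bullets above distinguish how frame data reaches a hardware encoder (through its interface, with the code stream returned through the same interface) from how it reaches a software encoder (a direct call into the application's own code). One way to keep the control logic identical for both is a thin wrapper such as the hypothetical sketch below; the method names are illustrative and do not correspond to any real chip SDK.

```python
def compress_frame(frame, **params):
    """Placeholder for the application's own compression routine (not real code)."""
    raise NotImplementedError

class HardwareEncoderProxy:
    """Wraps a platform hardware encoder exposed through some vendor interface."""
    kind = "hardware"

    def __init__(self, vendor_interface):
        self._iface = vendor_interface            # assumed vendor-provided handle

    def configure(self, **params):
        self._iface.set_parameters(params)        # parameters travel over the interface

    def encode(self, frame):
        self._iface.submit(frame)                 # frame data sent through the interface
        return self._iface.read_bitstream()       # encoded code stream returned to the app

class SoftwareEncoder:
    """The application's own encoding code, called directly as a function."""
    kind = "software"

    def configure(self, **params):
        self._params = params                     # becomes the call parameters

    def encode(self, frame):
        return compress_frame(frame, **self._params)
```

With either object, the application encodes the next period's frames the same way (`bitstream = encoder.encode(frame)`) and then sends the resulting code stream to the applications on the other terminals.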
  • In the video encoding processing method provided in the example of the present application, after acquiring the coding state parameter and load information of the terminal and the first encoding parameter of the first encoder used in the current processing period, the application may determine the second encoding parameter according to the coding state parameter and load information of the terminal, so that when it determines that the second encoding parameter is different from the first encoding parameter, it adjusts the first encoder according to the second encoding parameter, or configures the second encoder according to the second encoding parameter, so as to encode the frame data in the next processing period.
  • By adjusting the encoding parameters and the encoder according to the network state and load information, when network packet loss is small and bandwidth is sufficient, the hardware encoder is used to encode high-resolution video, improving video resolution; when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
  • the video encoding processing method provided by the present application is further described below with reference to FIG. 2 .
  • FIG. 2 is a schematic flowchart diagram of a video encoding processing method according to another exemplary embodiment of the present application.
  • the video encoding processing method includes the following steps:
  • Step 201 The application acquires an encoding state parameter of the terminal in the current processing period, load information, and a first encoding parameter of the first encoder used in the current processing period.
  • Step 202 The application determines the second encoding parameter according to the encoding state parameter and the load information of the terminal.
  • the second encoding parameter includes an encoder's code rate, resolution, and/or frame rate.
  • For the specific implementation process and principle of steps 201-202, reference may be made to the description of steps S101-S102 in the above example, and details are not repeated here.
  • Step 203 The application determines whether the second encoding parameter is the same as the first encoding parameter. If yes, step 204 is performed; otherwise, step 205 is performed.
  • Step 204 The application continues to encode the frame data in the next processing cycle by the first encoder.
  • the second encoding parameter may be compared with the first encoding parameter of the first encoder used in the current processing period. If the second encoding parameter is the same as the first encoding parameter, the first encoder may not be adjusted, and the first encoder may continue to encode the frame data in the next processing cycle.
  • Specifically, if the first encoder is a hardware encoder, the application may continue to send the frame data in the next processing period to the first encoder through the interface of the first encoder, and the first encoder encodes that frame data; after encoding is completed, the code stream encoded by the first encoder is received through the interface, so that the application can send the received code stream to applications on other terminals.
  • If the first encoder is a software encoder, the application can continue to encode the frame data in the next processing period by using its own code (i.e., the first encoder), and after encoding is completed, the encoded code stream can be sent to applications on other terminals.
  • Step 205 The application determines, according to the load information of the terminal in the current processing period, whether the type of the target encoder is the same as the type of the first encoder used in the current processing period. If yes, step 206 is performed; otherwise, step 208 is performed.
  • step 206 the application adjusts the code rate, resolution, and/or frame rate of the first encoder.
  • Step 207 The application performs encoding processing on the frame data in the next processing cycle by using the adjusted first encoder.
  • After the application determines the second encoding parameter, it may either adjust the first encoder according to the second encoding parameter and use the adjusted first encoder to encode the frame data in the next processing period, or configure the second encoder according to the second encoding parameter and use the second encoder to encode that frame data.
  • Therefore, whether the type of the target encoder is the same as the type of the first encoder used in the current processing period may be determined according to the load information of the terminal in the current processing period, thereby determining whether to adjust the first encoder or to configure the second encoder to encode the frame data in the next processing period.
  • If the application determines, according to the load information of the terminal in the current processing period, that the type of the target encoder is the same as the type of the first encoder used in the current processing period, the code rate, resolution, and/or frame rate of the first encoder are adjusted, and the frame data in the next processing period is then encoded by the adjusted first encoder.
  • Specifically, if the first encoder is a hardware encoder, the application may adjust the code rate, resolution, and/or frame rate of the first encoder, then send the frame data in the next processing period to the first encoder through the interface of the first encoder; after the first encoder finishes encoding, the encoded code stream is received through the interface, so that the application can send the received code stream to applications on other terminals.
  • If the first encoder is a software encoder, the application may call the first encoder according to the second encoding parameter (i.e., call its own code) to encode the frame data in the next processing period; after encoding is completed, the encoded code stream can be sent to applications on other terminals.
  • Step 208 the application configures a code rate, a resolution, and/or a frame rate of the second encoder.
  • the second encoder and the first encoder belong to different types of encoders.
  • step 209 the application performs encoding processing on the frame data in the next processing cycle by using the second encoder.
  • Specifically, the code rate, resolution, and/or frame rate of the second encoder may be configured, and the configured second encoder is then used to encode the frame data in the next processing period.
  • the application may send the frame data in the next processing period to the second encoder through the interface of the second encoder, and after the second encoder is encoded, Then, the second encoder encoded code stream is received through the interface, so that the application can send the received code stream to other terminals.
  • If the second encoder is a software encoder, the application may call its own code (i.e., the second encoder) according to the second encoding parameter to encode the frame data in the next processing period; after encoding is completed, the encoded code stream can be sent to applications on other terminals.
  • step 210 the application determines the first frame data in the next processing period as an intra prediction frame.
  • It can be understood that, because the encoder changes, the application needs to determine the first frame data in the next processing period as an intra prediction frame, that is, an I frame, so that the other frame data in the next processing period can be video-encoded with reference to this first frame data.
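Because the encoder changed between periods, the first frame of the new period has no usable reference in the new encoder, so it must be coded as an I frame. A hedged sketch of how the control loop could request this (the keyframe flag name is an assumption, independent of the other sketches):

```python
def encode_next_period(encoder, frames, encoder_switched: bool):
    """Encode one processing period; force an intra (I) frame first if the encoder changed."""
    bitstreams = []
    for index, frame in enumerate(frames):
        force_keyframe = encoder_switched and index == 0   # first frame after a switch: I frame
        bitstreams.append(encoder.encode(frame, force_keyframe=force_keyframe))
    return bitstreams
```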
  • In the video encoding processing method provided in the example of the present application, after acquiring the coding state parameter and load information of the terminal in the current processing period and the first encoding parameter of the first encoder used in the current processing period, the application may determine the second encoding parameter according to the coding state parameter and load information of the terminal. If the second encoding parameter is the same as the first encoding parameter, the first encoder continues to encode the frame data in the next processing period. If the second encoding parameter is different from the first encoding parameter and it is determined that the first encoder should be adjusted, the code rate, resolution, and/or frame rate of the first encoder are adjusted, and the adjusted first encoder is used to encode the frame data in the next processing period.
  • If the second encoding parameter is different from the first encoding parameter and it is determined that the second encoder should be configured, the code rate, resolution, and/or frame rate of the second encoder are configured, and the second encoder is used to encode the frame data in the next processing period; finally, the first frame data in the next processing period is determined as an intra prediction frame.
  • Thus, when network packet loss is small and bandwidth is sufficient, the hardware encoder is used to encode high-resolution video, improving video resolution; when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
  • The above analysis shows that, after acquiring the coding state parameter and load information of the terminal and the first encoding parameter of the first encoder used in the current processing period, the application may determine the second encoding parameter according to the coding state parameter and load information of the terminal, so that when determining that the second encoding parameter is different from the first encoding parameter, it adjusts the first encoder according to the second encoding parameter, or configures the second encoder according to the second encoding parameter, to encode the frame data in the next processing period.
  • However, when the configured second encoder is used to encode the frame data in the next processing period, the encoding effect may not be as good as that of the first encoder in the previous processing period. This situation is described in detail below.
  • FIG. 3 is a flow chart showing a video encoding processing method according to another illustrative example.
  • the video encoding processing method may include the following steps:
  • Step 301 The application acquires an encoding state parameter, a load information, and a first encoding parameter of the first encoder used in the current processing period in the current processing period.
  • Step 302 The application determines the second encoding parameter according to the encoding state parameter and the load information of the terminal.
  • Step 303 The application determines whether the second encoding parameter is the same as the first encoding parameter. If yes, step 304 is performed; otherwise, step 305 is performed.
  • step 304 the application continues to encode the frame data in the next processing cycle by the first encoder.
  • Step 305 The application determines a first encoding efficiency of the first encoder in the current processing period.
  • the first coding efficiency may be determined according to an average peak signal to noise ratio and an average code rate of the first encoder in the current processing period.
  • When the average PSNR is fixed, the larger the average code rate, the lower the coding efficiency.
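The patent text does not give an explicit formula, but it states that coding efficiency is determined from the average PSNR and the average code rate, and that at a fixed average PSNR a larger code rate means lower efficiency. A simple ratio consistent with that description (an assumption, not the patent's definition) is sketched below:

```python
def coding_efficiency(avg_psnr_db: float, avg_bitrate_kbps: float) -> float:
    """Quality delivered per unit of bit rate: higher PSNR at the same bit rate,
    or the same PSNR at a lower bit rate, both yield a larger value."""
    if avg_bitrate_kbps <= 0:
        raise ValueError("average bit rate must be positive")
    return avg_psnr_db / avg_bitrate_kbps

# e.g. 38 dB at 800 kbps -> 0.0475; 38 dB at 1200 kbps -> 0.0317 (lower efficiency)
```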
  • Step 306 the application configures the second encoder according to the second encoding parameter.
  • the second encoder and the first encoder belong to different types of encoders.
  • Step 307 The application performs encoding processing on the frame data in the next processing cycle by using the configured second encoder.
  • the correspondence between the coding state parameter of the terminal and the load information and the coding parameter may also be set according to the following principles.
  • The coding efficiency of the software encoder and the hardware encoder may differ: the coding efficiency of the software encoder is relatively good and stable, while the coding efficiency of the hardware encoder is relatively scattered. Therefore, when the coding efficiency of the currently used hardware encoder is too low, the software encoder can be used for video encoding, and the encoding parameters corresponding to the terminal's coding state parameters and load information may be set according to the encoding parameters applicable to the software encoder.
  • For example, when the coding efficiency of the currently used hardware encoder is too low, the corresponding encoding parameters may be set to those applicable to the software encoder, that is, the low resolution level and the low code rate level, and encoding can switch to the software encoder.
  • Step 308 the application determines a second encoding efficiency of the second encoder in the next processing cycle.
  • the second coding efficiency may be determined according to an average PSNR and an average code rate of the second encoder in the next processing cycle.
  • When the average PSNR is fixed, the larger the average code rate, the lower the coding efficiency.
  • Step 309: when the second coding efficiency is less than the first coding efficiency and the difference is greater than the threshold, the application determines the third encoding parameter according to the coding state parameter and the load information of the terminal in the next processing period.
  • Step 310 The application configures the first encoder according to the third encoding parameter to perform encoding processing on the frame picture in the latter processing period adjacent to the next processing period.
  • the threshold value may be set as needed, for example, may be set to 20%, 30%, etc. of the first coding efficiency.
  • If the second coding efficiency is less than the first coding efficiency and the difference is greater than the threshold, it means that when the configured second encoder is used to encode the frame data in the next processing period, the coding effect is not as good as that achieved with the first encoder in the previous processing period. Therefore, in the example of the present application, the application can switch back to the original first encoder to encode the frame data in the subsequent processing period adjacent to the next processing period.
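Steps 305-310 compare the efficiency of the newly configured second encoder against the first encoder's efficiency in the previous period and switch back when the drop exceeds a threshold (the text suggests, for example, 20% or 30% of the first efficiency). A hedged sketch of that check:

```python
def should_switch_back(first_eff: float, second_eff: float, threshold_ratio: float = 0.2) -> bool:
    """True if the second encoder is worse than the first by more than the threshold,
    in which case the first encoder is reconfigured (with the third encoding parameter)
    for the processing period that follows."""
    return second_eff < first_eff and (first_eff - second_eff) > threshold_ratio * first_eff
```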
  • It can be understood that, before the first encoder is used to encode the frame data in the subsequent processing period adjacent to the next processing period, it is also necessary to reconfigure the encoding parameters of the first encoder.
  • That is, the third encoding parameter may be determined according to the coding state parameters and load information of the terminal in the next processing period, so that the first encoder is configured according to the third encoding parameter to encode the frame images in the subsequent processing period adjacent to the next processing period.
  • the method for determining the third coding parameter is the same as the method for determining the first coding parameter or the second coding parameter, and details are not described herein again.
  • In other examples, the application may also continue to determine, according to the coding state parameter and load information of the terminal in the next processing period, the third encoding parameter for the subsequent processing period adjacent to the next processing period, and configure the second encoder according to the third encoding parameter to encode the frame images in that subsequent processing period.
  • However, compared with having the second encoder encode the frame data in the subsequent processing period adjacent to the next processing period, continuing to encode that frame data with the original first encoder will yield a better coding effect. Therefore, in the example of the present application, if the coding state parameter and load information of the terminal are unchanged, the application may preferentially use the first encoder to encode the frame data in the subsequent processing period adjacent to the next processing period.
  • the method may further include:
  • In the video encoding processing method provided in the example of the present application, after acquiring the coding state parameter and load information of the terminal in the current processing period and the first encoding parameter of the first encoder used in the current processing period, the application may determine the second encoding parameter according to the coding state parameter and load information of the terminal, and judge whether the second encoding parameter is the same as the first encoding parameter. If so, the first encoder continues to encode the frame data in the next processing period; otherwise, the application determines the first coding efficiency of the first encoder in the current processing period, configures the second encoder according to the second encoding parameter, encodes the frame data in the next processing period with the second encoder, and determines the second coding efficiency of the second encoder in the next processing period. When it is determined that the second coding efficiency is less than the first coding efficiency and the difference is greater than the threshold, the third encoding parameter is determined according to the coding state parameter and load information of the terminal in the next processing period, and the first encoder is configured according to the third encoding parameter to encode the frame images in the subsequent processing period adjacent to the next processing period.
  • In some examples, after the application uses the second encoder to encode the frame data in the next processing period and obtains the encoded code stream, the application may divide the encoded code stream into N Real-time Transport Protocol (RTP) packets and perform Forward Error Correction (FEC) encoding to generate M redundant packets, where M may be less than or equal to N. The application can then add a packet header identifier to the M+N packets, and package and send the encoded stream corresponding to the frame data in the next processing period according to the packet header identifier.
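The packetization step above splits the code stream of one period into N RTP packets, derives M redundant FEC packets (M less than or equal to N), and marks all M+N packets with a header identifier (which, per a later passage, can reflect the encoder type). The sketch below only illustrates the bookkeeping; the XOR-based redundancy and the one-byte header are stand-ins, not the patent's actual FEC scheme or RTP profile.

```python
def packetize_with_fec(bitstream: bytes, n: int, m: int, header_id: int) -> list:
    """Split a code stream into n payloads, add m redundant payloads (m <= n),
    and prefix every packet with a one-byte header identifier."""
    assert 0 < m <= n
    chunk = -(-len(bitstream) // n)                      # ceiling division
    payloads = [bitstream[i * chunk:(i + 1) * chunk].ljust(chunk, b"\0") for i in range(n)]

    redundant = []
    for group in range(m):                               # simple XOR parity over packet groups
        parity = bytearray(chunk)
        for payload in payloads[group::m]:
            parity = bytearray(a ^ b for a, b in zip(parity, payload))
        redundant.append(bytes(parity))

    return [bytes([header_id]) + p for p in payloads + redundant]   # M + N marked packets
```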
  • The performance of each encoder may include each encoder's packet-loss resistance, coding efficiency, whether it occupies CPU resources, and the like.
  • encoding the initial frame data with the third encoder may include: applying the third encoder to encode the initial frame data at the lowest resolution.
  • the frame data may be subjected to preprocessing such as video format conversion, encoding size adjustment, video enhancement processing, and video denoising processing.
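At start-up (when the video encoding instruction is first received), the passages above describe choosing a third encoder, a software encoder, from the currently available encoder list based on per-encoder performance, starting at the lowest resolution, and preprocessing each frame before it is encoded. A hedged sketch of that start-up path, with all helper names assumed:

```python
def preprocess(frame):
    """Stand-in for format conversion, encoding size adjustment, enhancement, denoising."""
    return frame

def start_encoding(available_encoders, first_frames):
    """Pick a software encoder for the initial frames and encode them at the lowest resolution."""
    candidates = [e for e in available_encoders if e.kind == "software"]
    # 'performance_score' stands for the list's per-encoder record of packet-loss
    # resistance, coding efficiency and CPU usage; the scoring is illustrative.
    third_encoder = max(candidates, key=lambda e: e.performance_score)
    third_encoder.configure(resolution="lowest")

    bitstreams = []
    for frame in first_frames:
        bitstreams.append(third_encoder.encode(preprocess(frame)))
    return bitstreams
```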
  • In the video encoding processing method provided in the example of the present application, after acquiring the coding state parameter and load information of the terminal in the current processing period and the first encoding parameter of the first encoder used in the current processing period, the application may determine the second encoding parameter according to the coding state parameter and load information of the terminal, and then judge whether the second encoding parameter is the same as the first encoding parameter. If so, the first encoder continues to encode the frame data in the next processing period; if not, the second encoder is selected from the current available encoder list according to the second encoding parameter, and the second encoder is configured to encode the frame data in the next processing period.
  • the video encoding processing apparatus may include:
  • the obtaining module 51 is configured to obtain an encoding state parameter of the terminal in the current processing period, load information, and a first encoding parameter of the first encoder used in the current processing period;
  • a fifth determining module configured to determine, when the encoding state parameter and the load information of the terminal are unchanged, that the priority of the first encoder is higher than the priority of the second encoder.
  • the device further includes:
  • the device further includes:
  • the load information includes the remaining battery life of the terminal and the average CPU usage.
  • the device further includes:
  • a second selecting module configured to: when the video encoding instruction is obtained, select, from the current available encoder list according to the performance of each encoder in the list, a third encoder to encode the initial frame data, where the third encoder is a software encoder.
  • the third encoder is controlled to encode the initial frame data at a lowest resolution.
  • a tenth determining module configured to determine, according to an encoder type that encodes frame data in the next processing period, a header identifier of a real-time transport protocol packet corresponding to the frame data in the next processing period;
  • Thereby, when network packet loss is small and bandwidth is sufficient, the hardware encoder is used to encode high-resolution video, improving video resolution; when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
  • an application having a video encoding function comprising the video encoding processing apparatus of the second aspect.
  • a computing device is also provided.
  • FIG. 6 is a structural block diagram of a computing device (a terminal or other physical device that performs computational processing), according to an illustrative example.
  • the computing device includes:
  • the video encoding processing method includes:
  • the second encoder and the first encoder belong to different types of encoders.
  • The terminal may obtain the coding state parameter and load information of the terminal and the first encoding parameter of the first encoder used in the current processing period, and may determine the second encoding parameter according to the coding state parameter and load information of the terminal, so that when determining that the second encoding parameter is different from the first encoding parameter, it adjusts the first encoder according to the second encoding parameter, or configures the second encoder according to the second encoding parameter, to encode the frame data in the next processing period.
  • By adjusting the encoding parameters and the encoder according to the network state and load information, when network packet loss is small and bandwidth is sufficient, the hardware encoder is used to encode high-resolution video, improving video resolution; when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements a video encoding processing method as described in the above examples.
  • the video encoding processing method includes:
  • the second encoder and the first encoder belong to different types of encoders.
  • The computer-readable storage medium provided by the example of the present application may be disposed in a device capable of video encoding; by performing the video encoding processing method stored thereon, the encoding parameters and the encoder can be adjusted according to the network state and load information, so that when network packet loss is small and bandwidth is sufficient, the hardware encoder is used to encode high-resolution video, improving video resolution, and when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
  • A computer program product is also provided; when the instructions in the computer program product are executed by a processor, the video encoding processing method described in the above examples is performed.
  • the video encoding processing method includes:
  • The computer program product provided by the example of the present application can be written into a device capable of video encoding. By executing the program corresponding to the video encoding processing method, the encoding parameters and the encoder can be adjusted according to the network state and load information, so that when network packet loss is small and bandwidth is sufficient, the hardware encoder is used to encode high-resolution video, improving video resolution; when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” and “second” may include one or more of the features either explicitly or implicitly.
  • The meaning of "a plurality" is two or more, unless otherwise specifically defined.
  • The description of the terms "one instance", "an example", "example", "specific example", "some examples", and the like means that the specific features, structures, or characteristics described in connection with the instance or example are included in at least one instance or example of the present application.
  • the schematic representation of the above terms is not necessarily directed to the same examples or examples.
  • the particular features or characteristics described may be combined in a suitable manner in any one or more examples or examples.
  • different examples or examples described in the specification, as well as features of different examples or examples may be combined and combined.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, it can be implemented with any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
  • each functional unit in each example of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While the examples of the present application have been shown and described above, it should be understood that the foregoing examples are illustrative and are not to be construed as limiting the present application; a person of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above examples within the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application relates to a video encoding processing method and apparatus, and an application having a video encoding function. The method includes: acquiring a coding state parameter and load information of a terminal in a current processing period, and a first encoding parameter of a first encoder used in the current processing period; determining a second encoding parameter according to the coding state parameter and load information of the terminal; and when it is determined that the second encoding parameter is different from the first encoding parameter, adjusting the first encoder according to the second encoding parameter, or configuring a second encoder according to the second encoding parameter, so as to encode frame data in a next processing period.

Description

Video encoding processing method and apparatus, and application having video encoding function
This application claims priority to Chinese Patent Application No. 201711371988.2, filed with the Chinese Patent Office on December 19, 2017 and entitled "Video encoding processing method and apparatus, and application having video encoding function", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video encoding processing method and apparatus, and an application having a video encoding function.
Background
With the increase of bandwidth and the development of the mobile Internet, people's pursuit of high-definition and ultra-high-definition video experience keeps growing. To reduce the bit rate of video, whose resolution keeps increasing, to a level the network can carry, the video needs to be compressed and encoded.
Technical Content
An example of the present application provides a video encoding processing method applied to a computing device. The method includes: acquiring a coding state parameter and load information in a current processing period, and a first encoding parameter of a first encoder used in the current processing period; determining a second encoding parameter according to the coding state parameter and the load information; and when it is determined that the second encoding parameter is different from the first encoding parameter, adjusting the first encoder according to the second encoding parameter, or configuring a second encoder according to the second encoding parameter, so as to encode frame data in a next processing period, where the second encoder and the first encoder are encoders of different types.
An example of the present application provides a video encoding processing apparatus. The apparatus includes: an obtaining module configured to acquire a coding state parameter and load information in a current processing period, and a first encoding parameter of a first encoder used in the current processing period; a first determining module configured to determine a second encoding parameter according to the coding state parameter and the load information; and a first processing module configured to, when it is determined that the second encoding parameter is different from the first encoding parameter, adjust the first encoder according to the second encoding parameter, or configure a second encoder according to the second encoding parameter, so as to encode frame data in a next processing period, where the second encoder and the first encoder are encoders of different types.
An example of the present application provides an application having a video encoding function, including the video encoding processing apparatus described above.
An example of the present application provides a terminal, including:
a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the video encoding processing method described above.
An example of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the video encoding processing method described above.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present application.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they show examples consistent with the present application and, together with the specification, serve to explain the principles of the present application.
FIG. 1A is a schematic structural diagram of a system involved in an example of the present application;
FIG. 1B is a schematic structural diagram of a system involved in an example of the present application;
FIG. 1C is a schematic flowchart of a video encoding processing method according to an exemplary example of the present application;
FIG. 1D is a schematic diagram of an IPPP reference structure according to an exemplary example of the present application;
FIG. 1E is a schematic diagram of an HPP reference structure according to an exemplary example of the present application;
FIG. 2 is a schematic flowchart of a video encoding processing method according to an exemplary example of the present application;
FIG. 3 is a schematic flowchart of a video encoding processing method according to an exemplary example of the present application;
FIG. 4 is a schematic flowchart of a video encoding processing method according to an exemplary example of the present application;
FIG. 5 is a structural block diagram of a video encoding processing apparatus according to an exemplary example of the present application;
FIG. 6 is a structural block diagram of a computing device according to an exemplary example of the present application.
The above drawings show explicit examples of the present application, which are described in more detail below. These drawings and textual descriptions are not intended to limit the scope of the concept of the present application in any way, but rather to explain the concept of the present application to those skilled in the art by reference to specific examples.
Implementations
Exemplary examples are described in detail here, and instances of them are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
In some examples, because high-resolution video has high encoding complexity, when a software scheme is used for video encoding, CPU occupancy is high, power consumption is large, the battery drains quickly, and the real-time performance of encoding is limited by the encoding complexity, so the real-time encoding requirements of high-definition video cannot be met; when a hardware scheme is used for video encoding, although CPU occupancy is low and power consumption is small, customizability is poor, video characteristics suitable for network transmission cannot be selected flexibly, and packet-loss resistance is weak. To address these problems, an example of the present application proposes a video encoding processing method.
The video encoding processing method provided by the example of the present application can be applied to scenarios in which video encoding is required, including but not limited to application scenarios such as video calls, video conferences, live video streaming, live game streaming, and video surveillance. Such an application scenario may be as shown in FIG. 1A, where a user uses a terminal 101 for a video call, video conference, live video streaming, live game streaming, and the like.
In some examples, the video encoding processing method provided by the example of the present application is applied to an application having a video encoding function, for example, an application such as QQ, WeChat, or Now Live, as shown by 102 in FIG. 1A. After acquiring the coding state parameter and load information of the terminal in the current processing period and the first encoding parameter of the first encoder used in the current processing period, the application may determine the second encoding parameter according to the coding state parameter and load information of the terminal, so that, when it is determined that the second encoding parameter is different from the first encoding parameter, the first encoder is adjusted according to the second encoding parameter, or the second encoder is configured according to the second encoding parameter, so as to encode the frame data in the next processing period, where the second encoder and the first encoder are encoders of different types. Thus, by adjusting the encoding parameters and the encoder according to the network state and load information, when network packet loss is small and bandwidth is sufficient, a hardware encoder is used to encode high-resolution video, improving video resolution; when network packet loss is large, software encoding is used for compression, reducing video stutter, improving the flexibility of video encoding, and improving the user experience.
In some examples, the application scenario of the example of the present application may also be as shown in FIG. 1B, that is, an application 103 includes an encoding module 104, where the encoding module 104 includes a CODEC (COder-DECoder) control module 105. A CODEC is a program or device that supports video and audio compression (CO) and decompression (DEC); it can compress and encode an original video signal into a binary data file of a specific format and can decode that data file. After the application 103 receives the video input, the CODEC control module 105 adjusts the encoding parameters of the encoder or selects an appropriate encoder according to the network state and load information, and encodes the video data to obtain the encoded output data.
The video encoding processing method and apparatus and the application having a video encoding function provided by the present application are described in detail below with reference to the accompanying drawings.
The video encoding processing method provided by the present application is described in detail first, taking as an example the case where the method is applied to an application having a video encoding function.
FIG. 1C is a schematic flowchart of a video encoding processing method according to an exemplary example of the present application.
As shown in FIG. 1C, the video encoding processing method includes the following steps:
Step S101: the application acquires the coding state parameter and load information of the terminal in the current processing period, and the first encoding parameter of the first encoder used in the current processing period.
Specifically, the video encoding processing method provided by the example of the present application may be performed by an application having a video encoding function, to encode video. The application is installed in a terminal, and the terminal may be of many types, for example, a mobile phone or a computer.
The processing period refers to an encoding processing period set in advance as needed. The length of the processing period may be set according to the number of frames of data, for example, one frame of data corresponds to one processing period; or it may be set according to time, for example, 1 second (s) or 2 s corresponds to one processing period, and so on; the present application does not restrict this.
The first encoding parameter may include the code rate, resolution, and/or frame rate of the first encoder. The code rate, also called bit rate, may be the amount of data produced by the encoder per second, in kbps; for example, 800 kbps means the encoder produces 800 kb of data per second. The frame rate may be the number of frames displayed per second (Frames per Second, FPS). The resolution may be the number of pixel points contained in a unit inch.
The first encoder may be a hardware encoder or a software encoder.
The coding state parameters may include the average packet loss rate, the average peak signal-to-noise ratio (Peak Signal to Noise Ratio, PSNR), the average transmission bit rate, and the average network bandwidth.
The load information may include the remaining battery life of the terminal and the average central processing unit (CPU) usage.
The average CPU usage refers to the occupancy of the central processing unit by the load during the current processing period.
Step S102: the application determines the second encoding parameter according to the coding state parameter and load information of the terminal.
The second encoding parameter includes the code rate, resolution, and/or frame rate of the encoder.
Specifically, the correspondence between the terminal's coding state parameters and load information and the encoding parameters may be set in advance, so that after acquiring the coding state parameter and load information of the terminal in the current processing period, the application can determine, according to the preset correspondence, the second encoding parameter corresponding to the terminal's coding state parameter and load information.
For example, assume that the average network packet loss rate AvgLoss, the average CPU usage AvgCPU, the average coding efficiency AvgEfficiency (calculated from the average peak signal-to-noise ratio AvgPSNR and the average transmission bit rate AvgBr), and the average network bandwidth BW (Bandwidth) are each divided in advance into three levels: small, normal, and large; the remaining battery life T of the terminal is divided into three levels: short, normal, and long; and the resolution in the encoding parameters is divided into three levels: low, medium, and high.
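The preset correspondence works on discrete levels rather than raw measurements, so an implementation first has to map each averaged quantity onto its level. The sketch below shows one way such quantization could be done; the cut-off values are placeholders chosen for illustration and do not come from the patent.

```python
def to_level(value: float, low_cut: float, high_cut: float,
             labels=("small", "normal", "large")) -> str:
    """Map a measured average onto one of three preset levels."""
    if value < low_cut:
        return labels[0]
    if value < high_cut:
        return labels[1]
    return labels[2]

# Example with illustrative measurements and thresholds:
loss_level = to_level(0.01, 0.02, 0.10)                               # -> "small"
cpu_level = to_level(0.82, 0.40, 0.75)                                # -> "large"
battery_level = to_level(25, 30, 120, ("short", "normal", "long"))    # -> "short"
```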
假设终端当前处理周期的编码参数为:分辨率档次为i(低),当前处理周期结束后,确定当前处理周期的编码状态参数为:网络平均丢包率AvgLoss小,且网络带宽BW充足,即达到i+1分辨率档次的最低带宽要求时,则可上调分辨率档次到i+1,即可以确定第二编码参数中的分辨率为:i+1档分辨率。若i+1档分辨率对应中分辨率或者高分辨率,则可以尝试切换到硬件编码器进行下一处理周期的编码。
而当前处理周期的编码状态参数为:网络丢包率AvgLoss较大,或带宽BW降到低于i档最低带宽要求时,则可下调分辨率档次到i-1,即可以确定第二编码参数中的分辨率为:i-1档分辨率。若i-1档分辨率对应低分辨率档,则可以使用软件编码方式进行下一处理周期的编码。
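按照上述示例的分档逻辑，可以用如下示意代码表示"丢包小且带宽充足则上调一档、丢包较大或带宽不足则下调一档"的判断过程，其中的档次划分、丢包率阈值与各档最低带宽数值均为假设，仅用于说明思路：

```python
# 各分辨率档次的最低带宽要求（kbps），数值仅为假设：0为低档、1为中档、2为高档
MIN_BW_KBPS = {0: 300, 1: 800, 2: 2000}

def adjust_resolution_level(level, avg_loss, bandwidth_kbps,
                            loss_low=0.02, loss_high=0.10):
    """根据平均丢包率AvgLoss与网络带宽BW，上调或下调分辨率档次（示意逻辑）。"""
    if avg_loss <= loss_low and level + 1 in MIN_BW_KBPS \
            and bandwidth_kbps >= MIN_BW_KBPS[level + 1]:
        return level + 1                    # 丢包小且带宽充足：上调一档
    if avg_loss >= loss_high or bandwidth_kbps < MIN_BW_KBPS[level]:
        return max(level - 1, 0)            # 丢包较大或带宽不足：下调一档
    return level                            # 其余情况保持当前档次

if __name__ == "__main__":
    print(adjust_resolution_level(0, avg_loss=0.01, bandwidth_kbps=1000))  # -> 1
    print(adjust_resolution_level(1, avg_loss=0.15, bandwidth_kbps=1000))  # -> 0
```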
需要说明的是,在本申请实例中,终端的编码状态参数及负载信息与编码参数的对应关系,可以根据以下原则设置。
原则一
由于硬件编码器只能配置如图1D所示的IPPP参考结构，其中，“I”(Intra-Prediction)表示I帧，I帧是帧内预测帧，解码时不依赖其它帧，是随机存取的入点，同时是解码的基准帧。“P”表示P帧，P帧(Predictive-Frame)是前向预测帧，P帧依次参考相邻的上一帧。在图1D中，仅示有P帧，Pi表示i时刻的帧图像。而软件编码器可以配置如图1E所示的分层P帧预测(Hierarchical P-frame Prediction,简称HPP)参考结构，其中，在图1E中，仅示有P帧，Pi表示i时刻的帧图像。使用软件编码器进行视频编码可以降低突发丢包的影响，因此，在终端的网络丢包率大时，可以尽量使用软件编码器进行视频编码。从而在终端的网络丢包率大时，可以根据软件编码器适用的编码参数，设置与终端的编码状态参数及负载信息对应的编码参数。
比如,假设软件编码器适用的编码参数为低分辨率等级、低码率等级,则在终端的网络丢包率大时,可以设置对应的编码参数为软件编码器适用的编码参数即低分辨率等级、低码率等级。
原则二
由于硬件编码器不占用CPU资源，可以更好地节省终端的电量，因此在终端的负载的平均CPU占用率较大时，可以优先考虑使用硬件编码器进行视频编码。从而在终端的负载的平均CPU占用率较大时，可以根据硬件编码器对应的编码参数，设置与终端的编码状态参数及负载信息对应的编码参数。
举例来说，如果当前处理周期内网络丢包率AvgLoss较大，或带宽BW降到低于i档最低带宽要求时，则可确定需要下调分辨率档次到i-1。若i-1档分辨率对应低分辨率档，则可以使用软件编码方式进行下一处理周期的编码。
但是此时,根据终端的负载信息,确定终端负载AvgCPU占用高,或当前剩余续航时间T短,此时若切换至软件编码器,会使得终端负载的AvgCPU占用更高,因此需优先使用硬件编码器。
进一步的,编码器切换到硬件编码器后,如发现平均编码效率降低20%以上,则需切回到软件编码器。
在一些实例中,在编码器切回到软件编码器之后可以进一步禁止编码器再切到硬件编码器,以防止平均编码效率持续降低。
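结合原则一、原则二及上述切回逻辑，可以用如下示意代码概括编码器类型的选择与回退判断。其中的函数名、CPU占用率阈值、续航阈值均为示例性假设，20%的效率下降比例取自前文描述：

```python
def choose_encoder_type(avg_cpu, remaining_min, resolution_level,
                        hw_banned=False, cpu_high=0.7, battery_short_min=30):
    """返回'hardware'或'software'，示意原则一、原则二的取舍。"""
    if hw_banned:
        return "software"              # 曾因效率过低切回软件后，禁止再切到硬件编码器
    if avg_cpu >= cpu_high or remaining_min <= battery_short_min:
        return "hardware"              # 负载AvgCPU占用高或剩余续航时间短：优先硬件编码器
    if resolution_level >= 1:
        return "hardware"              # 中/高分辨率档次：可尝试硬件编码器
    return "software"                  # 低分辨率档次：使用软件编码器

def should_switch_back_to_software(eff_before, eff_after, drop_ratio=0.2):
    """切到硬件编码器后，平均编码效率降低20%以上则需切回软件编码器。"""
    return eff_after < eff_before * (1 - drop_ratio)
```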
通过上述分析可知,应用在确定终端的编码状态参数及负载信息后,根据终端的编码状态参数及负载信息确定第二编码参数的同时,也就确定了第二编码参数对应的编码器为软件编码器还是硬件编码器。
步骤S103，应用在确定第二编码参数与第一编码参数不同时，根据第二编码参数调整第一编码器，或者根据第二编码参数配置第二编码器，以对下一处理周期内的帧数据进行编码处理。
其中,第二编码器与第一编码器分属不同类型编码器,且第二编码器也可以为硬件编码器或软件编码器。
在本申请实例中,第一编码器可以为软件编码器、第二编码器可以为硬件编码器,或者,也可以是第一编码器为硬件编码器、第二编码器为软件编码器,此处不作限制。
其中,硬件编码器可以是硬件芯片中的硬件编码器模块,比如高通的芯片、苹果的A10芯片中的H.264硬件编码器模块。软件编码器,可以是应用本身包含的一段代码等。
具体的,调整第一编码器时,可以调整第一编码器的码率、分辨率和/或帧率等。配置第二编码器时,可以配置第二编码器的码率、分辨率和/或帧率等。
可以理解的是，不同的编码器所适用的编码参数范围可能相同，也可能存在交叠，因此，应用确定第二编码参数后，可能既可以根据第二编码参数对第一编码器进行调整，以利用调整后的第一编码器对下一处理周期内的帧数据进行编码处理，也可以根据第二编码参数配置第二编码器，以利用第二编码器对下一处理周期内的帧数据进行编码处理。
在本申请实例中,应用在确定第二编码参数与第一编码参数不同时,可以根据当前处理周期内终端的负载信息,确定调整第一编码器还是配置第二编码器,以对下一处理周期内的帧数据进行编码处理。
具体的,若根据当前处理周期内终端的负载信息,确定下一处理周期内使用的目标编码器的类型与当前处理周期内使用的第一编码器的类型相同,则可以根据第二编码参数调整第一编码器;若确定下一处理周期内使用的目标编码器的类型与当前处理周期内使用的第一编码器的类型不同,则可以根据第二编码参数配置第二编码器。
相应的,在步骤S103中根据第二编码参数配置第二编码器前,还可以包括:
根据当前处理周期内终端的负载信息,确定目标编码器的类型与当前处理周期内使用的第一编码器的类型不同。
举例来说，假设预先将平均网络丢包率AvgLoss、平均中央处理器占用率AvgCPU、平均编码效率AvgEfficiency（由平均峰值信噪比AvgPSNR、平均发送码率AvgBr计算得到）及平均网络带宽BW均依次分为小、一般、大三个等级，将终端剩余续航时间T依次分为短、一般、长三个等级，将编码参数中分辨率依次分为低、中、高三个等级。
则在具体使用时，若终端当前处理周期使用的是软件编码器，对应的编码参数为：分辨率档次为i(低)。当前处理周期结束后，确定当前处理周期的编码状态参数为：网络平均丢包率AvgLoss小，且网络带宽BW充足、达到i+1分辨率档次的最低带宽要求时，则可上调分辨率档次到i+1，即根据上述编码状态参数确定出的编码参数中的分辨率为：i+1档分辨率。进一步的，若当前处理周期内AvgCPU高，则可确定下一处理周期所用的编码器类型为硬件编码器，从而即可根据i+1档次的分辨率配置硬件编码器，进而尝试切换到硬件编码器进行下一处理周期的编码。
而若当前处理周期内AvgCPU低,由于软件编码器的平均编码效率高,则可以根据i+1档次的分辨率,调整软件编码器后,继续使用软件编码器进行下一处理周期的编码。
具体实现时,应用根据第二编码参数调整第一编码器或配置第二编码器时,可以根据编码器的类型,采用不同的方式,对编码进行调整或配置。
具体的,若编码器为硬件编码器,则应用可以通过硬件编码器的接口,将第二编码参数发送给硬件编码器,从而使硬件编码器根据第二编码参数,对其编码电路进行配置。若编码器为软件编码器(其本身自带的代码),则应用可以在调用软件编码器时,根据第二编码参数确定调用函数的参数,从而实现对软件编码器的调整或配置。
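下面用一段示意代码说明"硬件编码器通过接口下发参数、软件编码器通过调用函数的参数进行配置"的分发方式。其中的类名与方法名均为假设，并非某一真实编码器的接口：

```python
class HardwareEncoder:
    """硬件编码器的示意封装：通过其接口把编码参数下发给编码电路。"""
    def set_params(self, params):
        print("通过硬件编码器接口下发参数:", params)

class SoftwareEncoder:
    """软件编码器的示意封装：调用自带代码时以函数参数携带编码参数。"""
    def encode(self, frame, params):
        print("以参数", params, "调用软件编码函数处理帧:", frame)
        return b"bitstream"

def apply_second_params(encoder, params, frame=None):
    """根据编码器类型，以不同方式应用第二编码参数（示意逻辑）。"""
    if isinstance(encoder, HardwareEncoder):
        encoder.set_params(params)           # 硬件：先配置编码电路，再送帧
        return None
    return encoder.encode(frame, params)     # 软件：调用时带上编码参数
```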
需要说明的是,在本申请实例中,使用硬件编码器对下一处理周期内的帧数据进行编码处理时,应用可以通过硬件编码器的接口,将下一处理周期内的帧数据发送给硬件编码器,然后由硬件编码器对下一处理周期内的帧数据进行编码,硬件编码器编码完成后,再通过接口将编码后的码流发送给应用,从而应用可以将接收的码流发送给其它终端的应用。
使用软件编码器对下一处理周期内的帧数据进行编码处理时,应用可以直接调用其本身自带的代码(即软件编码器),对下一处理周期内的帧数据进行编码,在编码完成后,即可将编码后的码流发送给其它终端的应用。
本申请实例提供的视频编码处理方法，应用在获取当前处理周期内终端的编码状态参数、负载信息及当前处理周期内使用的第一编码器的第一编码参数后，可以根据终端的编码状态参数及负载信息，确定第二编码参数，从而在确定第二编码参数与第一编码参数不同时，根据第二编码参数调整第一编码器，或者根据第二编码参数配置第二编码器，以对下一处理周期内的帧数据进行编码处理。通过根据网络状态及负载信息，调整编码参数及编码器，从而实现了在网络丢包小且带宽充足时，使用硬件编码器进行高分辨率视频的编码，提高视频分辨率；网络丢包较大时，使用软件编码方式进行压缩编码，减少了视频卡顿，提高了视频编码的灵活性，改善了用户体验。
下面结合图2,对本申请提供的视频编码处理方法进行进一步说明。
图2是本申请另一个示例性实例示出的一种视频编码处理方法的流程示意图。
如图2所示,该视频编码处理方法,包括以下步骤:
步骤201,应用获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数。
步骤202,应用根据终端的编码状态参数及负载信息,确定第二编码参数。
其中,第二编码参数,包括编码器的码率、分辨率和/或帧率等。
具体的，步骤201-步骤202的具体实现过程及原理，可以参照上述实例中步骤S101-步骤S102的具体描述，此处不再赘述。
步骤203,应用判断第二编码参数与第一编码参数是否相同,若是,则执行步骤204,否则,执行步骤205。
步骤204,应用继续以第一编码器对下一处理周期内的帧数据进行编码处理。
具体的,应用在根据终端的编码状态参数及负载信息,确定第二编码参数后,即可将第二编码参数与当前处理周期内使用的第一编码器的第一编码参数进行对比。若第二编码参数与第一编码参数相同,则可以不对第一编码器进行调整,继续以第一编码器对下一处理周期内的帧数据进行编码处理。
具体实现时,若第一编码器为硬件编码器,则应用可以继续将下一处理周期内的帧数据通过第一编码器的接口,发送给第一编码器,由第一编码器对下一处理周期内的帧数据进行编码。应用在第一编码器编码完成后,再通过接口接收第一编码器编码后的码流,从而应用可以将接收的码流发送给其它终端的应用。
若第一编码器为软件编码器,则应用可以继续利用其本身自带的代码(即第一编码器),对下一处理周期内的帧数据进行编码,在编码完成后,即可将编码后的码流发送给其它终端的应用。
步骤205,应用根据当前处理周期内终端的负载信息,确定目标编码器的类型与当前处理周期内使用的第一编码器的类型是否相同,若是,则执行步骤206,否则,执行步骤208。
步骤206,应用调整第一编码器的码率、分辨率和/或帧率。
步骤207,应用利用调整后的第一编码器对下一处理周期内的帧数据进行编码处理。
可以理解的是，不同的编码器所适用的编码参数范围可能相同，也可能存在交叠，因此，应用确定第二编码参数后，可能既可以根据第二编码参数对第一编码器进行调整，以利用调整后的第一编码器对下一处理周期内的帧数据进行编码处理，也可以根据第二编码参数配置第二编码器，以利用第二编码器对下一处理周期内的帧数据进行编码处理。
在本申请实例中,应用在确定第二编码参数与第一编码参数不同时,可以根据当前处理周期内终端的负载信息,确定目标编码器的类型与当前处理周期内使用的第一编码器的类型是否相同,从而确定调整第一编码器还是配置第二编码器,以对下一处理周期内的帧数据进行编码处理。
具体的,若应用根据当前处理周期内终端的负载信息,确定目标编码器的类型与当前处理周期内使用的第一编码器的类型相同,则可以对第一编码器的码率、分辨率和/或帧率进行调整,然后利用调整后的第一编码器对下一处理周期内的帧数据进行编码处理。
具体实现时,若第一编码器为硬件编码器,则应用可以调整第一编码器的码率、分辨率和/或帧率,然后再将下一处理周期内的帧数据通过第一编码器的接口,发送给第一编码器,并在第一编码器编码完成后,再通过接口接收第一编码器编码后的码流,从而应用可以将接收的码流发送给其它终端的应用。
若第一编码器为软件编码器,则应用可以根据第二编码参数,调用第一编码器(即调用其本身自带的代码),对下一处理周期内的帧数据进行编码,在编码完成后,即可将编码后的码流发送给其它终端的应用。
步骤208,应用配置第二编码器的码率、分辨率和/或帧率。
其中,第二编码器与第一编码器分属不同类型编码器。
步骤209,应用利用第二编码器对下一处理周期内的帧数据进行编码处理。
需要说明的是,上述实例的具体实现过程及原理,也适用于本申请实例,此处不再赘述。
具体的,若应用根据当前处理周期内终端的负载信息,确定目标编码器的类型与当前处理周期内使用的第一编码器的类型不同,则可以配置第二编码器的码率、分辨率和/或帧率,然后利用配置的第二编码器对下一处理周期内的帧数据进行编码处理。
具体的,若第二编码器为硬件编码器,则应用可以将下一处理周期内的帧数据通过第二编码器的接口,发送给第二编码器,并在第二编码器编码完成后,再通过接口接收第二编码器编码后的码流,从而应用可以将接收的码流发送给其它终端的应用。
若第二编码器为软件编码器,则应用可以根据第二编码参数,调用其本身自带的代码(即第二编码器),对下一处理周期内的帧数据进行编码,在编码完成后,即可将编码后的码流发送给其它终端的应用。
步骤210,应用将下一处理周期内的第一个帧数据确定为帧内预测帧。
具体的,若利用配置的第二编码器对下一处理周期内的帧数据进行编码处理,由于编码器发生改变,则应用需要将下一处理周期内的第一帧数据确定为帧内预测帧,即I帧,从而使下一处理周期内的其它帧数据可以利用第一帧数据进行视频编码。
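下述示意代码展示了编码器发生切换后，将下一处理周期的第一个帧数据作为帧内预测帧（I帧）编码的处理方式，其中的桩编码器与接口仅为说明目的而假设：

```python
class StubEncoder:
    """仅用于演示的桩编码器：force_idr为真时输出I帧，否则输出P帧。"""
    def encode(self, frame, force_idr=False):
        return ("I" if force_idr else "P", frame)

def encode_next_period(frames, encoder, encoder_switched):
    """编码器切换时，把下一处理周期的第一个帧数据编码为I帧（示意逻辑）。"""
    output = []
    for i, frame in enumerate(frames):
        output.append(encoder.encode(frame, force_idr=(encoder_switched and i == 0)))
    return output

if __name__ == "__main__":
    print(encode_next_period(["f0", "f1", "f2"], StubEncoder(), encoder_switched=True))
    # -> [('I', 'f0'), ('P', 'f1'), ('P', 'f2')]
```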
本申请实例提供的视频编码处理方法，应用在获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数后，可以根据终端的编码状态参数及负载信息，确定第二编码参数，若第二编码参数与第一编码参数相同，则继续以第一编码器对下一处理周期内的帧数据进行编码处理，若第二编码参数与第一编码参数不相同，且确定调整第一编码器，则调整第一编码器的码率、分辨率和/或帧率，并利用调整后的第一编码器对下一处理周期内的帧数据进行编码处理，若第二编码参数与第一编码参数不相同，且确定配置第二编码器，则配置第二编码器的码率、分辨率和/或帧率，并利用第二编码器对下一处理周期内的帧数据进行编码处理，最后将下一处理周期内的第一个帧数据确定为帧内预测帧。由此，通过根据网络状态及负载信息，调整编码参数及编码器，从而实现了在网络丢包小且带宽充足时，使用硬件编码器进行高分辨率视频的编码，提高视频分辨率；网络丢包较大时，使用软件编码方式进行压缩编码，减少了视频卡顿，提高了视频编码的灵活性，改善了用户体验。
通过上述分析可知,应用在获取当前处理周期内终端的编码状态参数、负载信息及当前处理周期内使用的第一编码器的第一编码参数后,可以根据终端的编码状态参数及负载信息,确定第二编码参数,从而在确定第二编码参数与第一编码参数不同时,根据第二编码参数调整第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理。在实际运用中,利用配置的第二编码器对下一处理周期内的帧数据进行编码处理时,编码效果可能不如之前处理周期内利用第一编码器的编码效果好,下面结合图3,对上述情况进行详细说明。
图3是根据另一示例性实例示出的一种视频编码处理方法的流程示意图。
如图3所示,该视频编码处理方法,可以包括以下步骤:
步骤301,应用获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数。
步骤302,应用根据终端的编码状态参数及负载信息,确定第二编码参数。
步骤303,应用判断第二编码参数与第一编码参数是否相同,若是,则执行步骤304,否则,执行步骤305。
步骤304,应用继续以第一编码器对下一处理周期内的帧数据进行编码处理。
其中,步骤301-步骤304的具体实现过程及原理,可以参照上述实例中的具体描述,此处不再赘述。
步骤305,应用确定当前处理周期内第一编码器的第一编码效率。
具体的,第一编码效率,可以根据当前处理周期内第一编码器的平均峰值信噪比及平均码率确定。通常平均PSNR固定时,平均码率越大,编码效率越低。
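作为一种示意性的度量方式（并非本申请实例限定的计算方法），可以把编码效率近似表示为平均PSNR与平均码率的比值，以体现"平均PSNR固定时，平均码率越大，编码效率越低"的关系：

```python
def encoding_efficiency(avg_psnr_db, avg_bitrate_kbps):
    """简化的编码效率度量：平均PSNR固定时，平均码率越大，效率越低（仅为示意）。"""
    return avg_psnr_db / avg_bitrate_kbps

if __name__ == "__main__":
    eff_first = encoding_efficiency(38.0, 800)    # 第一编码器（当前处理周期）
    eff_second = encoding_efficiency(38.0, 1100)  # 第二编码器（下一处理周期），码率更大、效率更低
    print(eff_first, eff_second)
```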
步骤306,应用根据第二编码参数配置第二编码器。
其中,第二编码器与第一编码器分属不同类型编码器。
步骤307,应用利用配置的第二编码器,对下一处理周期内的帧数据进行编码处理。
其中,上述步骤306-307的具体实现过程及原理,可以参照上述实例的详细描述,此处不再赘述。
需要说明的是,终端的编码状态参数及负载信息与编码参数的对应关系,还可以根据以下原则设置。
原则三
由于软件编码器和硬件编码器的编码效率可能不同,软件编码器的编码效率的一致性比较好,而硬件编码器的编码效率的离散性较大,因此在当前使用的硬件编码器的编码效率过低时,可以优先使用软件编码器进行视频编码。从而在当前使用的硬件编码器的编码效率过低时,可以根据软件编码器对应的编码参数,设置与终端的编码状态参数及负载信息对应的编码参数。
比如,假设软件编码器对应的编码参数为低分辨率等级、低码率等级,则在当前使用的硬件编码器的编码效率过低时,可以设置对应的编码参数为软件编码器对应的编码参数即低分辨率等级、低码率等级,从而可以切换为利用软件编码器进行视频编码。
步骤308,应用确定下一处理周期内第二编码器的第二编码效率。
具体的,第二编码效率,可以根据下一处理周期内第二编码器的平均PSNR及平均码率确定。通常平均PSNR固定时,平均码率越大,编码效率越低。
步骤309,应用在确定第二编码效率小于第一编码效率,且差值大于阈值时,根据下一处理周期内终端的编码参数及负载信息,确定第三编码参数。
步骤310,应用根据第三编码参数配置第一编码器,以对与下一处理周期相邻的后一处理周期内的帧画面进行编码处理。
其中,阈值,可以根据需要设置,比如,可以设置为第一编码效率的20%、30%等等。
具体的,若第二编码效率小于第一编码效率,且差值大于阈值,则表示利用配置的第二编码器对下一处理周期内的帧数据进行编码处理时,编码效果不如之前处理周期内利用第一编码器的编码效果好。因此,在本申请实例中,应用可以切换回原来的第一编码器,以对与下一处理周期相邻的后一处理周期内的帧数据进行编码处理。
在切换回原来的第一编码器时，由于下一处理周期内终端的编码状态参数及负载信息，相比当前处理周期内终端的编码状态参数及负载信息，可能已经发生了变化，因此，利用第一编码器对与下一处理周期相邻的后一处理周期内的帧数据进行编码处理时，还需要对第一编码器的编码参数进行重新配置。
具体的,可以根据下一处理周期内终端的编码状态及负载信息,确定第三编码参数,从而根据第三编码参数配置第一编码器,以对与下一处理周期相邻的后一处理周期内的帧画面进行编码处理。具体的确定第三编码参数的方法,与确定第一编码参数或第二编码参数的方法相同,此处不再赘述。
另外,若第二编码效率大于或等于第一编码效率,或第二编码效率小于第一编码效率,且差值小于或等于阈值,则表示利用配置的第二编码器对下一处理周期内的帧数据进行编码处理时,编码效果比之前处理周期内利用第一编码器的编码效果好。则在本申请实例中,应用可以继续根据下一处理周期内的终端的编码状态参数、负载信息,确定与下一处理周期相邻的后一处理周期内的第三编码参数,并根据第三编码参数配置第二编码器,以对与下一处理周期相邻的后一处理周期内的帧画面进行编码处理。
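上述两种分支的判断可以用如下示意代码表示，其中阈值按第一编码效率的一定比例（如前述的20%、30%）给出，具体取值为假设：

```python
def decide_encoder_for_following_period(eff_first, eff_second, threshold_ratio=0.2):
    """第二编码效率低于第一编码效率且差值超过阈值时，切回第一编码器（示意逻辑）。"""
    if eff_second < eff_first and (eff_first - eff_second) > eff_first * threshold_ratio:
        return "first"     # 按第三编码参数重新配置第一编码器
    return "second"        # 否则继续以第二编码器进行后一处理周期的编码
```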
进一步的,在第二编码效率小于第一编码效率,且差值大于阈值时,应用根据第三编码参数配置第一编码器后,若终端的编码状态参数及负载信息不变,则相比以第二编码器对与下一处理周期相邻的后一处理周期内的帧数据进行编码处理,继续以原来的第一编码器对与下一处理周期相邻的后一处理周期内的帧数据进行编码处理会有更好的编码效果。因此,在本申请实例中,若终端的编码状态参数及负载信息不变,则应用可以优先以第一编码器对与下一处理周期相邻的后一处理周期内的帧数据进行编码处理。
即,在步骤310中根据第三编码参数配置第一编码器之后,还可以包括:
应用在终端的编码状态参数及负载信息不变时,确定第一编码器的优先级高于第二编码器的优先级。
需要说明的是,本申请实例中,终端的编码状态参数及负载信息不变,是一种理想的情况。在实际运用中,在终端的编码状态参数及负载信息的变化在预设范围内时,均可以确定第一编码器的优先级高于第二编码器的优先级。
本申请实例提供的视频编码处理方法，应用在获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数后，可以根据终端的编码状态参数及负载信息，确定第二编码参数，并判断第二编码参数与第一编码参数是否相同，若是，则继续以第一编码器对下一处理周期内的帧数据进行编码处理，否则，确定当前处理周期内第一编码器的第一编码效率，并在根据第二编码参数配置第二编码器，及利用第二编码器，对下一处理周期内的帧数据进行编码处理后，确定下一处理周期内第二编码器的第二编码效率，从而在确定第二编码效率小于第一编码效率，且差值大于阈值时，根据下一处理周期内终端的编码参数及负载信息，确定第三编码参数，并根据第三编码参数配置第一编码器，以对与下一处理周期相邻的后一处理周期内的帧画面进行编码处理。由此，通过根据网络状态及负载信息，调整编码参数及编码器，从而实现了在网络丢包小且带宽充足时，使用硬件编码器进行高分辨率视频的编码，提高视频分辨率；网络丢包较大时，使用软件编码方式进行压缩编码，减少了视频卡顿，提高了视频编码的灵活性，改善了用户体验。且通过根据编码参数切换前后的编码效率，对编码参数或编码器的切换过程进行校验，进一步提高了视频编码的灵活性。
通过上述分析可知,应用在获取当前处理周期内终端的编码状态参数、负载信息及当前处理周期内使用的第一编码器的第一编码参数后,可以根据终端的编码状态参数及负载信息,确定第二编码参数,从而在确定第二编码参数与第一编码参数不同时,根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理。下面结合图4,对第二编码器的获取过程进行具体说明。
如图4所示,该视频编码处理方法,可以包括以下步骤:
步骤401,应用获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数。
步骤402,应用根据终端的编码状态参数及负载信息,确定第二编码参数。
步骤403,应用判断第二编码参数与第一编码参数是否相同,若是,则执行步骤404,否则,执行步骤405。
步骤404,应用继续以第一编码器对下一处理周期内的帧数据进行编码处理。
其中,步骤401-步骤404的具体实现过程及原理,可以参照上述实例中的具体描述,此处不再赘述。
步骤405,应用根据第二编码参数,从当前的可用编码器列表中选取第二编码器。
步骤406,应用配置第二编码器,以对下一处理周期内的帧数据进行编码处理。
其中,第二编码器与第一编码器分属不同类型编码器。
具体的,可以预先确定当前的可用编码器列表,从而应用在确定第二编码参数后,可以从当前的可用编码器列表中选取第二编码器,并根据第二编码参数,配置第二编码器,以对下一处理周期内的帧数据进行编码处理。
具体实现时,可以通过下面的方法,确定当前的可用编码器列表。即,在步骤405之前,还可以包括:
步骤407,应用根据应用的配置信息及终端的配置信息,确定初始编码器列表,其中初始编码器列表中包括硬件编码器及软件编码器。
可以理解的是,在终端中通常会配置有硬件编码器,而在应用中,通常会配置有软件编码器,因此,通过应用的配置信息及终端的配置信息,可以确定包括硬件编码器及软件编码器的初始编码器列表。
步骤408,应用初始化初始编码器列表中各编码器。
步骤409,应用根据各编码器的初始化结果,确定当前的可用编码器列表。
可以理解的是,由于初始编码器列表中的各编码器,可能存在因故障等而不可用的情况,因此,在本申请实例中,确定了初始编码器列表后,应用可以初始化初始编码器列表中各编码器,以检测终端中系统软硬件环境,对各编码器进行正确设置,并返回初始化结果,进而根据各编码器的初始化结果,确定当前的可用编码器列表。
具体的,应用确定了当前的可用编码器列表后,即可根据可用编码器列表中各编码器适用的编码参数范围及第二编码参数,从各编码器中选取编码参数范围中包括第二编码参数的编码器作为第二编码器,进而根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理。
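下面给出可用编码器列表的构建与第二编码器选取的一个简化示意，其中候选编码器的名称、类型及其支持的参数范围均为假设：

```python
# 候选编码器及其支持的最大分辨率，仅为示意
CANDIDATES = [
    {"name": "hw_h264", "type": "hardware", "max_res": (1920, 1080)},
    {"name": "sw_h264", "type": "software", "max_res": (1280, 720)},
]

def init_available_encoders(candidates):
    """逐一初始化候选编码器，初始化失败的不进入当前的可用编码器列表。"""
    available = []
    for enc in candidates:
        init_ok = True          # 实际实现中此处会检测系统软硬件环境并正确设置编码器
        if init_ok:
            available.append(enc)
    return available

def pick_second_encoder(available, target_type, resolution):
    """选取类型符合目标类型、且参数范围覆盖第二编码参数中分辨率的编码器。"""
    for enc in available:
        max_w, max_h = enc["max_res"]
        if enc["type"] == target_type and resolution[0] <= max_w and resolution[1] <= max_h:
            return enc
    return None

if __name__ == "__main__":
    encoders = init_available_encoders(CANDIDATES)
    print(pick_second_encoder(encoders, "hardware", (1280, 720)))
```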
进一步的,应用利用第二编码器对下一处理周期内的帧数据进行编码处理,得到编码后的码流后,应用可以将编码后的码流分成N个实时传输协议包(real-time transport protocol,RTP),并进行前向纠错(Forward Error/Erasure Correction,简称FEC)编码,生成M个冗余包。其中,M可以小于等于N。然后,应用即可给M+N个包加上包头标识,并根据包头标识,将下一处理周期内的帧数据对应的编码流进行打包发送。
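作为示意（并非实际的FEC实现），下面的代码把码流切分为N个数据包并生成M个冗余包，这里用简单的异或奇偶包代替真实的前向纠错编码，仅用于说明打包流程：

```python
from functools import reduce

def packetize_with_fec(bitstream: bytes, n: int, m: int):
    """把编码后的码流切成n个数据包并生成m个冗余包（m<=n）。
    真实FEC通常采用更复杂的编码，此处仅以异或奇偶包示意。"""
    size = -(-len(bitstream) // n)                        # 向上取整的分片长度
    data_pkts = [bitstream[i * size:(i + 1) * size].ljust(size, b"\x00")
                 for i in range(n)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_pkts)
    fec_pkts = [parity] * m                               # 示意：m个相同的奇偶包
    return data_pkts, fec_pkts

if __name__ == "__main__":
    data, fec = packetize_with_fec(b"encoded-bitstream-bytes", n=4, m=2)
    print(len(data), len(fec))
```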
即,在步骤406之后,还可以包括:
应用根据对下一处理周期内的帧数据进行编码的编码器类型,确定与下一处理周期内的帧数据对应的实时传输协议包的包头标识;
应用根据包头标识,将下一处理周期内的帧数据对应的编码流进行打包发送。
其中,包头标识,用来唯一标识实时传输协议包。
具体的,可以预先设置不同的编码器类型对应不同的自定义字段,从而在应用确定对下一处理周期内的帧数据进行编码的编码器后,可以根据确定的编码器的类型,确定自定义字段,并以自定义字段作为包头标识,将下一处理周期内的帧数据对应的编码流进行打包发送。
需要说明的是,在包头标识中,除了编码器类型对应的自定义字段外,还可以包括标志该包的序号信息,如0x1、0x2、0x3等等。
在一些实例中,所述序号信息与编码器类型对应的自定义字段之间的对应关系可以如表一所示。
序号 编码器类型
0x1 H.264软编码器
0x2 H.264硬编码器
0x3 H.265软编码器
0x4 H.265硬编码器
0x5 VP9软编码器
0x6 VP9硬编码器
... ...
表一
举例来说，假设预先设置编码器为基于H.264标准的软件编码器时，对应的自定义字段为“H.264软编码器”；编码器为基于H.264标准的硬件编码器时，对应的自定义字段为“H.264硬编码器”；编码器为基于H.265标准的软件编码器时，对应的自定义字段为“H.265软编码器”；编码器为基于H.265标准的硬件编码器时，对应的自定义字段为“H.265硬编码器”。则确定对下一处理周期内的帧数据进行编码的编码器类型为基于H.264标准的软件编码器，且下一处理周期内的帧数据对应的编码流为发送的第一个编码流时，可以确定包头标识为“0x1H.264软编码器”，从而可以根据“0x1H.264软编码器”将下一处理周期内的帧数据对应的编码流进行打包发送。
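按上述示例的拼接方式，包头标识可以由序号信息与编码器类型对应的自定义字段组合而成，下面的示意代码中字段映射与函数名均为假设：

```python
# 编码器类型与自定义字段的映射，仅为示意
ENCODER_FIELD = {
    ("H.264", "software"): "H.264软编码器",
    ("H.264", "hardware"): "H.264硬编码器",
    ("H.265", "software"): "H.265软编码器",
    ("H.265", "hardware"): "H.265硬编码器",
    ("VP9",   "software"): "VP9软编码器",
    ("VP9",   "hardware"): "VP9硬编码器",
}

def build_header_id(seq: int, codec: str, enc_type: str) -> str:
    """按"序号+编码器类型自定义字段"拼出包头标识，
    例如第一个编码流、H.264软件编码器 -> "0x1H.264软编码器"（示意）。"""
    return "0x%x" % seq + ENCODER_FIELD[(codec, enc_type)]

if __name__ == "__main__":
    print(build_header_id(1, "H.264", "software"))   # 0x1H.264软编码器
```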
需要说明的是,应用在获取到视频编码指令时,由于当前的可用编码器中可能包括多个编码器,因此,应用需要从各编码器中选取合适的编码器对初始帧数据进行编码。
具体的,由于硬件编码器的抗丢包能力差,因此,为了减小丢包率,应用可以根据当前的可用编码器列表中各编码器的性能,从各编码器中选择抗丢包能力强的软件编码器,对初始帧数据进行编码。
即,在本申请实例提供的视频编码处理方法中,还可以包括:
应用在获取到视频编码指令时,根据当前的可用编码器列表中各编码器的性能,从可用编码器列表中选择第三编码器对初始帧数据进行编码,其中第三编码器为软件编码器。
其中,各编码器的性能,可以包括各编码器的抗丢包能力强弱、编码效率好坏、是否占用CPU资源等等。
另外,由于以高分辨率等级对初始帧数据进行编码处理时,对CPU的占用较大,若应用在获取到视频编码指令时,直接以高分辨率等级对初始帧数据进行编码处理,可能会对终端的系统运行造成影响。因此为了减小终端中的应用开始进行视频编码时对CPU的占用,应用可以控制第三编码器以最低分辨率对初始帧数据进行编码。
即,利用第三编码器对初始帧数据进行编码可以包括:应用控制第三编码器以最低分辨率对初始帧数据进行编码。
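结合上述两点，获取到视频编码指令时的初始编码器选择可以用如下示意代码表示：优先选取抗丢包能力强的软件编码器，并以最低分辨率对初始帧数据进行编码。其中的数据结构与字段均为假设：

```python
def pick_initial_encoder(available, resolutions):
    """获取到视频编码指令时：优先选软件编码器，并以最低分辨率起始（示意逻辑）。"""
    software = [enc for enc in available if enc["type"] == "software"]
    encoder = software[0] if software else available[0]
    lowest_res = min(resolutions, key=lambda r: r[0] * r[1])
    return encoder, lowest_res

if __name__ == "__main__":
    encs = [{"name": "hw_h264", "type": "hardware"},
            {"name": "sw_h264", "type": "software"}]
    print(pick_initial_encoder(encs, [(1280, 720), (640, 360), (320, 180)]))
```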
需要说明的是,应用对各处理周期内的帧数据进行编码处理前,还可以对各帧数据进行视频格式转换、编码尺寸调整、视频增强处理、视频去噪处理等预处理。
本申请实例提供的视频编码处理方法，应用在获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数后，可以根据终端的编码状态参数及负载信息，确定第二编码参数，然后判断第二编码参数与第一编码参数是否相同，若是，则继续以第一编码器对下一处理周期内的帧数据进行编码处理，若否，则根据第二编码参数，从当前的可用编码器列表中选取第二编码器，并配置第二编码器，以对下一处理周期内的帧数据进行编码处理。通过根据网络状态及负载信息，调整编码参数及编码器，从而实现了在网络丢包小且带宽充足时，使用硬件编码器进行高分辨率视频的编码，提高视频分辨率；网络丢包较大时，使用软件编码方式进行压缩编码，减少了视频卡顿，提高了视频编码的灵活性，改善了用户体验。
下述为本申请装置实例,可以用于执行本申请方法实例。对于本申请装置实例中未披露的细节,请参照本申请方法实例。
图5是根据一个示例性实例示出的一种视频编码处理装置的结构图。
如图5所示,本申请实例提供的视频编码处理装置,可以包括:
获取模块51,用于获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数;
第一确定模块52,用于根据所述终端的编码状态参数及负载信息,确定第二编码参数;
第一处理模块53,用于在确定所述第二编码参数与所述第一编码参数不同时,根据所述第二编码参数调整所述第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理;
其中,所述第二编码器与所述第一编码器分属不同类型编码器。
具体的,本申请实例提供的视频编码处理装置,可以用来执行本申请实例提供的视频编码处理方法。其中,该装置可以被配置在任意具有视频编码功能的应用中,从而进行视频编码处理。其中,应用设置在终端中,终端的类型可以有很多,比如可以是手机、电脑等。
在本申请实例一种可能的实现形式中,所述装置,还包括:
第二确定模块,用于确定所述当前处理周期内所述第一编码器的第一编码效率;
第三确定模块,用于确定所述下一处理周期内所述第二编码器的第二编码效率;
第四确定模块,用于在确定所述第二编码效率小于所述第一编码效率,且差值大于阈值时,根据所述下一处理周期内终端的编码状态参数及负载信息,确定第三编码参数;
第二处理模块,用于根据所述第三编码参数配置所述第一编码器,以对与所述下一处理周期相邻的后一处理周期内的帧画面进行编码处理。
在本申请实例另一种可能的实现形式中,所述装置,还包括:
第五确定模块，用于在所述终端的编码状态参数及负载信息不变时，确定所述第一编码器的优先级高于所述第二编码器的优先级。
在本申请实例另一种可能的实现形式中,所述装置,还包括:
第六确定模块,用于根据所述当前处理周期内终端的负载信息,确定目标编码器的类型与所述当前处理周期内使用的第一编码器的类型不同。
在本申请实例另一种可能的实现形式中,所述装置,还包括:
第七确定模块,用于在所述下一处理周期内使用的编码器为第二编码器时,将所述下一处理周期内的第一个帧数据确定为帧内预测帧。
在本申请实例另一种可能的实现形式中,所述编码状态参数,包括:平均丢包率、平均峰值信噪比、平均发送码率及平均网络带宽;
所述负载信息,包括所述终端剩余续航时间及平均中央处理器占用率。
在本申请实例另一种可能的实现形式中,所述装置,还包括:
第一选取模块,用于根据所述第二编码参数,从当前的可用编码器列表中选取所述第二编码器。
在本申请实例另一种可能的实现形式中,所述装置,还包括:
第八确定模块,用于根据应用的配置信息及所述终端的配置信息,确定所述初始编码器列表,其中所述初始编码器列表中包括硬件编码器及软件编码器;
初始化模块,用于初始化所述初始编码器列表中各编码器;
第九确定模块,用于根据所述各编码器的初始化结果,确定所述当前的可用编码器列表。
在本申请实例另一种可能的实现形式中,所述装置,还包括:
第二选取模块,用于在获取到视频编码指令时,根据所述当前的可用编码器列表中各编码器的性能,从所述可用编码器列表中选择第三编码器对初始帧数据进行编码,其中第三编码器为软件编码器。
在本申请实例另一种可能的实现形式中,所述第二选取模块,具体用于:
控制所述第三编码器以最低分辨率对所述初始帧数据进行编码。
在本申请实例另一种可能的实现形式中,所述装置,还包括:
第十确定模块,用于根据对所述下一处理周期内的帧数据进行编码的编码器类型,确定与所述下一处理周期内的帧数据对应的实时传输协议包的包头标识;
发送模块，用于根据所述包头标识，将所述下一处理周期内的帧数据对应的编码流进行打包发送。
需要说明的是,前述对视频编码处理方法实例的解释说明也适用于该实例的视频编码处理装置,此处不再赘述。
本申请实例提供的视频编码处理装置,在获取当前处理周期内终端的编码状态参数、负载信息及当前处理周期内使用的第一编码器的第一编码参数后,可以根据终端的编码状态参数及负载信息,确定第二编码参数,从而在确定第二编码参数与第一编码参数不同时,根据第二编码参数调整第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理。通过根据网络状态及负载信息,调整编码参数及编码器,从而实现了在网络丢包小且带宽充足时,使用硬件编码器进行高分辨率视频的编码,提高视频分辨率;网络丢包较大时,使用软件编码方式进行压缩编码,减少了视频卡顿,提高了视频编码的灵活性,改善了用户体验。
在示例性实例中,还提供了一种具有视频编码功能的应用,包括如第二方面所述的视频编码处理装置。
本申请实例提供的具有视频编码功能的应用,在获取当前处理周期内终端的编码状态参数、负载信息及当前处理周期内使用的第一编码器的第一编码参数后,可以根据终端的编码状态参数及负载信息,确定第二编码参数,从而在确定第二编码参数与第一编码参数不同时,根据第二编码参数调整第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理。通过根据网络状态及负载信息,调整编码参数及编码器,从而实现了在网络丢包小且带宽充足时,使用硬件编码器进行高分辨率视频的编码,提高视频分辨率;网络丢包较大时,使用软件编码方式进行压缩编码,减少了视频卡顿,提高了视频编码的灵活性,改善了用户体验。
在示例性实例中,还提供了一种计算设备。
图6是根据一个示例性实例示出的计算设备(终端或其他进行计算处理的物理设备)的结构框图。
如图6所示,该计算设备包括:
存储器61、处理器62及存储在所述存储器61上并可在所述处理器62上运行的计算机程序，当所述处理器62执行所述程序时实现如前所述的视频编码处理方法。
具体的,本申请实例提供的终端,可以是手机、电脑等。
具体的,视频编码处理方法包括:
获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数;
根据所述终端的编码状态参数及负载信息,确定第二编码参数;
在确定所述第二编码参数与所述第一编码参数不同时,根据所述第二编码参数调整所述第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理;
其中,所述第二编码器与所述第一编码器分属不同类型编码器。
需要说明的是,前述对视频编码处理方法实例的解释说明也适用于该实例的终端,此处不再赘述。
本申请实例提供的终端,在获取当前处理周期内终端的编码状态参数、负载信息及当前处理周期内使用的第一编码器的第一编码参数后,可以根据终端的编码状态参数及负载信息,确定第二编码参数,从而在确定第二编码参数与第一编码参数不同时,根据第二编码参数调整第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理。通过根据网络状态及负载信息,调整编码参数及编码器,从而实现了在网络丢包小且带宽充足时,使用硬件编码器进行高分辨率视频的编码,提高视频分辨率;网络丢包较大时,使用软件编码方式进行压缩编码,减少了视频卡顿,提高了视频编码的灵活性,改善了用户体验。
在示例性实例中,还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如上述实例所述的视频编码处理方法。
具体的,视频编码处理方法包括:
获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数;
根据所述终端的编码状态参数及负载信息,确定第二编码参数;
在确定所述第二编码参数与所述第一编码参数不同时,根据所述第二编码参数调整所述第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理;
其中,所述第二编码器与所述第一编码器分属不同类型编码器。
需要说明的是,前述对视频编码处理方法实例的解释说明也适用于该实例的计算机可读存储介质,此处不再赘述。
本申请实例提供的计算机可读存储介质,可以设置在能够进行视频编码的设备中,通过执行其上存储的视频编码处理方法,可以实现通过根据网络状态及负载信息,调整编码参数及编码器,从而实现了在网络丢包小且带宽充足时,使用硬件编码器进行高分辨率视频的编码,提高视频分辨率;网络丢包较大时,使用软件编码方式进行压缩编码,减少了视频卡顿,提高了视频编码的灵活性,改善了用户体验。
在示例性实例中,还提供了一种计算机程序产品,当所述计算机程序产品中的指令处理器执行时,执行如上述实例所述的视频编码处理方法。
具体的,视频编码处理方法包括:
获取当前处理周期内终端的编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数;
根据所述终端的编码状态参数及负载信息,确定第二编码参数;
在确定所述第二编码参数与所述第一编码参数不同时,根据所述第二编码参数调整所述第一编码器,或者根据第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理;
其中,所述第二编码器与所述第一编码器分属不同类型编码器。
需要说明的是,前述对视频编码处理方法实例的解释说明也适用于该实例的计算机程序产品,此处不再赘述。
本申请实例提供的计算机程序产品,可写入能够进行视频编码的设备中,通过执行对应视频编码处理方法的程序,可以实现通过根据网络状态及负载信息,调整编码参数及编码器,从而实现了在网络丢包小且带宽充足时,使用硬件编码器进行高分辨率视频的编码,提高视频分辨率;网络丢包较大时,使用软件编码方式进行压缩编码,减少了视频卡顿,提高了视频编码的灵活性,改善了用户体验。
在本申请的描述中,需要理解的是,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请的描述中,“多个”的含义是两个或两个以上,除非另有明确具体的限定。
在本说明书的描述中,参考术语“一个实例”、“一些实例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实例或示例描述的具体特征或者特点包含于本申请的至少一个实例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实例或示例。而且,描述的具体特征或者特点可以在任一个或多个实例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实例或示例以及不同实例或示例的特征进行结合和组合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解，本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中，多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如，如果用硬件来实现，和在另一实施方式中一样，可用本领域公知的下列技术中的任一项或它们的组合来实现：具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路，具有合适的组合逻辑门电路的专用集成电路，可编程门阵列(PGA)，现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实例的步骤之一或其组合。
此外,在本申请各个实例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本申请的实例,可以理解的是,上述实例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实例进行变化、修改、替换和变型。

Claims (15)

  1. 一种视频编码处理方法,应用于计算设备,包括:
    获取当前处理周期内编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数;
    根据所述编码状态参数及负载信息,确定第二编码参数;
    在确定所述第二编码参数与所述第一编码参数不同时,根据所述第二编码参数调整所述第一编码器,或者根据所述第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理;
    其中,所述第二编码器与所述第一编码器分属不同类型编码器。
  2. 如权利要求1所述的方法,其中,所述根据所述第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理之前,还包括:
    确定所述当前处理周期内所述第一编码器的第一编码效率;
    所述对下一处理周期内的帧数据进行编码之后,还包括:
    确定所述下一处理周期内所述第二编码器的第二编码效率;
    在确定所述第二编码效率小于所述第一编码效率,且差值大于阈值时,根据所述下一处理周期内编码状态参数及负载信息,确定第三编码参数;
    根据所述第三编码参数配置所述第一编码器,以对与所述下一处理周期相邻的后一处理周期内的帧画面进行编码处理。
  3. 如权利要求2所述的方法,其中,所述根据所述第三编码参数配置所述第一编码器之后,还包括:
    在所述编码状态参数及负载信息不变时,确定所述第一编码器的优先级高于所述第二编码器的优先级。
  4. 如权利要求1所述的方法,其中,所述根据所述第二编码参数配置第二编码器前,还包括:
    根据所述当前处理周期内终端的负载信息,确定目标编码器的类型与所述当前处理周期内使用的第一编码器的类型不同。
  5. 如权利要求1所述的方法,其中,所述对下一处理周期内的帧数据进行编码处理之后,还包括:
    若所述下一处理周期内使用的编码器为第二编码器,则将所述下一处理周期内的第一个帧数据确定为帧内预测帧。
  6. 如权利要求1所述的方法,其中,所述编码状态参数,包括:平均丢包率、平均峰值信噪比、平均发送码率及平均网络带宽;
    所述负载信息,包括剩余续航时间及平均中央处理器占用率。
  7. 如权利要求1-5任一所述的方法,其中,所述配置第二编码器之前,还包括:
    根据所述第二编码参数,从当前的可用编码器列表中选取所述第二编码器。
  8. 如权利要求7所述的方法,其中,所述从当前的可用编码器列表中选取所述第二编码器之前,还包括:
    根据所述计算设备的配置信息,确定所述初始编码器列表,其中所述初始编码器列表中包括硬件编码器及软件编码器;
    初始化所述初始编码器列表中各编码器;
    根据所述各编码器的初始化结果,确定所述当前的可用编码器列表。
  9. 如权利要求7所述的方法,其中,还包括:
    在获取到视频编码指令时,根据所述当前的可用编码器列表中各编码器的性能,从所述可用编码器列表中选择第三编码器对初始帧数据进行编码,其中第三编码器为软件编码器。
  10. 如权利要求9所述的方法,其中,所述对初始帧数据进行编码,包括:
    控制所述第三编码器以最低分辨率对所述初始帧数据进行编码。
  11. 如权利要求7所述的方法,其中,所述对下一处理周期内的帧数据进行编码处理之后,还包括:
    根据对所述下一处理周期内的帧数据进行编码的编码器类型,确定与所述下一处理周期内的帧数据对应的实时传输协议包的包头标识;
    根据所述包头标识,将所述下一处理周期内的帧数据对应的编码流进行打包发送。
  12. 一种视频编码处理装置,包括:
    获取模块,用于获取当前处理周期内编码状态参数、负载信息、及当前处理周期内使用的第一编码器的第一编码参数;
    第一确定模块,用于根据所述编码状态参数及负载信息,确定第二编码参数;
    第一处理模块,用于在确定所述第二编码参数与所述第一编码参数不同时,根据所述第二编码参数调整所述第一编码器,或者根据所述第二编码参数配置第二编码器,以对下一处理周期内的帧数据进行编码处理;
    其中,所述第二编码器与所述第一编码器分属不同类型编码器。
  13. 一种具有视频编码功能的应用,包括如权利要求12所述的视频编码处理装置。
  14. 一种计算设备,包括:
    存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,当所述处理器执行所述程序时实现如权利要求1-11任一所述的视频编码处理方法。
  15. 一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如权利要求1-11任一所述的视频编码处理方法。
PCT/CN2018/110816 2017-12-19 2018-10-18 视频编码处理方法、装置及具有视频编码功能的应用 WO2019119950A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/670,842 US10931953B2 (en) 2017-12-19 2019-10-31 Video coding processing method, device and application with video coding function

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711371988.2 2017-12-19
CN201711371988.2A CN109936744B (zh) 2017-12-19 2017-12-19 视频编码处理方法、装置及具有视频编码功能的应用

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/670,842 Continuation US10931953B2 (en) 2017-12-19 2019-10-31 Video coding processing method, device and application with video coding function

Publications (1)

Publication Number Publication Date
WO2019119950A1 true WO2019119950A1 (zh) 2019-06-27

Family

ID=66983322

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/110816 WO2019119950A1 (zh) 2017-12-19 2018-10-18 视频编码处理方法、装置及具有视频编码功能的应用

Country Status (3)

Country Link
US (1) US10931953B2 (zh)
CN (1) CN109936744B (zh)
WO (1) WO2019119950A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110933513A (zh) * 2019-11-18 2020-03-27 维沃移动通信有限公司 一种音视频数据传输方法及装置
CN113242434B (zh) * 2021-05-31 2023-02-28 山东云海国创云计算装备产业创新中心有限公司 一种视频压缩方法及相关装置
CN113259673B (zh) * 2021-07-05 2021-10-15 腾讯科技(深圳)有限公司 伸缩性视频编码方法、装置、设备及存储介质
CN113259690A (zh) * 2021-07-05 2021-08-13 人民法院信息技术服务中心 一种跨网系的音视频实时在线协同系统及方法
CN114827662B (zh) * 2022-03-18 2024-06-25 百果园技术(新加坡)有限公司 视频分辨率自适应调节方法、装置、设备和存储介质
CN116055715B (zh) * 2022-05-30 2023-10-20 荣耀终端有限公司 编解码器的调度方法及电子设备
CN115103211B (zh) * 2022-07-27 2023-01-10 广州迈聆信息科技有限公司 数据传输方法、电子装置、设备及计算机可读存储介质
CN117412062A (zh) * 2023-09-28 2024-01-16 协创芯片(上海)有限公司 一种支持h265编码的多媒体芯片
CN117676249B (zh) * 2023-12-07 2024-06-21 书行科技(北京)有限公司 直播视频的处理方法、装置、电子设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130034151A1 (en) * 2011-08-01 2013-02-07 Apple Inc. Flexible codec switching
US20140161172A1 (en) * 2012-12-11 2014-06-12 Jason N. Wang Software hardware hybrid video encoder
CN104159113A (zh) * 2014-06-30 2014-11-19 北京奇艺世纪科技有限公司 安卓系统中视频编码方式的选择方法和装置
US20150172676A1 (en) * 2011-06-17 2015-06-18 Microsoft Technology Licensing, Llc Adaptive codec selection
CN106161991A (zh) * 2016-07-29 2016-11-23 青岛海信移动通信技术股份有限公司 一种摄像头视频处理方法及终端
CN106331717A (zh) * 2015-06-30 2017-01-11 成都鼎桥通信技术有限公司 视频码率自适应调整方法及发送端设备
CN106454413A (zh) * 2016-09-20 2017-02-22 北京小米移动软件有限公司 直播编码切换方法、装置及设备
CN106993190A (zh) * 2017-03-31 2017-07-28 武汉斗鱼网络科技有限公司 软硬件协同编码方法及系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5224484B2 (ja) * 2008-07-16 2013-07-03 トムソン ライセンシング 映像および音声データの符号化装置とその符号化方法、および、ビデオ編集システム
US20120183040A1 (en) * 2011-01-19 2012-07-19 Qualcomm Incorporated Dynamic Video Switching
US9392295B2 (en) * 2011-07-20 2016-07-12 Broadcom Corporation Adaptable media processing architectures
US10045089B2 (en) * 2011-08-02 2018-08-07 Apple Inc. Selection of encoder and decoder for a video communications session
US9467708B2 (en) * 2011-08-30 2016-10-11 Sonic Ip, Inc. Selection of resolutions for seamless resolution switching of multimedia content
EP2613552A3 (en) * 2011-11-17 2016-11-09 Axell Corporation Method for moving image reproduction processing and mobile information terminal using the method
US9179144B2 (en) * 2012-11-28 2015-11-03 Cisco Technology, Inc. Fast switching hybrid video decoder
GB2548789B (en) * 2016-02-15 2021-10-13 V Nova Int Ltd Dynamically adaptive bitrate streaming
CN107396123A (zh) * 2017-09-25 2017-11-24 南京荣膺软件科技有限公司 便携式智能动态软硬件切换转码系统

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150172676A1 (en) * 2011-06-17 2015-06-18 Microsoft Technology Licensing, Llc Adaptive codec selection
US20130034151A1 (en) * 2011-08-01 2013-02-07 Apple Inc. Flexible codec switching
US20140161172A1 (en) * 2012-12-11 2014-06-12 Jason N. Wang Software hardware hybrid video encoder
CN104159113A (zh) * 2014-06-30 2014-11-19 北京奇艺世纪科技有限公司 安卓系统中视频编码方式的选择方法和装置
CN106331717A (zh) * 2015-06-30 2017-01-11 成都鼎桥通信技术有限公司 视频码率自适应调整方法及发送端设备
CN106161991A (zh) * 2016-07-29 2016-11-23 青岛海信移动通信技术股份有限公司 一种摄像头视频处理方法及终端
CN106454413A (zh) * 2016-09-20 2017-02-22 北京小米移动软件有限公司 直播编码切换方法、装置及设备
CN106993190A (zh) * 2017-03-31 2017-07-28 武汉斗鱼网络科技有限公司 软硬件协同编码方法及系统

Also Published As

Publication number Publication date
US10931953B2 (en) 2021-02-23
CN109936744A (zh) 2019-06-25
US20200068201A1 (en) 2020-02-27
CN109936744B (zh) 2020-08-18

Similar Documents

Publication Publication Date Title
WO2019119950A1 (zh) 视频编码处理方法、装置及具有视频编码功能的应用
US11227612B2 (en) Audio frame loss and recovery with redundant frames
JP4660545B2 (ja) 配信されたソース符号化技術に基づいたサイドチャネルを利用して予測的なビデオコデックのロバスト性を高める方法、装置、及びシステム
US10045089B2 (en) Selection of encoder and decoder for a video communications session
CN108370580B (zh) 匹配用户装备和网络调度周期
CN110784718B (zh) 视频数据编码方法、装置、设备和存储介质
US8842159B2 (en) Encoding processing for conferencing systems
US11044278B2 (en) Transcoding capability configuration method and device and computer storage medium
WO2013000304A1 (zh) 环路滤波编解码方法及装置
KR20150131175A (ko) Http를 통한 동적 적응형 스트리밍에서 미디어 세그먼트들의 손실 존재시의 회복력
CN110572695A (zh) 媒体数据的编码、解码方法及电子设备
CN1643932A (zh) 用于数据流式传输系统的数据结构
KR20140056296A (ko) 코딩된 비트스트림들 간의 동적 스위칭 기법
JP2011029868A (ja) 端末装置、遠隔会議システム、端末装置の制御方法、端末装置の制御プログラム、及び端末装置の制御プログラムを記録したコンピュータ読み取り可能な記録媒体
CN111147892A (zh) 用于视频传输的方法和装置,存储介质和电子设备
WO2021057480A1 (zh) 视频编解码方法和相关装置
KR20140124415A (ko) 다층 레이트 제어 기법
JP2011192229A (ja) サーバ装置および情報処理方法
WO2021057478A1 (zh) 视频编解码方法和相关装置
EP3145187B1 (en) Method and apparatus for response of feedback information during video call
CN114079534B (zh) 编码、解码方法、装置、介质和电子设备
CN111245566B (zh) 不可靠网络的抗丢包方法、装置、存储介质及电子设备
CN106231618A (zh) 一种发送编解码重协商请求的方法及装置
KR20070075134A (ko) 대역폭에 적응적인 멀티미디어 데이터 처리방법 및 이를적용한 호스트장치
WO2018161790A1 (zh) 一种视频传输方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18890903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18890903

Country of ref document: EP

Kind code of ref document: A1