WO2023010992A1 - Video encoding method, apparatus, computer-readable medium, and electronic device - Google Patents

Video encoding method, apparatus, computer-readable medium, and electronic device

Info

Publication number
WO2023010992A1
Authority
WO
WIPO (PCT)
Prior art keywords
key frame
next key
historical
information
modulation
Prior art date
Application number
PCT/CN2022/097341
Other languages
English (en)
French (fr)
Inventor
毛峻岭
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023010992A1
Priority to US18/137,950 (published as US20230262232A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/115 Selection of the code volume for a coding unit prior to coding
    • H04N 19/134 Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/164 Feedback from the receiver or from the transmission channel
    • H04N 19/166 Feedback from the receiver or from the transmission channel concerning the amount of transmission errors, e.g. bit error rate [BER]
    • H04N 19/169 Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 The coding unit being an image region, e.g. an object
    • H04N 19/172 The region being a picture, frame or field
    • H04N 19/65 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N 21/2402 Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • H04N 21/2408 Monitoring of the upstream path of the transmission network, e.g. client requests
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices
    • H04N 21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N 21/2662 Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client
    • H04N 21/61 Network physical structure; Signal processing
    • H04N 21/6106 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N 21/6131 Downstream path involving transmission via a mobile phone network
    • H04N 21/6156 Network physical structure; Signal processing specially adapted to the upstream path of the transmission network
    • H04N 21/6181 Upstream path involving transmission via a mobile phone network

Definitions

  • the present application belongs to the technical field of communication, and specifically relates to a video encoding method, a video encoding device, a computer-readable medium, and electronic equipment.
  • Video coding aims to eliminate the redundant information present in video signals.
  • The amount of raw video data far exceeds what existing transmission bandwidth and storage resources can bear, so video must be encoded and compressed before it is suitable for transmission over the network; video coding technology has therefore become one of the hotspots of academic research and industrial application worldwide.
  • The ever-growing amount of video data places higher demands on video coding standards.
  • Current video encoding methods used over 5G private networks suffer from long transmission delay, and reducing this delay is an urgent problem to be solved.
  • The purpose of the present application is to provide a video encoding method, device, computer-readable medium, and electronic equipment that overcome, at least to a certain extent, technical problems in the related art such as excessive transmission delay.
  • a video coding method executed by an electronic device, including:
  • acquiring historical strength information within a historical time period, where the historical strength information is used to represent the wireless network signal strength of the video transmission corresponding to each historical moment in the historical time period; predicting the strength information of the next key frame according to the historical strength information, where the strength information of the next key frame is used to represent the wireless network signal strength for transmitting the next key frame; and determining the target data volume of the next key frame according to the strength information of the next key frame, and performing intra-coding on the next key frame according to the target data volume.
  • a video encoding device including:
  • An acquisition module configured to acquire historical strength information within a historical time period, where the historical strength information is used to represent the wireless network signal strength of the video transmission corresponding to each historical moment in the historical time period;
  • a prediction module configured to predict the strength information of the next key frame according to the historical strength information, and the strength information of the next key frame is used to indicate the wireless network signal strength for transmitting the next key frame;
  • the determining module is configured to determine the target data volume of the next key frame according to the intensity information of the next key frame, and perform intra-coding on the next key frame according to the target data volume.
  • a computer-readable medium on which a computer program is stored, and when the computer program is executed by a processor, the video encoding method in the above technical solution is implemented.
  • an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the executable instructions to perform the video encoding method in the above technical solution.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the video encoding method in the above technical solution.
  • the intensity information of the next key frame is predicted through historical intensity information, and the target data amount of the next key frame is determined according to the predicted intensity information of the next key frame.
  • the target data amount is determined in advance, and the next key frame is then encoded according to this target data amount, so that transmission of the key frame can be completed within one uplink time slot instead of spanning additional frame cycles, thereby reducing the transmission delay.
  • Fig. 1 schematically shows an exemplary 5G network data transmission process.
  • Fig. 2 schematically shows a block diagram of an exemplary system architecture applying the technical solution of the present application.
  • FIG. 3 schematically shows a flow of steps of a video encoding method provided by an embodiment of the present application.
  • Fig. 4 schematically shows a flow of steps for determining the target data volume of the next key frame according to the intensity information of the next key frame in an embodiment of the present application.
  • Fig. 5 schematically shows a flow of steps for determining a modulation and coding strategy for a next key frame according to intensity information of the next key frame in an embodiment of the present application.
  • FIG. 6 schematically shows a flow of steps for obtaining a modulation and coding strategy in a mapping relationship table between modulation and coding strategies and intensity information in an embodiment of the present application.
  • FIG. 7 schematically shows a process of comparing the calculated data volume with the actual data volume to determine the target data volume of the next key frame in an embodiment of the present application.
  • FIG. 8 schematically shows a flow of steps for adjusting the calculated data volume in an embodiment of the present application.
  • Fig. 9 schematically shows a flow of steps for obtaining the load redundancy of the current network in an embodiment of the present application.
  • Fig. 10 schematically shows a flow of steps for determining the load redundancy of the current network according to delay information in an embodiment of the present application.
  • FIG. 11 schematically shows a structural block diagram of a video coding apparatus provided by an embodiment of the present application.
  • Fig. 12 schematically shows a structural block diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present application.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments may, however, be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this application will be thorough and complete, and will fully convey the concepts of example embodiments to those skilled in the art.
  • Uplink and downlink wireless transmission resources are configured on different frames: only uplink data transmission (from the terminal to the base station) is allowed on an uplink frame, and only downlink data transmission (from the base station to the terminal) is allowed on a downlink frame.
  • FIG. 1 schematically shows an exemplary 5G network data transmission process.
  • the frame configuration of the common 5G network is 3D1U, that is, a time period of 5ms, with 3 downlink frames, 1 uplink frame (U frame), and 1 special subframe, and the time length of each frame is 1ms.
  • the amount of uplink data that can be carried by one U frame is specifically related to the signal strength from the terminal to the base station and the scheduling of the base station.
  • If the I-frame data of the terminal's uplink video stream cannot be completely transmitted in the current U frame, it is scheduled onto the next U frame, or onto subsequent U frames, until all of the data has been transmitted.
  • the existing 5G network has technical problems of transmission delay during video transmission.
  • the commonly used method is to use adaptive coding technology based on network perception.
  • the existing adaptive coding mainly includes the following two methods.
  • The first is a direct feedback adjustment method, which adjusts the coding parameters directly, mainly based on the delay and packet loss fed back by the receiving end.
  • The second is prediction and adjustment based on a router congestion model: based on the delay and delay jitter fed back by the receiving end, Wiener filtering or time-series prediction is used to predict the network delay and packet loss rate, and adaptive coding adjustment is performed.
  • the prediction model is mainly for public network scenarios, considering that the network delay is mainly caused by router forwarding. When the router is congested, there will be a large delay jitter, and even packet loss may occur.
  • The adjustment method based directly on feedback from the receiving end lags in adjusting the encoder, and the prediction-adjustment method based on the router congestion model cannot adapt to the aforementioned 5G private network scenarios, in which the 5G air-interface delay is the dominant factor.
  • This application therefore proposes an encoding method designed around the characteristics of the 5G air interface to alleviate the delay problem in 5G private network scenarios.
  • FIG. 2 schematically shows a block diagram of an exemplary system architecture applying the technical solution of the present application.
  • the system architecture includes a video sending terminal 210 and a network 220 , and the video sending terminal 210 and the network 220 are connected in communication.
  • The video sending terminal 210 includes an encoding end 211 and a 5G module 212; the video stream is encoded by the encoding end 211, and the encoded video stream is then sent to the 5G module 212.
  • In order to reduce the transmission delay, when encoding at the encoding end 211 it is necessary to first determine the target data volume of the key frames to be transmitted. The target data volume of the next key frame is determined through the following video encoding method.
  • A video key frame is usually an I frame, that is, a frame encoded with intra-frame coding, which can be decoded independently without reference to other frames; its size is therefore usually larger than that of a frame encoded with inter-frame coding. In this application, an I frame is used to refer to a video key frame.
  • FIG. 3 schematically shows a flow of steps of a video encoding method provided by an embodiment of the present application, which may be executed by the electronic device shown in FIG. 12 .
  • the present application discloses a video encoding method which mainly includes the following steps S301 to S303.
  • step S301 the terminal obtains historical strength information in a historical time period, and the historical strength information is used to indicate the wireless network signal strength of video transmission corresponding to a historical moment in the historical time period.
  • When the terminal transmits the uplink video stream, it starts to execute the video encoding method of the present application at a time T1 before the arrival of the next intra-coded key frame (I frame).
  • Specifically, the terminal obtains the historical strength information in the historical time period; Reference Signal Received Power (RSRP) is used here to represent the network signal strength, and the terminal first obtains the time series of RSRP values received within the most recent time window T2.
  • Step S302 predict the strength information of the next key frame according to the historical strength information, and the strength information of the next key frame is used to indicate the wireless network signal strength for transmitting the next key frame.
  • Predicting from the whole time series in this way keeps the prediction relatively stable, whereas predicting from only a single value would cause jitter. Specifically, when predicting the RSRP of the next key frame, linear regression, zero-order hold, XGBoost, or a neural network model can be used.
  • When linear regression is used to predict the RSRP of the next key frame, the procedure is: first obtain the RSRP data set corresponding to the historical frames; generate clusters from this data set; calculate the linear regression coefficients of the generated clusters; obtain the linear regression function of each cluster from the clusters and their regression coefficients; compute the linear regression of the K-time clusters from these functions; and finally derive the RSRP of the next key frame from the K-time clustering regression result.
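  • As an illustration of the simplest of these predictors, the following sketch fits a least-squares line to the RSRP samples of the most recent window and extrapolates one step ahead. The window contents, sampling interval, and function name are illustrative assumptions rather than part of this application.

```python
import numpy as np

def predict_next_rsrp(rsrp_history, steps_ahead=1):
    """Extrapolate the RSRP (dBm) expected at the next key frame by
    fitting a straight line to the samples of the last time window T2.

    A minimal linear-regression sketch; the application also allows
    zero-order hold, XGBoost, or a neural network model instead.
    """
    t = np.arange(len(rsrp_history), dtype=float)
    slope, intercept = np.polyfit(t, rsrp_history, deg=1)  # least-squares fit
    return slope * (t[-1] + steps_ahead) + intercept

# Example: RSRP samples collected during the most recent window T2.
history = [-92.0, -91.5, -91.8, -90.9, -90.4, -90.1]
print(round(predict_next_rsrp(history), 2))
```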
  • When zero-order hold is used, a continuous model is constructed from the RSRP values of the historical frames and discretized with the step-response-invariant method to obtain a zero-order-hold discretized model, which is then refined by introducing the computation delay. Based on this zero-order-hold discretized model, a computation-delay term and a disturbance term are introduced to obtain a delay model and a disturbance model; based on the disturbance model, an extended state observer is designed with the state-space method; a zero-order-hold discretized state equation that accounts for the disturbance is established, and a disturbance state observer based on it estimates the disturbance. Finally, taking system robustness and dynamic performance into account, direct pole-zero placement is used to select appropriate parameters, from which the RSRP of the next key frame is obtained.
  • the RSRP of the next key frame is predicted using the XGBOOST algorithm
  • the RSRP values corresponding to the historical frames are obtained and used as historical features; these historical features, together with other features that affect the RSRP data, form a training data set. XGBoost is used to train on this data set to obtain a prediction model, and the prediction model predicts the RSRP at the prediction time, giving the RSRP prediction value of the next key frame.
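  • A hedged sketch of the XGBoost variant follows: historical RSRP values are turned into lag features and a gradient-boosted regressor is fitted to them. The lag length, hyperparameters, and use of the xgboost scikit-learn wrapper are assumptions made only for illustration.

```python
import numpy as np
from xgboost import XGBRegressor

def train_rsrp_model(rsrp_series, n_lags=4):
    """Fit a regressor that maps the last n_lags RSRP samples to the next
    sample; pure lag features are an illustrative simplification of the
    feature set described above."""
    X = np.array([rsrp_series[i:i + n_lags]
                  for i in range(len(rsrp_series) - n_lags)])
    y = np.array(rsrp_series[n_lags:])
    model = XGBRegressor(n_estimators=50, max_depth=3, learning_rate=0.1)
    model.fit(X, y)
    return model

def predict_next_rsrp(model, rsrp_series, n_lags=4):
    """Predict the RSRP of the next key frame from the latest n_lags samples."""
    latest = np.array(rsrp_series[-n_lags:]).reshape(1, -1)
    return float(model.predict(latest)[0])

history = [-93.0, -92.4, -92.8, -91.9, -91.5, -91.0, -90.6, -90.9, -90.2]
model = train_rsrp_model(history)
print(round(predict_next_rsrp(model, history), 2))
```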
  • the RSRP of the next key frame is predicted using a neural network model
  • consecutive video frames are selected as training samples and their inter-frame differences are extracted; the inter-frame differences are used as the input of the encoder in a generator model, the neural network weights of the encoder and decoder are obtained by training against a loss function, and the predicted frame that minimizes the loss function value is solved for, from which the RSRP prediction value of the next key frame is obtained.
  • Step S303 according to the intensity information of the next key frame, determine the target data amount of the next key frame, and perform intra-frame encoding on the next key frame according to the target data amount.
  • After the target data amount of the next key frame is obtained, the next key frame is encoded according to the determined target data volume.
  • In other words, the encoded size of the key frame is calculated in advance; this pre-calculated encoding size is the target data volume of the next key frame, and the encoded key frame size equals this preset encoding size. A target data volume determined in this way allows transmission of the key frame to be completed within one uplink time slot, thereby reducing the transmission delay.
  • the intensity information of the next key frame is predicted through historical intensity information, and the target data amount of the next key frame is determined according to the predicted intensity information of the next key frame.
  • the transmission of the key frame can be completed within the uplink time slot of one 5G frame without waiting for the frame period of 5 ms.
  • the encoded size of a 1080p key frame typically ranges from about 50 KB to 200 KB, and the amount of data that one 5G uplink time slot can carry under no-load conditions is about 125 KB, so adapting the transmission of key frames to the state of the network is feasible.
  • FIG. 4 schematically shows a flow of steps for determining the target data amount of the next key frame according to the intensity information of the next key frame in an embodiment of the present application.
  • In step S303, determining the target data amount of the next key frame according to the strength information of the next key frame may mainly include the following steps S401 to S403.
  • Step S401 according to the intensity information of the next key frame, determine the modulation and coding strategy of the next key frame.
  • The Modulation and Coding Scheme (MCS) generally describes the choice of channel coding and constellation modulation used for physical-layer transmission. The transmitted data is processed according to the MCS, and different MCS levels have different coding rates, which affects the amount of payload data that the same radio resource block can actually carry.
  • Step S402 according to the modulation and coding strategy of the next key frame, calculate the data volume of the next key frame.
  • The coding rate corresponding to the modulation and coding scheme of the next key frame is obtained, and this coding rate determines the calculated data volume of the next key frame.
  • Step S403 comparing the calculated data volume with the actual data volume to determine the target data volume of the next key frame.
  • The calculated data volume is not used directly as the target data volume of the next key frame; instead, it is compared with the actual data volume of the video stream, and the more appropriate of the two is selected as the target data volume. This ensures that the video stream can still be transmitted normally, and the video quality is not greatly reduced, while the transmission delay is lowered.
  • In this way, the target data volume of the next key frame is estimated from the strength information of the next key frame and determined before the next key frame is encoded; when determining it, the calculated data volume is compared with the actual data volume and the more appropriate one is selected as the target data volume, reducing the transmission delay while ensuring normal transmission of the video stream.
  • FIG. 5 schematically shows a flow of steps for determining a modulation and coding strategy for a next key frame according to intensity information of the next key frame in an embodiment of the present application.
  • In step S401, determining the modulation and coding scheme of the next key frame according to the strength information of the next key frame may mainly include the following steps S501 to S502.
  • step S501 the terminal acquires a mapping relationship table between modulation and coding strategies and intensity information.
  • The mapping relationship table between modulation and coding schemes and strength information is, in other words, a mapping table between MCS and RSRP, which describes the MCS level corresponding to each RSRP segment.
  • the rules for determining the MCS table can be pre-configured, so that during the access process of the video sending terminal, the MCS table used by each channel can be determined according to the pre-configured rules.
  • the pre-configured rule may be to determine the MCS tables used by all or part of the channels during the access process of the video sending terminal according to the current measurement value of RSRP.
  • Because the current RSRP measurement of the video sending terminal reflects its current user capability and its actual transmission performance, the MCS table used by each channel can be determined from the current RSRP measurement of that channel during the random access process, which better matches the terminal's current real-time performance requirements. The video sending terminal can therefore measure the RSRP during the access process and determine the MCS table used by each channel according to the range of the current RSRP measurement.
  • After the video sending terminal measures the current RSRP value, it can send that value to the base station, so that the base station can determine the MCS table used by each channel matching the current RSRP measurement, from which the mapping relationship between MCS and RSRP is obtained.
  • Step S502 according to the intensity information of the next key frame, look up the mapping relationship table between the modulation and coding strategy and the intensity information, and determine the modulation and coding strategy of the next key frame.
  • The mapping relationship table between the modulation and coding scheme and the strength information is the MCS-RSRP mapping table, i.e., the correspondence between MCS and RSRP. Since MCS is discrete while RSRP is continuous, an RSRP value within a certain interval generally corresponds to one MCS, so an interval is established for each MCS.
  • After the new RSRP, that is, the strength information of the next key frame, is obtained, it is compared against the historical data: all historical RSRP values have already been classified into their corresponding MCS classes, and the new RSRP is assigned to the MCS class whose RSRP values have the smallest average distance to it. For example, if only MCS levels 0 and 1 exist, each historical RSRP corresponds to either MCS 0 or MCS 1, and each of the two levels corresponds to many historical RSRP values; the new RSRP is assigned to whichever class it is closest to on average.
  • Looking up the mapping relationship table between the modulation and coding scheme and the strength information in this way determines the modulation and coding scheme of the next key frame.
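  • The look-up just described can be sketched as follows: each MCS level keeps the historical RSRP values assigned to it, and a new RSRP is mapped to the level whose samples it is closest to on average. The data layout and function name are assumptions for illustration only.

```python
def lookup_mcs(new_rsrp, rsrp_by_mcs):
    """Map a predicted RSRP (dBm) to an MCS level.

    rsrp_by_mcs: dict {mcs_level: [historical RSRP values assigned to that
    MCS]}. The new RSRP is assigned to the MCS whose historical samples
    have the smallest average absolute distance to it, as described above.
    """
    def avg_distance(samples):
        return sum(abs(new_rsrp - s) for s in samples) / len(samples)

    return min(rsrp_by_mcs, key=lambda mcs: avg_distance(rsrp_by_mcs[mcs]))

# Example: two MCS classes built from recent base-station allocations.
table = {0: [-105.0, -103.5, -104.2], 1: [-96.0, -95.2, -97.1]}
print(lookup_mcs(-96.8, table))  # the new RSRP is closest to class 1
```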
  • the encoding strategy adjustment is performed based on the network perception of the video sending terminal without waiting for feedback from the receiving end, and the adjustment is more timely.
  • FIG. 6 schematically shows a flow of steps for obtaining a modulation and coding strategy in a mapping relationship table between modulation and coding strategies and intensity information in an embodiment of the present application.
  • obtaining the mapping relationship table between the modulation and coding strategy and the strength information may mainly include the following steps S601 to S603.
  • Step S601 the terminal obtains the historical modulation and coding strategy and the historical intensity information assigned by the base station in real time, and uses the coding information corresponding to the historical modulation and coding strategy as a classification label, and the historical modulation and coding strategy is used to represent the modulation and coding strategy corresponding to the historical moment.
  • Step S602 clustering the historical intensity information according to the coded information classification labels to obtain a cluster classifier.
  • Step S603 input each intensity value in the distribution interval of the historical intensity information value into the clustering classifier, and obtain the classification information corresponding to the classification label of the encoding information, so as to obtain the mapping relationship table between the modulation encoding strategy and the intensity information.
  • the terminal records the MCS information and RSRP information allocated by the base station in the recent uplink transmission obtained from the 5G module, and establishes the MCS and RSRP mapping relationship table.
  • The table can be built using a clustering method or the like, chosen so that the average distance between the historical real MCS values and the MCS values obtained by looking up the table with the corresponding RSRP is minimized.
  • If the terminal cannot obtain the MCS information allocated by the base station from the 5G module in real time, it can measure offline through simulation or testing, establish the MCS-RSRP mapping relationship table in advance, and store it in the terminal.
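  • A minimal sketch of building such a table from recorded (RSRP, MCS) pairs is given below; the clustering step is reduced here to grouping samples by the MCS actually granted, which keeps the per-class statistics used by the look-up above. Field names and the grouping itself are illustrative assumptions.

```python
from collections import defaultdict

def build_mcs_rsrp_table(records):
    """Build an MCS -> historical-RSRP mapping from the uplink allocation
    records reported by the 5G module (or measured offline in advance).

    records: iterable of (rsrp_dbm, mcs_level) pairs observed in recent
    uplink transmissions. Grouping by the granted MCS keeps the average
    distance between the real MCS and the MCS found by RSRP look-up small.
    """
    table = defaultdict(list)
    for rsrp, mcs in records:
        table[mcs].append(rsrp)
    return dict(table)

# Example: recent (RSRP, MCS) observations recorded by the terminal.
observations = [(-104.0, 0), (-103.1, 0), (-96.5, 1), (-95.8, 1), (-88.2, 3)]
print(build_mcs_rsrp_table(observations))
```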
  • FIG. 7 schematically shows a process of comparing the calculated data volume with the actual data volume to determine the target data volume of the next key frame in an embodiment of the present application.
  • In step S403, comparing the calculated data volume with the actual data volume to determine the target data volume of the next key frame may mainly include the following steps S701 to S702.
  • step S701 the terminal adjusts the calculated data volume.
  • After the terminal calculates the data volume, it adjusts the calculated data volume, specifically by applying an adjustment coefficient, so as to ensure that the resulting data volume can be transmitted normally.
  • step S702 the adjusted data volume is compared with the actual data volume, and the data volume with a smaller value is selected as the target data volume of the next key frame.
  • The adjusted data volume is compared with the actual data volume of the video stream. If the adjusted data volume is smaller than the actual data volume, the adjusted data volume is selected as the target data volume of the next key frame and the next key frame is intra-coded according to it; if the actual data volume is smaller than the adjusted data volume, the actual data volume is selected as the target data volume and the next key frame is intra-coded accordingly.
  • In other words, the smaller of the two data volumes is selected as the target data volume of the next key frame, which greatly reduces the transmission time and, while reducing the transmission delay, still ensures normal transmission of the video stream.
  • Further adjustments can be made through the quantization parameter, so as to reduce the bit rate while preserving video quality. Specifically: obtain the current quantization parameter value; judge whether the output bit rate corresponding to the current quantization parameter value meets the preset threshold; if it does, no adjustment is required; if it does not, adjust the current quantization parameter value to the target quantization parameter value.
  • The quantization parameter (QP) is one of the main parameters used in video encoding.
  • When the QP takes its minimum value of 0, the quantization of the video is the finest; when the QP takes its maximum value, the quantization is the coarsest.
  • video content providers such as video websites need to transcode the original video in order to make the video content meet the requirements of transmission and playback on the Internet.
  • Video transcoding is the basis of almost all Internet video services, including live broadcast, on-demand and so on. The goal of video transcoding is very simple, which is to obtain smooth and clear video data. However, fluency and clarity are two conflicting requirements.
  • the code rate refers to the number of data bits transmitted per unit of time during data transmission, usually in kbps, that is, "kilobits per second". Usually, for a video, when the bit rate is too low, the picture is not clear; and when the bit rate is too high, it cannot be played smoothly on the network.
  • the current quantization parameter value can be adjusted to obtain the target quantization parameter value.
  • The bit rate of the output video is inversely related to the quantization parameter value adopted by the video encoder: the larger the quantization parameter value, the lower the bit rate of the transcoded output video, and the smaller the quantization parameter value, the higher the bit rate. The quantization parameter can therefore be adjusted according to how the bit rate of the transcoded output video compares with the preset requirement. For example, when the output bit rate is greater than the preset requirement, the currently used quantization parameter value is relatively small and can be raised by a certain amount to obtain the target quantization parameter value.
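  • The quantization-parameter adjustment described above amounts to a simple feedback rule, sketched below: if the measured output bit rate exceeds the preset threshold, the QP is raised (coarser quantization, lower bit rate), otherwise it is left unchanged. The step size, the 0-51 clamping range, and the function name are assumptions; real encoders expose their own rate-control interfaces.

```python
def adjust_qp(current_qp, output_bitrate_kbps, target_bitrate_kbps,
              step=2, qp_min=0, qp_max=51):
    """Return the target QP given the measured output bit rate.

    A bit rate above the preset threshold means the current QP is too
    small, so QP is raised; otherwise the current QP is kept. The 0-51
    range follows common H.264/H.265 practice.
    """
    if output_bitrate_kbps > target_bitrate_kbps:
        return min(qp_max, current_qp + step)
    return max(qp_min, current_qp)

print(adjust_qp(current_qp=28, output_bitrate_kbps=5200, target_bitrate_kbps=4000))
```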
  • FIG. 8 schematically shows a flow of steps for adjusting the calculated data volume in an embodiment of the present application.
  • In step S701, adjusting the calculated data volume may mainly include the following steps S801 to S802.
  • step S801 the terminal acquires the load redundancy of the current network.
  • the load redundancy of the current network represents the state of the current network, and by obtaining the state of the current network, it is convenient for subsequent adjustments to the amount of data.
  • The load redundancy of the current network can be adjusted dynamically according to the results of packet loss and network bandwidth estimation.
  • step S802 an adjusted data volume is obtained according to the product of the calculated data volume, load redundancy, preset protection ratio, and preset adjustment coefficient.
  • Regarding the protection ratio P: for example, if the maximum target data volume is 100 KB and the protection ratio is set to 90%, the target data volume is reduced to 100 KB × 90% for transmission, so that even if there is a deviation in the prediction the target data volume can still be transmitted normally. Setting a protection ratio thus reduces the impact of prediction deviation on the final result.
  • If the next key frame adopts forward error correction coding, the preset adjustment coefficient is 1/(1+R), where R represents the redundancy rate of the forward error correction coding; if forward error correction coding is not used, the adjustment coefficient is 1.
  • When forward error correction (FEC) coding is applied to the next key frame, the encoded output becomes larger, so the preset adjustment coefficient is applied to obtain a target data volume that can actually be transmitted.
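  • Putting steps S701 to S702 and S801 to S802 together, the sketch below computes the adjusted data volume as the product of the calculated data volume, the load redundancy, the protection ratio, and the adjustment coefficient (1/(1+R) when FEC is used, 1 otherwise), and then takes the smaller of the adjusted and actual data volumes as the target. The default values are illustrative assumptions.

```python
def target_key_frame_size(calculated_bytes, actual_bytes, load_redundancy,
                          protection_ratio=0.9, fec_redundancy_rate=None):
    """Determine the target data volume (bytes) of the next key frame.

    adjusted = calculated * load_redundancy * protection_ratio * k, where
    k = 1 / (1 + R) if forward error correction with redundancy rate R is
    applied to the key frame and k = 1 otherwise; the target is the
    smaller of the adjusted and actual data volumes. The 90% protection
    ratio is only an example value.
    """
    k = 1.0 / (1.0 + fec_redundancy_rate) if fec_redundancy_rate else 1.0
    adjusted = calculated_bytes * load_redundancy * protection_ratio * k
    return min(adjusted, actual_bytes)

# Example: ~125 KB calculated from the predicted MCS, 150 KB actual I frame.
print(int(target_key_frame_size(125_000, 150_000,
                                load_redundancy=0.95,
                                fec_redundancy_rate=0.1)))
```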
  • FIG. 9 schematically shows a flow of steps for obtaining the load redundancy of the current network in an embodiment of the present application.
  • In step S801, obtaining the load redundancy of the current network may mainly include the following steps S901 to S902.
  • Step S901 acquiring time delay information of historical key frames.
  • the time delay information of historical key frames is used as feedback to determine the current network state.
  • Step S902 determining the load redundancy of the current network according to the delay information.
  • Obtaining the load redundancy of the current network from the delay information of historical key frames helps obtain a more accurate load redundancy and, in turn, a more accurate target data volume.
  • Determining the load redundancy of the current network according to the delay information means that the video sending terminal collects real-time statistics on the network transmission status and dynamically adjusts the load redundancy. Specifically: count the network round-trip delays of N consecutive data packets over a period of time and obtain the mean and standard deviation of these round-trip delays; from the statistics of the consecutive round-trip delays in this period, determine a delay threshold; obtain the current round-trip delay of a single data packet transmitted by the sending end; compare this current round-trip delay with the delay threshold obtained from the delays of the preceding consecutive packets; if the current round-trip delay is greater than or equal to the threshold, the delay is considered too long, the current data packet is judged to have been lost during network transmission, and a packet loss is recorded; if the current round-trip delay is less than the threshold, no packet loss is recorded.
  • A sliding window of size M is used for the delays: whenever a new data-packet acknowledgement is obtained, the earliest round-trip delay in the window is removed and the new result is added to the window.
  • This realizes real-time monitoring of packet round-trip delay and packet loss, from which the packet loss rate is obtained. From the packet loss statistics of N data-packet transmissions, the mean and standard deviation of the packet loss rate within this period are obtained; a reference ratio for adjusting the load redundancy of the current network is calculated; the redundancy of the video sending terminal is adjusted according to this reference ratio, and the number of data packets and the number of redundant packets are updated. A sliding window of size M is likewise used for the loss rate: whenever a new packet loss rate is obtained, the earliest packet-loss-rate sample in the window is removed and the new result is added, realizing real-time monitoring of the network status.
  • In this way, the video sending terminal estimates the network status in real time from its own packet-loss statistics; collecting these statistics at the sending terminal avoids the statistical delay that would be introduced by waiting for feedback from the receiving end.
  • When the sending terminal computes the packet loss rate itself, the transmission result of a data packet is available within one round-trip delay and the loss statistics are ready at the end of the statistical cycle, whereas relying on the receiving end would require additional feedback and processing.
  • the mean value and standard deviation used for the test are updated through the sliding window during the online statistical test process, making the statistical process more timely.
  • In this way, stable and efficient transmission of data packets can be realized: on the one hand, the amount of data entering the network per unit time remains constant, and on the other hand, the delay caused by variation in packet sending intervals is reduced.
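  • A simplified sketch of the sender-side statistics described above follows: the round-trip delays of recent packets are kept in a sliding window, a delay threshold is derived from their mean and standard deviation, and any packet whose delay reaches the threshold is counted as lost. The threshold formula (mean plus three standard deviations), the window sizes, and the class name are assumptions for illustration.

```python
from collections import deque
from statistics import mean, pstdev

class LossMonitor:
    """Sender-side packet-loss inference from round-trip delays.

    Keeps the last window_size round-trip delays; a new delay that reaches
    mean + 3 * std of the window is treated as a lost packet, mirroring
    the threshold comparison described above.
    """
    def __init__(self, window_size=50):
        self.rtts = deque(maxlen=window_size)        # sliding delay window
        self.loss_flags = deque(maxlen=window_size)  # 1 = counted as lost

    def observe(self, rtt_ms):
        if len(self.rtts) >= 2:
            threshold = mean(self.rtts) + 3 * pstdev(self.rtts)
            lost = rtt_ms >= threshold
        else:
            lost = False
        self.rtts.append(rtt_ms)
        self.loss_flags.append(1 if lost else 0)
        return lost

    def loss_rate(self):
        return sum(self.loss_flags) / len(self.loss_flags) if self.loss_flags else 0.0

mon = LossMonitor()
for rtt in [12, 13, 11, 12, 14, 13, 55, 12]:  # the 55 ms spike is inferred as a loss
    mon.observe(rtt)
print(round(mon.loss_rate(), 3))
```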
  • step S901 acquiring time delay information of historical key frames includes:
  • the delay information can be obtained based on the feedback delay of the receiving end or the delay information of clearing the transmission buffer observed from the 5G module, which facilitates the acquisition of the delay information and makes the obtained delay information more accurate.
  • FIG. 10 schematically shows a flow of steps for determining the load redundancy of the current network according to delay information in an embodiment of the present application.
  • In step S902, determining the load redundancy of the current network according to the delay information may mainly include the following steps S1001 to S1003.
  • Step S1001 setting an initial load redundancy.
  • Step S1002 if the time window of the delay information is greater than the preset time window of the delay, reduce the load redundancy according to the first preset step size, and use the adjusted final load redundancy as the load redundancy of the current network .
  • That is, the load redundancy is reduced step by step until the time window of the delay information equals the preset delay time window, and the last adjusted load redundancy is taken as the load redundancy of the current network.
  • Step S1003 if the time window of the delay information is smaller than the preset time window of the delay, increase the load redundancy according to the second preset step size, and use the adjusted final load redundancy as the load redundancy of the current network.
  • Here, the first preset step size is greater than the second preset step size.
  • That is, the load redundancy is increased according to the second preset step size until the time window of the delay information equals the preset delay time window, and the finally adjusted load redundancy is used as the load redundancy of the current network.
  • By adjusting the load redundancy with different step sizes depending on the delay information, the load redundancy can be adjusted quickly and dynamically, yielding a more accurate load redundancy for the current network, which in turn helps obtain a more accurate target data volume.
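  • A minimal sketch of steps S1001 to S1003 is shown below: starting from an initial load redundancy, the redundancy is decreased with a larger step while the observed delay window exceeds the preset window, increased with a smaller step while it falls short, and the last adjusted value is taken as the current load redundancy. The step sizes, tolerance, bounds, and the measurement callback are assumptions.

```python
def tune_load_redundancy(measure_delay_window_ms, preset_window_ms,
                         initial_redundancy=1.0, step_down=0.1, step_up=0.02,
                         tolerance_ms=0.5, min_redundancy=0.1, max_iters=50):
    """Adjust the load redundancy until the delay window observed for the
    historical key frames matches the preset delay window.

    measure_delay_window_ms: callable(redundancy) -> observed delay window
    under that redundancy, fed in practice by the terminal's own delay
    statistics. step_down > step_up mirrors the requirement that the first
    preset step size be larger than the second.
    """
    redundancy = initial_redundancy
    for _ in range(max_iters):
        observed = measure_delay_window_ms(redundancy)
        if observed > preset_window_ms + tolerance_ms:
            redundancy = max(min_redundancy, redundancy - step_down)  # step S1002
        elif observed < preset_window_ms - tolerance_ms:
            redundancy = redundancy + step_up                         # step S1003
        else:
            break
    return redundancy

# Toy example in which the delay window grows linearly with the redundancy.
print(round(tune_load_redundancy(lambda r: 5.0 + 10.0 * r, preset_window_ms=12.0), 2))
```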
  • the time window of the preset delay is determined according to the delay of the forward reference frame transmission, or determined according to the test value of the empty-load network environment.
  • Group of Pictures refers to a group of continuous pictures in video, which is used as a frame group in video coding.
  • Within a group of pictures, the first encoded frame is an I frame and the subsequent frames are forward reference frames, that is, P frames; the preset delay time window can be determined from the transmission delay of these forward reference frames.
  • the window of the preset delay can also be set according to the corresponding network environment when there is no load.
  • In an embodiment of the present application, the method further includes: the terminal acquires media frames and puts them into a buffer queue; it determines the total number of media frames in the buffer queue and the total length of the media frames sent between the last bandwidth calculation and the current time, and calculates the current bandwidth and the current network congestion level from that total number and total length; according to the current bandwidth and congestion level, it determines the stream adjustment type and calculates the encoding adjustment parameters. These parameters include not only the adjusted bit-stream value but also the adjusted frame-rate value, so that when the bit stream is lowered the frame rate is lowered correspondingly: when the bit stream is relatively low, an excessively high frame rate adds little value, and lowering the frame rate in proportion effectively reduces the image-quality degradation caused by the bit-stream reduction. When the stream adjustment type is a down-adjustment, the bit-stream value to be lowered is calculated from the current bandwidth, so that fluency is ensured.
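  • The bandwidth-driven down-adjustment just described can be sketched as below: the current bandwidth is estimated from the data sent since the previous calculation, and when it cannot sustain the current bit stream, the bit stream is lowered to fit within the bandwidth and the frame rate is lowered by the same ratio. The headroom factor, minimum frame rate, and function names are illustrative assumptions; the congestion-level computation is not reproduced.

```python
def estimate_bandwidth_kbps(bytes_sent_since_last_calc, elapsed_s):
    """Simple throughput estimate of the current uplink bandwidth from the
    total length of media frames sent since the last calculation."""
    return (bytes_sent_since_last_calc * 8 / 1000.0) / elapsed_s

def adjust_stream(current_bitrate_kbps, current_fps, bandwidth_kbps,
                  headroom=0.85, min_fps=10):
    """If the estimated bandwidth cannot sustain the current bit stream,
    lower the bit stream to fit within a headroom of the bandwidth and
    reduce the frame rate by the same ratio, as described above."""
    budget = bandwidth_kbps * headroom
    if current_bitrate_kbps <= budget:
        return current_bitrate_kbps, current_fps      # no down-adjustment needed
    ratio = budget / current_bitrate_kbps
    return budget, max(min_fps, current_fps * ratio)  # lower both together

bw = estimate_bandwidth_kbps(bytes_sent_since_last_calc=600_000, elapsed_s=1.0)
print(adjust_stream(current_bitrate_kbps=6000, current_fps=30, bandwidth_kbps=bw))
```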
  • FIG. 11 schematically shows a structural block diagram of a video coding apparatus provided by an embodiment of the present application.
  • the video encoding device 1100 includes:
  • the acquiring module 1110 is configured to acquire historical strength information in a historical time period, and the historical strength information is used to indicate the wireless network signal strength of video transmission corresponding to each historical moment in the historical time period;
  • the prediction module 1120 is used to predict the strength information of the next key frame according to the historical strength information, and the strength information of the next key frame is used to represent the wireless network signal strength for transmitting the next key frame;
  • the determination module 1130 is configured to determine the target data volume of the next key frame according to the intensity information of the next key frame, and perform intra-frame encoding on the next key frame according to the target data volume.
  • the determining module 1130 includes:
  • the first determining unit is used to determine the modulation and coding strategy of the next key frame according to the intensity information of the next key frame;
  • a calculation unit configured to calculate the data volume of the next key frame according to the modulation and coding strategy of the next key frame
  • the second determination unit is configured to determine the target data volume of the next key frame by comparing the calculated data volume with the actual data volume.
  • the first determining unit is used to obtain a mapping relationship table between the modulation and coding strategy and the strength information, and, according to the strength information of the next key frame, to look up this mapping relationship table to determine the modulation and coding strategy of the next key frame.
  • the first determining unit is used to obtain, in real time, the historical modulation and coding strategy and the historical strength information assigned by the base station, and to use the coding information corresponding to the historical modulation and coding strategy as a classification label, where the historical modulation and coding strategy represents the modulation and coding strategy corresponding to a historical moment; to cluster the historical strength information according to the coding-information classification labels to obtain a cluster classifier; and to input the strength values in the value-distribution interval of the historical strength information into the cluster classifier to obtain the classification information corresponding to the coding-information classification labels, thereby obtaining the mapping relationship table between the modulation and coding strategy and the strength information.
  • the second determining unit is used to adjust the calculated data volume, compare the adjusted data volume with the actual data volume, and select the smaller data volume as the target data volume of the next key frame.
  • the second determining unit is used to obtain the load redundancy of the current network, and to obtain the adjusted data volume from the product of the calculated data volume, the load redundancy, the preset protection ratio, and the preset adjustment coefficient.
  • the second determining unit is configured to set the preset adjustment coefficient to 1/(1+R) if the next key frame adopts forward error correction coding, where R represents the redundancy rate of the forward error correction coding, and to set the adjustment coefficient to 1 if the next key frame does not use forward error correction coding.
  • the second determining unit is configured to obtain delay information of historical key frames; determine the load redundancy of the current network according to the delay information.
  • the second determining unit is used to take the delay information fed back by the receiving end as the delay information of the historical key frames, or to take the monitored delay information of clearing the sending buffer as the delay information of the historical key frames.
  • the second determining unit is used to set an initial load redundancy; if the time window of the delay information is greater than the preset delay time window, to reduce the load redundancy according to the first preset step size and use the finally adjusted load redundancy as the load redundancy of the current network; and if the time window of the delay information is smaller than the preset delay time window, to increase the load redundancy according to the second preset step size and use the finally adjusted load redundancy as the load redundancy of the current network; wherein the first preset step size is greater than the second preset step size.
  • the time window of the preset delay is determined according to the delay of the forward reference frame transmission, or determined according to the test value of the empty-load network environment.
  • For the acquisition module, there is a sliding time window with a preset time interval between the historical time period and the encoding moment of the next key frame.
  • Fig. 12 schematically shows a structural block diagram of a computer system for implementing an electronic device according to an embodiment of the present application.
  • The computer system 1200 includes a central processing unit 1201 (Central Processing Unit, CPU), which can execute various appropriate actions and processes according to a program stored in a read-only memory 1202 (Read-Only Memory, ROM) or a program loaded from a storage part 1208 into a random access memory 1203 (Random Access Memory, RAM). The random access memory 1203 also stores various programs and data necessary for system operation.
  • the central processing unit 1201 , the read-only memory 1202 and the random access memory 1203 are connected to each other through a bus 1204 .
  • An input/output interface 1205 (Input/Output interface, ie, I/O interface) is also connected to the bus 1204 .
  • the following components are connected to the input/output interface 1205: an input part 1206 including a keyboard, a mouse, etc.; an output part 1207 including a cathode ray tube (Cathode Ray Tube, CRT), a liquid crystal display (Liquid Crystal Display, LCD), etc., and a speaker ; a storage section 1208 including a hard disk or the like; and a communication section 1209 including a network interface card such as a LAN card, a modem, or the like. The communication section 1209 performs communication processing via a network such as the Internet.
  • a driver 1210 is also connected to the input/output interface 1205 as needed.
  • a removable medium 1211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is mounted on the drive 1210 as necessary so that a computer program read therefrom is installed into the storage section 1208 as necessary.
  • the processes described in the respective method flowcharts can be implemented as computer software programs.
  • the embodiments of the present application include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flow charts.
  • the computer program may be downloaded and installed from a network via communication portion 1209 and/or installed from removable media 1211 .
  • when the computer program is executed by the central processing unit 1201, various functions defined in the system of the present application are executed.
  • the computer-readable medium shown in the embodiment of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the above.
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block in the block diagrams or flowchart illustrations, and combinations of blocks in the block diagrams or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the technical solutions according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This application belongs to the field of communication technologies, and specifically relates to a video coding method and apparatus, a computer-readable medium, and an electronic device. The video coding method is performed by an electronic device and includes: acquiring historical strength information within a historical time period, where the historical strength information represents the network signal strength of wireless video transmission at each historical moment within the historical time period; predicting strength information of the next key frame according to the historical strength information, where the strength information of the next key frame represents the wireless network signal strength for transmitting the next key frame; and determining a target data size of the next key frame according to the strength information of the next key frame, and performing intra-frame coding on the next key frame according to the target data size.

Description

视频编码方法、装置、计算机可读介质及电子设备
本申请要求于2021年8月2日提交中国专利局、申请号为202110883015.7、发明名称为“视频编码方法、装置、计算机可读介质及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于通信技术领域,具体涉及一种视频编码方法、视频编码装置、计算机可读介质以及电子设备。
发明背景
视频编码,其目的是消除视频信号间存在的冗余信息。随着多媒体数字视频应用的不断发展,原始视频数据量已使现有传输网络带宽和存储资源无法承受,因而经编码压缩后的视频才是宜在网络中传输的信息,视频编码技术已成为目前国内外学术研究和工业应用的热点之一。另外,随着人工智能的发展和5G时代的到来,更加庞大的视频数据量对视频编码标准提出了更高的要求。
而目前基于5G专网下的视频编码方式存在传输时延较长的问题,如何减少传输时延是亟待解决的问题。
需要说明的是,在上述背景技术部分公开的信息仅用于加强对本申请的背景的理解,因此可以包括不构成对本领域普通技术人员已知的现有技术的信息。
发明内容
本申请的目的在于提供一种视频编码方法、装置、计算机可读介质及电子设备,至少在一定程度上克服相关技术中减少传输时延等技术问题。
本申请的其他特性和优点将通过下面的详细描述变得显然,或部分地通过本申请的实践而习得。
根据本申请实施例的一个方面,提供一种视频编码方法,由电子设备执行,包括:
获取历史时间段内的历史强度信息,所述历史强度信息用于表示所述历史时间段内的各个历史时刻对应的视频传输的无线网络信号强度;
根据所述历史强度信息预测下一关键帧的强度信息,所述下一关键帧的强度信息用于表示传输下一关键帧的无线网络信号强度;
根据所述下一关键帧的强度信息,确定下一关键帧的目标数据量,并按照所述目标数据量对所述下一关键帧进行帧内编码。
根据本申请实施例的一个方面,提供一种视频编码装置,包括:
获取模块,用于获取历史时间段内的历史强度信息,所述历史强度信息用于表示所述历史时间段内各个历史时刻对应的视频传输的无线网络信号强度;
预测模块,用于根据所述历史强度信息预测下一关键帧的强度信息,所述下一关键帧的强度信息用于表示传输下一关键帧的无线网络信号强度;
确定模块,用于根据所述下一关键帧的强度信息,确定下一关键帧的目标数据量,并按照所述目标数据量对所述下一关键帧进行帧内编码。
根据本申请实施例的一个方面,提供一种计算机可读介质,其上存储有计算机程序,该计算机程序被处理器执行时实现如以上技术方案中的视频编码方法。
根据本申请实施例的一个方面,提供一种电子设备,该电子设备包括:处理器;以及存储器,用于存储所述处理器的可执行指令;其中,所述处理器被配置为经由执行所述可执行指令来执行如以上技术方案中的视频编码方法。
根据本申请实施例的一个方面,提供一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行如以上技术方案中的视频编码方法。
在本申请实施例提供的技术方案中,通过历史强度信息预测下一关键帧的强度信息,并根据预测得到的下一关键帧的强度信息,确定下一关键帧的目标数据量。帧在编码之前预先确定目标数据量,之后按照目标数据量通过在下一关键帧内编码,这样,使得关键帧的传输可以在一个上行时隙内完成传输,不需要等待一个帧周期才传输完成,从而减少了传输时延。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的, 并不能限制本申请。
附图简要说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示意性地示出了示例性的5G网络数据传输的过程示意图。
图2示意性地示出了应用本申请技术方案的示例性系统架构框图。
图3示意性地示出了本申请实施例提供的视频编码方法步骤流程。
图4示意性地示出了本申请一实施例中根据下一关键帧的强度信息,确定下一关键帧的目标数据量的步骤流程。
图5示意性地示出了本申请一实施例中根据下一关键帧的强度信息,确定下一关键帧的调制编码策略的步骤流程。
图6示意性地示出了本申请一实施例中获取调制编码策略与强度信息的映射关系表的调制编码策略的步骤流程。
图7示意性地示出了本申请一实施例中根据计算得到的数据量与实际的数据量进行比较,以确定下一关键帧的目标数据量的步骤流程。
图8示意性地示出了本申请一实施例中对计算得到的数据量进行调整的步骤流程。
图9示意性地示出了本申请一实施例中获取当前网络的负载冗余度的步骤流程。
图10示意性地示出了本申请一实施例中根据时延信息确定当前网络的负载冗余度的步骤流程。
图11示意性地示出了本申请实施例提供的视频编码装置的结构框图。
图12示意性示出了适于用来实现本申请实施例的电子设备的计算机系统结构框图。
实施方式
现在将参考附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的范例;相反,提供这些实施方式使得本申请将更加全面和完整,并将示例实施方式的构思全面地传达给本领域的技术人员。
此外,所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多实施例中。在下面的描述中,提供许多具体细节从而给出对本申请的实施例的充分理解。然而,本领域技术人员将意识到,可以实践本申请的技术方案而没有特定细节中的一个或更多,或者可以采用其它的方法、组元、装置、步骤等。在其它情况下,不详细示出或描述公知方法、装置、实现或者操作以避免模糊本申请的各方面。
附图中所示的方框图仅仅是功能实体,不一定必须与物理上独立的实体相对应。即,可以采用软件形式来实现这些功能实体,或在一个或多个硬件模块或集成电路中实现这些功能实体,或在不同网络和/或处理器装置和/或微控制器装置中实现这些功能实体。
附图中所示的流程图仅是示例性说明,不是必须包括所有的内容和操作/步骤,也不是必须按所描述的顺序执行。例如,有的操作/步骤还可以分解,而有的操作/步骤可以合并或部分合并,因此实际执行的顺序有可能根据实际情况改变。
随着人工智能的发展和5G时代的到来,更加庞大的视频数据量对视频编码标准提出了更高的要求。同一个视频,视频编码的压缩率越高,压缩失真越高,用户体验的视频质量越差;压缩率越低,视频的存储和传输的成本越高。如何在两者之间找到一个平衡是视频编码技术中的一个难点。目前基于网络感知的自适应编码技术,已成为音视频实时通信中的关键技术。
在5G网络中,上下行无线传输资源会利用不同的帧来进行配置,上行帧上只能允许上行数据的传输(从终端到基站),下行帧上只能允许下行数据的传输(从基站到终端)。
参见图1,图1示意性地示出了示例性的5G网络数据传输的过程示意图。目前常见的5G网络的帧配置为3D1U,即一个时间周期5ms,有3个下行帧,1个上行帧(U帧),还有1个特殊子帧,每个帧时间长度为1ms。1个U帧可以承载 的上行数据量具体跟终端到基站的信号强度以及基站的调度等相关。当终端上行视频流的I帧数据无法在当前U帧上传输完成时,会被调度到下一个U帧或下下个U帧进行上行传输,直到数据全部传输完成。参见图1,若I帧数据在当前U帧上只上传完成第一部分Part1,则剩下的第二部分Part2只能被调度到下一个U帧进行上行传输,则需要再等待5ms的帧周期。因此,现有的5G网络,在视频传输时存在传输时延的技术问题。
为了解决传输时延的问题,目前常用的方式是基于网络感知采用自适应编码技术。
目前现有的自适应编码主要包括如下两种方式,第一种是直接反馈调整的方式,其主要基于发端反馈的时延和丢包情况,来直接调整编码参数。第二种是基于路由拥塞模型预测调整,即基于发端反馈的时延和时延抖动,利用维纳滤波或者时序序列预测等方式,预测网络的时延和丢包率,进行自适应的编码调整。预测模型主要针对公网场景,考虑网络时延主要是由路由器转发造成的,在路由器拥塞时,会有较大的时延抖动,甚至可能会发生丢包。但是上述提到的两种自适应编码方式都会存在问题,具体地直接根据收端反馈来调整的方式,编码器调整存在一定的滞后性,而基于路由器拥塞模型来预测调整的方式,无法适应以5G空口时延为主的5G专网场景。
针对上述情况,本申请提出了针对5G空口特性设计,改善5G专网场景下的时延问题对应的编码方法。
具体地,参见图2,图2示意性地示出了应用本申请技术方案的示例性系统架构框图。
参见图2所示,系统架构包括视频发送终端210和网络220,视频发送终端210和网络220通信连接。视频发送终端210包括编码端211和5G模块212,视频流经过编码端211进行编码,在编码端211经过编码后将编码后的视频流发送到5G模块212。为了降低传输时延,在编码端211进行编码时,需要先确定待传输的关键帧目标数据量。为了确定下一关键帧的目标数据量,具体通过如下的视频编码方式确定。
需要说明的是,视频关键帧通常为I帧,即视频编码中采用帧内编码技术编码后的帧,可以独立解码,解码时不依赖于其他帧,因此通常其大小较大,有区 别于采用帧间编码技术编码后的帧,因此,本申请中用I帧来指代视频关键帧。
下面结合具体实施方式对本申请提供的视频编码方法做出详细说明。
参见图3,图3示意性地示出了本申请实施例提供的视频编码方法步骤流程,可以由图12所示的电子设备执行。本申请公开了一种视频编码方法主要可以包括如下步骤S301至步骤S303。
步骤S301,终端获取历史时间段内的历史强度信息,历史强度信息用于表示历史时间段内历史时刻对应的视频传输的无线网络信号强度。
终端进行上行视频流传输时,在下一个帧内关键帧(I帧)到来前的T1时间,开始执行本申请的视频编码方法。具体的视频编码方法为,终端获取历史时间段内的历史强度信息,其中参考信号接收功率(Reference Signal Receiving Power,RSRP)代表网络信号强度,先获取最近时间窗口T2时间序列内收到的RSRP。
在本申请的一个实施例中,历史时间段与下一关键帧的编码时刻具有预设时间间隔的滑动时间窗口,时间窗口会随时间推移向前滑动。这样便于获取得到相对应的历史时刻的视频传输的网络信号强度。
步骤S302,根据历史强度信息预测下一关键帧的强度信息,下一关键帧的强度信息用于表示传输下一关键帧的无线网络信号强度。
根据预设时间窗口内对应的历史强度信息对下一关键帧的强度信息进行预测,根据时间窗口T2内收到的RSRP时间序列值对关键帧传输时刻的RSRP进行预测,通过设置一个时间段内对应的多个RSRP值作预测,这样得到的预测结果比较稳定,若只是用单个值做预测,这样会有抖动的情况发生。具体地,在对下一关键帧的RSRP进行预测时可用线性回归、零阶保持、XGBOOST或神经网络模型等进行预测。
在一实施例中,对下一关键帧的RSRP采用线性回归的方式进行预测时,具体地,首先获取历史帧对应的RSRP数据集;然后根据历史帧对应的RSRP数据集生成聚类;同时根据生成的聚类计算线性回归系数;再通过生成的聚类和线性回归系数,得到聚类的线性回归函数;再根据所得聚类的线性回归函数,计算K次聚类的线性回归;最后根据所得K次聚类的线性回归结果得到下一关键帧的RSRP。
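As a rough illustration of the idea in the preceding paragraph, the sketch below fits a plain least-squares line to the RSRP samples in the recent time window and extrapolates it to the key-frame transmission time. It is a minimal sketch, not the clustered linear-regression procedure described above; the sample values and function names are illustrative assumptions.

```python
# Minimal sketch: predict the RSRP at the next key frame's transmission time by
# fitting a straight line to the RSRP samples in the recent sliding window.
# This is a simplified stand-in for the clustered variant described in the text;
# the sample values are illustrative assumptions.
import numpy as np

def predict_rsrp(history: list[tuple[float, float]], t_next: float) -> float:
    """history: (timestamp_seconds, rsrp_dbm) samples inside the sliding window."""
    t = np.array([h[0] for h in history])
    rsrp = np.array([h[1] for h in history])
    slope, intercept = np.polyfit(t, rsrp, deg=1)  # least-squares line fit
    return slope * t_next + intercept

history = [(0.0, -95.0), (0.2, -94.5), (0.4, -93.8), (0.6, -93.2), (0.8, -92.9)]
print(predict_rsrp(history, 1.0))  # extrapolated RSRP (dBm) at the I-frame time
```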
在一实施例中,对下一关键帧的RSRP采用零阶保持的方式进行预测时,具 体地,基于历史帧对应的RSRP构建连续模型,采用阶跃响应不变法对连续模型进行离散化得到零阶保持离散化模型,接着引入计算时间延时对离散化模型进行优化,以此建立零阶保持离散化模型;以零阶保持离散化模型为基础,分别引入计算时间延时项和扰动项,获取延时模型和扰动模型;以扰动模型为基础,采用状态空间的方法设计扩张状态观测器;建立考虑扰动的零阶保持离散化状态方程,基于此零阶保持离散化状态方程设计扰动状态观测器对扰动进行估计;兼顾系统鲁棒性和系统动态性能,采用直接零极点配置的方法选取合适的RSRP参数,从而得到下一关键帧的RSRP。
在一实施例中,对下一关键帧的RSRP采用XGBOOST算法进行预测时,具体地,获取历史帧对应的RSRP,将历史帧对应的RSRP做为历史特征,将影响RSRP数据的特征和历史特征作为XGBOOST的训练数据集,利用XGBOOST对训练数据集进行训练学习,得到预测模型;用预测模型对预测时间的RSRP进行预测,得到下一关键帧的RSRP预测值。
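A compact sketch of the XGBoost variant is shown below, assuming the last few RSRP samples are used directly as lagged features; the feature layout, window size and hyperparameters are assumptions rather than values taken from the application.

```python
# Sketch: build (last-k samples -> next sample) training pairs from an RSRP
# series and train a gradient-boosted regressor on them. Feature layout and
# hyperparameters are illustrative assumptions.
import numpy as np
from xgboost import XGBRegressor

def make_lagged_dataset(series: np.ndarray, k: int = 5):
    X = np.array([series[i:i + k] for i in range(len(series) - k)])
    y = series[k:]
    return X, y

rsrp_series = np.array([-95.0, -94.6, -94.9, -94.2, -93.8,
                        -93.5, -93.9, -93.1, -92.8, -92.5])
X, y = make_lagged_dataset(rsrp_series, k=5)

model = XGBRegressor(n_estimators=50, max_depth=3, learning_rate=0.1)
model.fit(X, y)

next_features = rsrp_series[-5:].reshape(1, -1)
print(model.predict(next_features)[0])  # predicted RSRP for the next key frame
```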
在一实施例中,对下一关键帧的RSRP采用神经网络模型进行预测时,具体地,选择连续的视频帧作为训练样本,并提取训练样本的帧间差,将帧间差作为生成器模型中编码器的输入,基于损失函数训练获得编码器与解码器的神经网络权值,求解使得损失函数值最小时的预测帧,从而得到下一关键帧的RSRP预测值。
步骤S303,根据下一关键帧的强度信息,确定下一关键帧的目标数据量,并按照目标数据量对下一关键帧进行帧内编码。
由于1个上行帧可以承载的上行数据量具体跟终端到基站的信号强度相关,根据预测得到的下一关键帧的强度信息,从而可以得到下一关键帧的目标数据量,在下一关键帧进行帧内编码时是通过确定的目标数据量进行编码。
由于视频流包括很多帧,视频流在传输的时候,是一帧一帧的过来,在下一个关键帧要编码之前,预先去计算这个关键帧编码的大小是多少,预先计算的这个关键帧编码大小就是指下一个关键帧的目标数据量。在确定下一关键帧的目标数据量之后,通过去设置编码器编码速率的值,然后使得这个视频帧在进行编码的时候,编码出来的关键帧大小是事先预设的编码大小。此时的目标数据量使得关键帧的传输可以在一个上行时隙内完成传输,从而降低了传输时延。
在本申请实施例提供的技术方案中,通过历史强度信息预测下一关键帧的强度信息,并根据预测得到的下一关键帧的强度信息,确定下一关键帧的目标数据量。通过在下一关键帧在编码之前预先确定目标数据量,之后按照目标数据量对下一关键帧进行帧内编码,这样,使得关键帧的传输可以在一个上行时隙内完成传输,不需要等待一个帧周期才传输完成,从而减少了传输时延。本申请通过牺牲一点图像质量来换取更短的时延,使得视频流的传输速率大大提升。
需要说明的是,通过自适应调整关键帧的大小,使得关键帧的传输可以在一个5G帧的上行时隙内完成传输,而不需要再等待5ms的帧周期。目前1080P的关键帧的编码范围为50-200KB左右,一个5G上行时隙空载时可以传输的数据量在125KB左右。根据网络状态,进行关键帧的自适应传输是可行的。
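As a back-of-the-envelope check of the figures above, the sketch below counts how many 5 ms frame periods an I-frame of a given size would occupy; the 150 KB frame size is an assumed example inside the stated 50-200 KB range.

```python
# Illustrative only: how many 5 ms frame periods does an I-frame need when one
# unloaded uplink slot carries roughly 125 KB? The 150 KB frame size is an
# assumed example within the 50-200 KB range mentioned in the text.
import math

def uplink_periods_needed(iframe_bytes: int, bytes_per_u_slot: int = 125_000) -> int:
    return math.ceil(iframe_bytes / bytes_per_u_slot)

periods = uplink_periods_needed(150_000)
print(periods)              # 2: the tail of the frame waits for the next U slot
print((periods - 1) * 5)    # extra waiting time in milliseconds
```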
参见图4,图4示意性地示出了本申请一实施例中根据下一关键帧的强度信息,确定下一关键帧的目标数据量的步骤流程。在本申请的一个实施例中,步骤S303,根据下一关键帧的强度信息,确定下一关键帧的目标数据量,主要可以包括如下的步骤S401至步骤S403。
步骤S401,根据下一关键帧的强度信息,确定下一关键帧的调制编码策略。
在无线通信中,调制编码策略(Modulation and Coding Scheme,简称MCS),一般用于描述物理层传输时信道编码和星座调制的选择。一般发送数据会经过MCS进行处理,不同的MCS会有不同的编码速率,以影响在同一个无线资源块上能够实际承载的发送数据量。为了能够在下一关键帧在编码之前预先确定目标数据量,需要先得到下一关键帧对应的调制编码策略,为了得到下一关键帧的调制编码策略,通过下一关键帧的强度信息去确定。
步骤S402,根据下一关键帧的调制编码策略,计算得到下一关键帧的数据量。
由于不同的调制编码策略会有不同的编码速率,不同的编码速率影响在同一个无线资源块上能够实际承载的发送数据量,通过得到下一关键帧的调制编码策略得到对应的编码速率,通过编码速率对应计算得到下一关键帧的数据量。
步骤S403,根据计算得到的数据量与实际的数据量进行比较,确定下一关键帧的目标数据量。
为了在降低传输时延的同时保证视频流能够正常传输,并不是直接将计算得到的数据量作为下一关键帧的目标数据量,而是需要进行比较,将计算得到的数 据量与视频流的实际数据量进行比较,选择合适的数据量作为目标数据量,这样以保证在降低传输时延的同时保证视频流能够正常传输,而不至于大大降低视频的质量。
这样,通过下一关键帧的强度信息,去预估下一关键帧的目标数据量,通过在下一关键帧在编码之前预先确定目标数据量,且在确定目标数据量时需要将计算得到的数据量与实际的数据量进行比较,选择比较合适的数据量作为目标数据量,从而实现在降低传输时延的同时保证视频流能够正常传输。
参见图5,图5示意性地示出了本申请一实施例中根据下一关键帧的强度信息,确定下一关键帧的调制编码策略的步骤流程。在本申请的一个实施例中,步骤S401,根据下一关键帧的强度信息,确定下一关键帧的调制编码策略,主要可以包括如下步骤S501至步骤S502。
步骤S501,终端获取调制编码策略与强度信息的映射关系表。
调制编码策略与强度信息的映射关系表,即MCS和RSRP的映射关系表,该表描述的是不同的RSRP段对应的MCS等级。
可以预先配置确定MCS表格的规则,从而在视频发送终端的接入过程中,可以根据预先配置的规则,确定各信道使用的MCS表格。比如,预先配置的规则可以是根据RSRP当前的测量值,确定视频发送终端在接入过程中全部或部分信道使用的MCS表格。
由于视频发送终端对RSRP当前的测量值可以反映视频发送终端当前的用户能力,以及视频发送终端当前的实际传输性能,因此,可以根据RSRP当前的测量值,确定在随机接入过程中各信道使用的MCS表格,以更加符合视频发送终端当前的实时性能需求。因此,视频发送终端可以在接入过程中对RSRP进行测量,以根据RSRP当前的测量值所处的范围,确定在接入过程中各信道使用的MCS表格。进一步的,视频发送终端在测量出RSRP的当前的测量值之后,可以将RSRP当前的测量值发送至基站,以使基站确定与RSRP当前的测量值匹配的各信道使用的MCS表格,从而可以得到MCS和RSRP的映射关系表。
步骤S502,根据下一关键帧的强度信息,查找调制编码策略与强度信息的映射关系表,确定下一关键帧的调制编码策略。
通过查找调制编码策略与强度信息的映射关系表,以得到下一关键帧的强度 信息对应的调制编码策略。具体地,调制编码策略与强度信息的映射关系表即MCS和RSRP映射关系表,MCS和RSRP的对应关系,由于MCS是离散的,RSRP是连续的,一般情况为某一个区间的RSRP对应一个MCS,建立一个区间对应一个MCS。
在获得新的RSRP之后,也就是下一关键帧的强度信息之后,会进行比较,由于所有历史的RSRP都归到了对应的MCS的类里面,然后去找新的RSRP跟哪个MCS里面的所有RSRP的平均距离是最小的,则说明新的RSRP属于那个MCS,即与历史的MCS匹配的RSRP的距离最小。假如MCS为0和1,RSRP有很多,历史的RSRP要么对应属于0的MCS要么对应属于1的MCS,为0的MCS对应很多个RSRP的历史值,为1的MCS也对应很多个RSRP的历史值,然后判断新的RSRP是属于哪个MCS,则是看是和为0的MCS对应的所有的历史RSRP的平均距离最小,还是和为1的MCS对应的所有历史RSRP的平均距离最小,如果和为0的MCS对应的所有RSRP的平均距离最小,则新的RSRP归为MCS等于0这一类,从而确定了下一关键帧的调制编码策略,实现了根据下一关键帧的强度信息,查找调制编码策略与强度信息的映射关系表,确定下一关键帧的调制编码策略。
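The nearest-class rule described above can be sketched as follows; the per-MCS RSRP histories are made-up examples, and the function name is an assumption.

```python
# Sketch: assign the predicted RSRP to the MCS class whose historical RSRP
# samples are, on average, closest to it. The sample values are illustrative.
def match_mcs(predicted_rsrp: float, rsrp_by_mcs: dict[int, list[float]]) -> int:
    def avg_distance(samples: list[float]) -> float:
        return sum(abs(predicted_rsrp - s) for s in samples) / len(samples)
    return min(rsrp_by_mcs, key=lambda mcs: avg_distance(rsrp_by_mcs[mcs]))

rsrp_by_mcs = {
    0: [-110.0, -108.0, -107.5],   # historical RSRP values seen with MCS 0
    1: [-98.0, -96.5, -97.2],      # historical RSRP values seen with MCS 1
}
print(match_mcs(-97.0, rsrp_by_mcs))  # -> 1
```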
这样,通过查表的方式,有利于根据得到的强度信息去确定下一关键帧的调制编码策略。另外,需要说明的是,基于视频发送终端网络感知进行编码策略调整,无需等待收端反馈,调整较为及时。
参见图6,图6示意性地示出了本申请一实施例中获取调制编码策略与强度信息的映射关系表的调制编码策略的步骤流程。在本申请的一个实施例中,对于调制编码策略与强度信息的映射关系表的获取方式,具体地,步骤S501,获取调制编码策略与强度信息的映射关系表,主要可以包括如下步骤S601至步骤S602。
步骤S601,终端实时获取由基站分配的历史调制编码策略和所述历史强度信息,并以历史调制编码策略对应的编码信息作为分类标签,历史调制编码策略用于表示历史时刻对应的调制编码策略。
步骤S602,将历史强度信息按编码信息分类标签进行聚类,以得到聚类分类器。
步骤S603,将历史强度信息值分布区间中各强度数值输入聚类分类器,得到 与编码信息分类标签对应的分类信息,以得到调制编码策略与强度信息的映射关系表。
终端记录从5G模块中获取到的近期上行传输中基站分配的MCS信息和RSRP信息,建立MCS和RSRP映射关系表。具体的建表方法,可以采用聚类方法等进行,使得历史真实MCS和利用RSRP查找映射关系表得到的MCS的平均距离最小。
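A minimal way to seed such a table from the logged pairs is simply to group the observed RSRP values under the MCS the base station actually assigned, as sketched below; this is a simplification of the clustering step described above, and the logged pairs are assumed examples.

```python
# Sketch: build a rough MCS <-> RSRP mapping from recent uplink logs by grouping
# observed RSRP values under the MCS assigned by the base station at that time.
# This is a simplification of the clustering approach; the data are made up.
from collections import defaultdict

def build_mcs_table(samples: list[tuple[int, float]]) -> dict[int, list[float]]:
    """samples: (assigned_mcs, measured_rsrp) pairs obtained from the 5G module."""
    table: dict[int, list[float]] = defaultdict(list)
    for mcs, rsrp in samples:
        table[mcs].append(rsrp)
    return dict(table)

logged = [(0, -109.0), (1, -97.5), (0, -107.8), (1, -96.9), (2, -88.0)]
print(build_mcs_table(logged))
```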
这样,通过获取基站分配的调制编码策略和强度信息,有利于调制编码策略与强度信息的映射关系表的建立。
若终端无法实时从5G模块中获取到基站分配的MCS信息,则可利用仿真或测试等手段线下测量,提前建立MCS和RSRP映射关系表,存入终端中。
参见图7,图7示意性地示出了本申请一实施例中根据计算得到的数据量与实际的数据量进行比较,以确定下一关键帧的目标数据量的步骤流程。在本申请的一个实施例中,步骤S403,根据计算得到的数据量与实际的数据量进行比较,以确定下一关键帧的目标数据量,主要可以包括如下步骤S701至步骤S702。
步骤S701,终端对计算得到的数据量进行调整。
在终端计算得到数据量之后,对计算得到的数据量进行调整,具体是通过系数进行调整,以确保得到的数据量能够正常传输。
步骤S702,将调整后的数据量和实际的数据量进行比较,选择数值较小的数据量作为下一关键帧的目标数据量。
将经过调整后的数据量与视频流的实际数据量进行比较,若经过调整后的数据量小于实际的数据量,则选择经过调整后的数据量作为下一关键帧的目标数据量,并按照目标数据量对下一关键帧进行帧内编码。若实际的数据量小于经过调整后的数据量,则选择实际的数据量作为下一关键帧的目标数据量,并按照目标数据量对下一关键帧进行帧内编码。
这样,通过将调整后的数据量和实际的数据量进行比较,选择数值较小的数据量作为下一关键帧的目标数据量,选择较小的数据量作为目标数据量这样可以大大降低传输时延,另外,在降低传输时延的同时保证视频流能够正常传输。
在本申请的一个实施例中,在确定下一关键帧的目标数据量之后,可以通过量化参数进行进一步调整,以实现保证视频质量的同时减少码率。具体地,获取 当前的量化参数值;判断与所述当前的量化参数值相对应的输出码率是否满足预设阈值的要求,若是,则不需要进行调整,若否,则将所述当前的量化参数值调整为目标量化参数值。
其中,量化参数(Quality Parameter,QP)是进行视频编码时的主要参数之一。当QP取最小值0时,表示视频的量化最精细,相反,QP取最大值时,表示视频的量化是最粗糙的。通常,视频网站等视频内容提供商为了使视频内容能够满足在互联网上进行传输和播放的要求,需要对原始的视频进行转码操作。视频转码几乎是一切互联网视频服务的基础,包括直播、点播等等。视频转码的目标很简单,就是要求获得流畅、清晰的视频数据。但是,流畅和清晰是两个互相矛盾的需求。流畅要求码率越低越好,相反清晰需要更高的码率。视频转码需要优先保证流畅播放;在此基础上,尽可能提高转码的画质与压缩比。码率是指在进行数据传输时,单位时间内所传送的数据位数,通常以kbps为单位,即“千位每秒”。通常,对于一个视频,码率太低时,则画面不清晰;而码率太高时,则无法在网络上流畅播放。
当采用当前的量化参数值对视频进行编码时,编码后所输出的视频的平均码率或瞬时码率未能满足预设要求时,可以对当前的量化参数值进行调整,以获得目标量化参数值。通常,输出的视频的码率跟视频编码器所采用的量化参数值成反比,量化参数值越大,转码输出的视频的码率越小,量化参数值越小,转码输出的视频的码率越大,因此,可以根据转码后输出的视频的码率与预设要求的比较结果,调整视频编码器的量化参数值,例如,当转码输出的视频的码率大于预设要求时,说明当前所采用的量化参数值较小,可以相应地上浮一定数值,以获得目标量化参数值。
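A minimal sketch of this QP adjustment loop follows; the step size, headroom margin and QP bounds (the 0-51 range used by common codecs) are assumptions, not values specified in the application.

```python
# Sketch: raise the quantization parameter (coarser quantization, lower bitrate)
# when the output bitrate exceeds the target, lower it when there is headroom.
# Step size, margin and QP bounds are illustrative assumptions.
def adjust_qp(current_qp: int, output_kbps: float, target_kbps: float,
              step: int = 2, qp_min: int = 0, qp_max: int = 51) -> int:
    if output_kbps > target_kbps:            # too many bits -> coarser quantization
        current_qp += step
    elif output_kbps < 0.9 * target_kbps:    # clear headroom -> finer quantization
        current_qp -= step
    return max(qp_min, min(qp_max, current_qp))

print(adjust_qp(30, output_kbps=4500, target_kbps=4000))  # -> 32
```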
参见图8,图8示意性地示出了本申请一实施例中对计算得到的数据量进行调整的步骤流程。在本申请的一个实施例中,步骤S701,对计算得到的数据量进行调整,主要可以包括如下步骤S801至步骤S802。
步骤S801,终端获取当前网络的负载冗余度。
当前网络的负载冗余度代表的是当前网络的状态,通过获取当前网络的状态,从而便于后续对数据量的调整。
当前网络的负载冗余度可以根据丢包情况和网络带宽估计的结果动态调整冗 余度。
步骤S802,根据计算得到的数据量、负载冗余度、预设的保护比例以及预设调整系数的乘积,得到调整后的数据量。
对于保护比例P的设定,例如最大的目标数据量为100k,由于预测过程会存在偏差,为了防止一些意外情况的发生,使得数据无法正常传输,通过设置一个保护比例,例如保护比例设置为90%,通过设置保护比例将目标的数据量降到100*90%去传,即使预测时存在偏差,目标数据量也是可以正常传输,因此设置一个保护比例,是为了降低偏差最后对结果的影响,从而达到消除预测偏差的影响。
这样,在考虑当前网络的负载冗余度的情况下,有利于得到经过调整后的数据量。
在本申请的一个实施例中,若下一关键帧采用前向纠错编码的方式,则预设调整系数为1/(1+R),其中R代表前向纠错编码的冗余率;
若下一关键帧未采用前向纠错编码的方式,则预设调整系数为1。
由于采用前向纠错编码的方式即采用FEC编码方式,对下一关键帧进行编码时,乘积的大小会变大,因此通过预设调整系数进行调整一下,以获得一个能够传输的目标数据大小。
这样,通过采用不同的编码方式,设置的预设调整系数不同,从而适应不同的应用场景。
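Putting the adjustment pieces from the preceding paragraphs together, a minimal sketch of the target-size calculation might look as follows; the numeric values (80% load redundancy, 90% protection ratio, 10% FEC redundancy) are assumed examples.

```python
# Sketch: adjust the data size computed from the predicted MCS by the load
# redundancy, the protection ratio and the FEC-dependent coefficient 1/(1+R),
# then take the smaller of the adjusted size and the frame's actual size.
# All numeric values are illustrative assumptions.
def adjustment_coefficient(uses_fec: bool, fec_redundancy_rate: float = 0.0) -> float:
    return 1.0 / (1.0 + fec_redundancy_rate) if uses_fec else 1.0

def target_iframe_bytes(computed_bytes: float, load_redundancy: float,
                        protection_ratio: float, actual_bytes: float,
                        uses_fec: bool = False, fec_redundancy_rate: float = 0.0) -> int:
    adjusted = (computed_bytes * load_redundancy * protection_ratio
                * adjustment_coefficient(uses_fec, fec_redundancy_rate))
    return int(min(adjusted, actual_bytes))

# 125 KB computed from the predicted MCS, 80% load redundancy, 90% protection
# ratio, FEC with 10% redundancy, and a 150 KB frame as actually encoded.
print(target_iframe_bytes(125_000, 0.8, 0.9, 150_000,
                          uses_fec=True, fec_redundancy_rate=0.1))  # -> 81818
```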
参见图9,图9示意性地示出了本申请一实施例中获取当前网络的负载冗余度的步骤流程。在本申请的一个实施例中,步骤S801获取当前网络的负载冗余度,主要可以包括如下步骤S901至步骤S902。
步骤S901,获取历史关键帧的时延信息。
通过历史关键帧的时延信息作为反馈,从而确定当前的网络状态。
步骤S902,根据时延信息确定当前网络的负载冗余度。
通过历史关键帧的时延信息得到当前网络的负载冗余度,这样有利于得到较准确的网络当前负载冗余度,有利于获得较准确的目标数据量。
在本申请的一个实施例中,根据时延信息确定当前网络的负载冗余度,即视频发送终端对网络传输状态进行实时统计,实时动态地调节负载冗余度,具体包 括:统计一段时间内N次连续的数据包网络往返时延,得到初始化的该段时间内数据包往返时延的平均值和标准差;对该段时间内连续的数据包往返时延进行统计确定时延阈值;获取发送端传输当前的单个数据包往返时延;将当前的数据包往返时延与通过对该数据包之前连续数据包的传输时延进行统计得到的时延阈值进行比较;若出现当前的数据包往返时延值大于或者等于时间阈值,则表示数据包往返时延过长,判定为该当前数据包在网络传输过程中丢失,记录一次丢包;若出现当前的数据包往返时延值小于时间阈值,则表示数据包往返时延正常,使用滑动窗口的方式,设置窗口大小为M,每当得到新的数据包确认信息后,去掉窗口中最早的往返时延,将新的结果加入该窗口中,实现对数据包往返时延及数据包丢失情况的实时监测,从而得到丢包率结果;根据N次数据包传输丢包率的统计,得到该段时间内数据包丢包率的平均值和标准差;计算当前网络的负载冗余度调整的参考比率;根据冗余度调整的参考比率,调整视频发送终端冗余度,更新数据包个数与冗余包个数;使用滑动窗口的方式,设置窗口大小为M,每当得到新的丢包率后,去掉窗口中最早的丢包率数据,将新的结果加入该窗口中,实现对网络状况的实时监测。
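The sender-side statistics loop described above can be approximated with sliding windows as in the sketch below; the window sizes, the mean-plus-three-standard-deviations threshold and the sample RTT values are assumptions.

```python
# Rough sketch: keep a sliding window of recent packet round-trip times, count a
# packet as lost when its RTT exceeds a threshold derived from the window
# statistics, and track the loss rate in a second sliding window.
# Window sizes and the 3-sigma threshold rule are illustrative assumptions.
from collections import deque
from statistics import mean, pstdev

class SenderStats:
    def __init__(self, rtt_window: int = 50, loss_window: int = 20):
        self.rtts = deque(maxlen=rtt_window)
        self.losses = deque(maxlen=loss_window)   # 1 = counted as lost, 0 = ok

    def observe_rtt(self, rtt_ms: float) -> None:
        if len(self.rtts) >= 5:
            threshold = mean(self.rtts) + 3 * pstdev(self.rtts)
            self.losses.append(1 if rtt_ms >= threshold else 0)
        self.rtts.append(rtt_ms)

    def loss_rate(self) -> float:
        return sum(self.losses) / len(self.losses) if self.losses else 0.0

stats = SenderStats()
for rtt in [20, 22, 21, 19, 23, 20, 95, 21]:   # one abnormally long RTT
    stats.observe_rtt(rtt)
print(round(stats.loss_rate(), 2))             # fraction of packets counted lost
```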
这样,采用基于视频发送终端的实时连续统计对网络状况做出准确判断并对短期内网络状况进行预测,并以此作为动态调节负载冗余度的依据。最终实现系统对网络丢包现象解耦的同时,减少冗余包对网络带宽的消耗,达到提高网络利用率和发送效率的目的。
另外,实现了基于视频发送终端依靠对丢包情况进行统计学运算进而对网络状况进行实时估计,在视频发送终端对网络丢包状况进行统计,减少了因为接收端反馈过程带来的统计过程延迟。通过视频发送终端统计丢包率可以在一个往返时延内得到数据包的传输结果,在丢包率统计周期结束时即得到丢包统计结果,而接收端需要额外的反馈与处理过程。另外,在线统计检验过程中通过滑动窗口的方式更新用于检验的均值与标准差,使统计过程更加及时。并且,相比于以视频帧为单位和编码后每组数据包总数不同的传输方式,通过定常速率的传输方式,可以实现数据包稳定高效的传输,一方面可以保证单位时间内进入网络的数据量保持恒定,另一方面降低因数据包发送间隔的不同而产生的延迟。
在本申请的一个实施例中,步骤S901,获取历史关键帧的时延信息,包括:
将接收端反馈的时延信息作为历史关键帧的时延信息,或者将监测得到的清空发送缓存区的时延信息作为历史关键帧的时延信息。
这样,可基于接收端反馈时延或者从5G模块中观察到的清空发送缓存区的时延信息来得到时延信息,这样便于获取得到时延信息,使得得到的时延信息数据量更准确。
参见图10,图10示意性地示出了本申请一实施例中根据时延信息确定当前网络的负载冗余度的步骤流程。在本申请的一个实施例中,步骤S902,根据时延信息确定当前网络的负载冗余度,主要可以包括如下步骤S1001至步骤S1003。
步骤S1001,设置初始负载冗余度。
设置初始负载冗余度为默认值,例如设置为80%。
步骤S1002,若时延信息的时间窗口大于预设时延的时间窗口,则根据第一预设步长降低负载冗余度,将调整后的最终负载冗余度作为当前网络的负载冗余度。
例如,当时延信息的时间窗口大于预设时延的时间窗口,例如相差5ms时,则根据一定步长降低负载冗余度,直至调整到时延信息的时间窗口和预设时延的时间窗口相等时,将最后调整的负载冗余度作为当前网络的负载冗余度。
步骤S1003,若时延信息的时间窗口小于预设时延的时间窗口,则根据第二预设步长提高所述负载冗余度,将调整后的最终负载冗余度作为当前网络的负载冗余度,其中,第一预设步长大于第二预设步长。
直到当时延信息的时间窗口小于预设时延的时间窗口,例如相差5ms时,则根据第二预设步长提高负载冗余度,直至调整到时延信息的时间窗口和预设时延的时间窗口相等时,以最终调整的负载冗余度作为当前网络的负载冗余度。
这样,通过时延信息的不同设置不同的步长对负载冗余度进行调整,以实现对负载冗余度的快速调整,对负载冗余度进行动态调整,以得到较准确的网络当前负载冗余度,有利于获得较准确的目标数据量。
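A minimal sketch of this asymmetric step adjustment follows; the concrete step sizes and delay values are assumptions, and the observed and reference delays stand in for the two time windows compared in the text.

```python
# Sketch: lower the load redundancy by a larger step when the observed key-frame
# delay exceeds the reference delay, raise it by a smaller step when it is below,
# and clamp the result to [0, 1]. Step sizes and values are assumptions.
def update_load_redundancy(redundancy: float, observed_delay_ms: float,
                           reference_delay_ms: float,
                           step_down: float = 0.10,          # first (larger) preset step
                           step_up: float = 0.02) -> float:  # second (smaller) preset step
    if observed_delay_ms > reference_delay_ms:
        redundancy -= step_down
    elif observed_delay_ms < reference_delay_ms:
        redundancy += step_up
    return min(1.0, max(0.0, redundancy))

redundancy = 0.8   # initial default mentioned in the text
print(round(update_load_redundancy(redundancy, observed_delay_ms=12.0,
                                   reference_delay_ms=7.0), 2))  # -> 0.7
```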
在本申请的一个实施例中,预设时延的时间窗口根据前向参考帧传输的时延确定,或者根据空负载网络环境测试值确定。
画面组(Group of Pictures,GOP)指的是视频中一组连续的画面,作为视频编码中的帧组。在低时延视频传输中,通常在一个GOP中,编码后的第一个帧为 I帧,后续的帧为前向参考帧即P帧,预设时延的时间窗口可以根据P帧的时延确定。另外,预设时延的窗口还可以根据没有负载时的对应的网络环境进行设定。
这样,预设时延的时间窗口与实际时延情况相结合,设置得更合理。
在本申请的一个实施例中,所述方法还包括:终端获取一个媒体帧,并将获取到的媒体帧放入缓冲队列,确定缓冲队列中的媒体帧的总数目和上一次计算带宽到当前时间之间发送的媒体帧的总长度,以及根据媒体帧的总数目和媒体帧的总长度,计算当前带宽和当前网络拥塞等级;根据当前的带宽和网络拥塞等级,判断码流调整类型,并计算编码调整参数,其中不仅需要计算调整的码流值,还需要计算调整的帧频值,这样,在下调码流时,也相应降低帧频,因为在码流比较低时,过高的帧率已经没有多大意义,而按比率调低帧频,可以有效减少降码流引起的画质变差问题,并且在码流调整类型为下调时,是基于当前带宽,计算需要下调的码流值的,因此在保证了流畅性的同时,尽可能达到最大的带宽利用率;最后,终端基于计算的编码调整参数进行编码配置调整。这样,自适应带宽调整编码配置,减少了无效媒体帧的发送,提高了流畅性。
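The bandwidth-driven joint adjustment of bitrate and frame rate described above can be sketched roughly as follows; the byte counts, thresholds and minimum frame rate are assumed examples.

```python
# Sketch: estimate bandwidth from the media frames sent since the last
# measurement, then scale the bitrate down to the estimate and reduce the frame
# rate by the same ratio when the estimate falls below the current bitrate.
# All numeric values are illustrative assumptions.
def estimate_bandwidth_kbps(total_bytes_sent: int, elapsed_s: float) -> float:
    return (total_bytes_sent * 8 / 1000) / elapsed_s

def adjust_encoding(bitrate_kbps: float, fps: float, bandwidth_kbps: float,
                    min_fps: float = 10.0) -> tuple[float, float]:
    if bandwidth_kbps < bitrate_kbps:        # congested: scale both down together
        ratio = bandwidth_kbps / bitrate_kbps
        return bandwidth_kbps, max(min_fps, fps * ratio)
    return bitrate_kbps, fps                 # enough headroom: keep the settings

bw = estimate_bandwidth_kbps(total_bytes_sent=500_000, elapsed_s=1.0)  # 4000 kbps
bitrate, fps = adjust_encoding(bitrate_kbps=6000, fps=30, bandwidth_kbps=bw)
print(round(bitrate), round(fps))  # -> 4000 20
```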
应当注意,尽管在附图中以特定顺序描述了本申请中方法的各个步骤,但是,这并非要求或者暗示必须按照该特定顺序来执行这些步骤,或是必须执行全部所示的步骤才能实现期望的结果。附加的或备选的,可以省略某些步骤,将多个步骤合并为一个步骤执行,以及/或者将一个步骤分解为多个步骤执行等。
以下介绍本申请的装置实施例,可以用于执行本申请上述实施例中的视频编码方法。图11示意性地示出了本申请实施例提供的视频编码装置的结构框图。如图11所示,视频编码装置1100包括:
获取模块1110,用于获取历史时间段内的历史强度信息,历史强度信息用于表示历史时间段内的各个历史时刻对应的视频传输的无线网络信号强度;
预测模块1120,用于根据历史强度信息预测下一关键帧的强度信息,下一关键帧的强度信息用于表示传输下一关键帧的无线网络信号强度;
确定模块1130,用于根据下一关键帧的强度信息,确定下一关键帧的目标数据量,并按照目标数据量对下一关键帧进行帧内编码。
在本申请的一些实施例中,基于以上技术方案,确定模块1130包括:
第一确定单元,用于根据下一关键帧的强度信息,确定下一关键帧的调制编 码策略;
计算单元,用于根据下一关键帧的调制编码策略,计算得到下一关键帧的数据量;
第二确定单元,用于根据计算得到的数据量与实际的数据量进行比较,确定下一关键帧的目标数据量。
在本申请的一些实施例中,基于以上技术方案,第一确定单元用于获取调制编码策略与强度信息的映射关系表;根据下一关键帧的强度信息,查找调制编码策略与强度信息的映射关系表,确定下一关键帧的调制编码策略。
在本申请的一些实施例中,基于以上技术方案,第一确定单元用于实时获取由基站分配的历史调制编码策略和历史强度信息,并以历史调制编码策略对应的编码信息作为分类标签,历史调制编码策略用于表示历史时刻对应的调制编码策略;将历史强度信息按编码信息分类标签进行聚类,以得到聚类分类器;将历史强度信息值分布区间中各强度数值输入聚类分类器得到与编码信息分类标签对应的分类信息,以得到调制编码策略与强度信息的映射关系表。
在本申请的一些实施例中,基于以上技术方案,第二确定单元用于对计算得到的数据量进行调整;将调整后的数据量和实际的数据量进行比较,选择数值较小的数据量作为下一关键帧的目标数据量。
在本申请的一些实施例中,基于以上技术方案,第二确定单元用于获取当前网络的负载冗余度;根据计算得到的数据量、负载冗余度、预设的保护比例以及预设调整系数的乘积,得到调整后的数据量。
在本申请的一些实施例中,基于以上技术方案,第二确定单元用于若下一关键帧采用前向纠错编码的方式,则预设调整系数为1/(1+R),其中R代表前向纠错编码的冗余率;若下一关键帧未采用前向纠错编码的方式,则预设调整系数为1。
在本申请的一些实施例中,基于以上技术方案,第二确定单元用于获取历史关键帧的时延信息;根据时延信息确定当前网络的负载冗余度。
在本申请的一些实施例中,基于以上技术方案,第二确定单元用于将接收端反馈的时延信息作为历史关键帧的时延信息,或者将监测得到的清空发送缓存区的时延信息作为历史关键帧的时延信息。
在本申请的一些实施例中,基于以上技术方案,第二确定单元用于设置初始 负载冗余度;若时延信息的时间窗口大于预设时延的时间窗口,则根据第一预设步长降低负载冗余度,将调整后的最终负载冗余度作为当前网络的负载冗余度;若时延信息的时间窗口小于预设时延的时间窗口,则根据第二预设步长提高负载冗余度,将调整后的最终负载冗余度作为当前网络的负载冗余度;其中,第一预设步长大于第二预设步长。
在本申请的一些实施例中,基于以上技术方案,第二确定单元中,预设时延的时间窗口根据前向参考帧传输的时延确定,或者根据空负载网络环境测试值确定。
在本申请的一些实施例中,基于以上技术方案,获取模块中,历史时间段与下一关键帧的编码时刻具有预设时间间隔的滑动时间窗口。
本申请各实施例中提供的视频编码装置的具体细节已经在对应的方法实施例中进行了详细的描述,此处不再赘述。
图12示意性地示出了用于实现本申请实施例的电子设备的计算机系统结构框图。
需要说明的是,图12示出的电子设备的计算机系统1200仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
如图12所示,计算机系统1200包括中央处理器1201(Central Processing Unit,CPU),其可以根据存储在只读存储器1202(Read-Only Memory,ROM)中的程序或者从存储部分1208加载到随机访问存储器1203(Random Access Memory,RAM)中的程序而执行各种适当的动作和处理。在随机访问存储器1203中,还存储有系统操作所需的各种程序和数据。中央处理器1201、只读存储器1202以及随机访问存储器1203通过总线1204彼此相连。输入/输出接口1205(Input/Output接口,即I/O接口)也连接至总线1204。
以下部件连接至输入/输出接口1205:包括键盘、鼠标等的输入部分1206;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal Display,LCD)等以及扬声器等的输出部分1207;包括硬盘等的存储部分1208;以及包括诸如局域网卡、调制解调器等的网络接口卡的通信部分1209。通信部分1209经由诸如因特网的网络执行通信处理。驱动器1210也根据需要连接至输入/输出接口1205。可拆卸介质1211,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器1210上,以便于从其上读出的计算机程序根据需要被安装 入存储部分1208。
特别地,根据本申请的实施例,各个方法流程图中所描述的过程可以被实现为计算机软件程序。例如,本申请的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分1209从网络上被下载和安装,和/或从可拆卸介质1211被安装。在该计算机程序被中央处理器1201执行时,执行本申请的系统中限定的各种功能。
需要说明的是,本申请实施例所示的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是,但不限于,电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、闪存、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本申请中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本申请中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、有线等等,或者上述的任意合适的组合。
附图中的流程图和框图,图示了按照本申请各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,上述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时 也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图或流程图中的每个方框、以及框图或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
应当注意,尽管在上文详细描述中提及了用于动作执行的设备的若干模块或者单元,但是这种划分并非强制性的。实际上,根据本申请的实施方式,上文描述的两个或更多模块或者单元的特征和功能可以在一个模块或者单元中具体化。反之,上文描述的一个模块或者单元的特征和功能可以进一步划分为由多个模块或者单元来具体化。
通过以上的实施方式的描述,本领域的技术人员易于理解,这里描述的示例实施方式可以通过软件实现,也可以通过软件结合必要的硬件的方式来实现。因此,根据本申请实施方式的技术方案可以以软件产品的形式体现出来,该软件产品可以存储在一个非易失性存储介质(可以是CD-ROM,U盘,移动硬盘等)中或网络上,包括若干指令以使得一台计算设备(可以是个人计算机、服务器、触控终端、或者网络设备等)执行根据本申请实施方式的方法。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (20)

  1. 一种视频编码方法,由电子设备执行,其特征在于,包括:
    获取历史时间段内的历史强度信息,所述历史强度信息用于表示所述历史时间段内的各个历史时刻对应的视频传输的无线网络信号强度;
    根据所述历史强度信息预测下一关键帧的强度信息,所述下一关键帧的强度信息用于表示传输下一关键帧的无线网络信号强度;
    根据所述下一关键帧的强度信息,确定下一关键帧的目标数据量,并按照所述目标数据量对所述下一关键帧进行帧内编码。
  2. 根据权利要求1所述的视频编码方法,其特征在于,所述根据所述下一关键帧的强度信息,确定下一关键帧的目标数据量,包括:
    根据所述下一关键帧的强度信息,确定下一关键帧的调制编码策略;
    根据所述下一关键帧的调制编码策略,计算得到所述下一关键帧的数据量;
    根据计算得到的所述数据量与实际的数据量进行比较,确定所述下一关键帧的目标数据量。
  3. 根据权利要求2所述的视频编码方法,其特征在于,所述根据所述下一关键帧的强度信息,确定下一关键帧的调制编码策略,包括:
    获取调制编码策略与强度信息的映射关系表;
    根据所述下一关键帧的强度信息,查找所述调制编码策略与强度信息的映射关系表,确定所述下一关键帧的调制编码策略。
  4. 根据权利要求3所述的视频编码方法,其特征在于,所述获取调制编码策略与强度信息的映射关系表,包括:
    实时获取由基站分配的历史调制编码策略和所述历史强度信息,并以所述历史调制编码策略对应的编码信息作为分类标签,所述历史调制编码策略用于表示历史时刻对应的调制编码策略;
    将所述历史强度信息按编码信息分类标签进行聚类,以得到聚类分类器;
    将所述历史强度信息值分布区间中各强度数值输入所述聚类分类器得到与所述编码信息分类标签对应的分类信息,以得到所述调制编码策略与强度信息的映射关系表。
  5. 根据权利要求2所述的视频编码方法,其特征在于,所述根据计算得到的所述数据量与实际的数据量进行比较,以确定下一关键帧的目标数据量,包括:
    对计算得到的所述数据量进行调整;
    将调整后的所述数据量和实际的数据量进行比较,选择数值较小的数据量作为下一关键帧的目标数据量。
  6. 根据权利要求5所述的视频编码方法,其特征在于,所述对计算得到的所述数据量进行调整,包括:
    获取当前网络的负载冗余度;
    根据计算得到的所述数据量、所述负载冗余度、预设的保护比例以及预设调整系数的乘积,得到调整后的所述数据量。
  7. 根据权利要求6所述的视频编码方法,其特征在于,若所述下一关键帧采用前向纠错编码的方式,则所述预设调整系数为1/(1+R),其中R代表前向纠错编码的冗余率;
    若所述下一关键帧未采用前向纠错编码的方式,则所述预设调整系数为1。
  8. 根据权利要求6所述的视频编码方法,其特征在于,所述获取当前网络的负载冗余度,包括:
    获取历史关键帧的时延信息;
    根据所述时延信息确定所述当前网络的负载冗余度。
  9. 根据权利要求8所述的视频编码方法,其特征在于,所述获取历史关键帧的时延信息,包括:
    将接收端反馈的时延信息作为所述历史关键帧的时延信息,或者将监测得到的清空发送缓存区的时延信息作为所述历史关键帧的时延信息。
  10. 根据权利要求8所述的视频编码方法,其特征在于,所述根据所述时延信息确定所述当前网络的负载冗余度,包括:
    设置初始负载冗余度;
    若所述时延信息的时间窗口大于预设时延的时间窗口,则根据第一预设步长降低所述负载冗余度,将调整后的最终负载冗余度作为当前网络的负载冗余度;
    若所述时延信息的时间窗口小于预设时延的时间窗口,则根据第二预设步长提高所述负载冗余度,将调整后的最终负载冗余度作为当前网络的负载冗余度;
    其中,所述第一预设步长大于所述第二预设步长。
  11. 根据权利要求10所述的视频编码方法,其特征在于,所述预设时延的时间窗口根据前向参考帧传输的时延确定,或者根据空负载网络环境测试值确定。
  12. 根据权利要求1-11任一项所述的视频编码方法,其特征在于,所述历史时间段与下一关键帧的编码时刻具有预设时间间隔的滑动时间窗口。
  13. 一种视频编码装置,其特征在于,包括:
    获取模块,用于获取历史时间段内的历史强度信息,所述历史强度信息用于表示所述历史时间段内的各个历史时刻对应的视频传输的无线网络信号强度;
    预测模块,用于根据所述历史强度信息预测下一关键帧的强度信息,所述下一关键帧的强度信息用于表示传输下一关键帧的无线网络信号强度;
    确定模块,用于根据所述下一关键帧的强度信息,确定下一关键帧的目标数据量,并按照所述目标数据量对所述下一关键帧进行帧内编码。
  14. 根据权13所述的装置,其特征在于,所述确定模块包括:
    第一确定单元,用于根据所述下一关键帧的强度信息,确定下一关键帧的调制编码策略;
    计算单元,用于根据所述下一关键帧的调制编码策略,计算得到所述下一关键帧的数据量;
    第二确定单元,用于根据计算得到的所述数据量与实际的数据量进行比较,确定所述下一关键帧的目标数据量。
  15. 根据权14所述的装置,其特征在于,所述第一确定单元用于获取调制编码策略与强度信息的映射关系表;根据所述下一关键帧的强度信息,查找所述调制编码策略与强度信息的映射关系表,确定所述下一关键帧的调制编码策略。
  16. 根据权15所述的装置,其特征在于,所述第一确定单元用于:
    实时获取由基站分配的历史调制编码策略和所述历史强度信息,并以所述历史调制编码策略对应的编码信息作为分类标签,所述历史调制编码策略用于表示历史时刻对应的调制编码策略;
    将所述历史强度信息按编码信息分类标签进行聚类,以得到聚类分类器;
    将所述历史强度信息值分布区间中各强度数值输入所述聚类分类器得到与所述编码信息分类标签对应的分类信息,以得到所述调制编码策略与强度信息的映 射关系表。
  17. 根据权14所述的装置,其特征在于,所述第二确定单元用于对计算得到的所述数据量进行调整;将调整后的所述数据量和实际的数据量进行比较,选择数值较小的数据量作为下一关键帧的目标数据量。
  18. 根据权17所述的装置,其特征在于,所述第二确定单元用于获取当前网络的负载冗余度;根据计算得到的所述数据量、所述负载冗余度、预设的保护比例以及预设调整系数的乘积,得到调整后的所述数据量。
  19. 一种计算机可读介质,其上存储有计算机程序,该计算机程序被处理器执行时实现权利要求1至12中任意一项所述的视频编码方法。
  20. 一种电子设备,其特征在于,包括:
    处理器;以及
    存储器,用于存储所述处理器的可执行指令;
    其中,所述处理器配置为经由执行所述可执行指令来执行权利要求1至12中任意一项所述的视频编码方法。
PCT/CN2022/097341 2021-08-02 2022-06-07 视频编码方法、装置、计算机可读介质及电子设备 WO2023010992A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/137,950 US20230262232A1 (en) 2021-08-02 2023-04-21 Video coding method and apparatus, computer-readable medium and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110883015.7 2021-08-02
CN202110883015.7A CN115701709A (zh) 2021-08-02 2021-08-02 视频编码方法、装置、计算机可读介质及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/137,950 Continuation US20230262232A1 (en) 2021-08-02 2023-04-21 Video coding method and apparatus, computer-readable medium and electronic device

Publications (1)

Publication Number Publication Date
WO2023010992A1 true WO2023010992A1 (zh) 2023-02-09

Family

ID=85142581

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/097341 WO2023010992A1 (zh) 2021-08-02 2022-06-07 视频编码方法、装置、计算机可读介质及电子设备

Country Status (3)

Country Link
US (1) US20230262232A1 (zh)
CN (1) CN115701709A (zh)
WO (1) WO2023010992A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116489342A (zh) * 2023-06-20 2023-07-25 中央广播电视总台 确定编码延时的方法、装置、及电子设备、存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579819B (zh) * 2024-01-17 2024-03-29 哈尔滨学院 一种图像通信数字媒体方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527834A (zh) * 2009-03-26 2009-09-09 浙江大华技术股份有限公司 一种无线窄带网络视频传输方法
CN104125429A (zh) * 2013-04-27 2014-10-29 杭州海康威视数字技术股份有限公司 视频数据传输的调节方法及装置
CN106162257A (zh) * 2016-07-29 2016-11-23 南京云恩通讯科技有限公司 一种实时视频的自适应网络传输优化方法
CN106713913A (zh) * 2015-12-09 2017-05-24 腾讯科技(深圳)有限公司 视频图像帧发送方法及装置、视频图像帧接收方法及装置
CN108495142A (zh) * 2018-04-11 2018-09-04 腾讯科技(深圳)有限公司 视频编码方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527834A (zh) * 2009-03-26 2009-09-09 浙江大华技术股份有限公司 一种无线窄带网络视频传输方法
CN104125429A (zh) * 2013-04-27 2014-10-29 杭州海康威视数字技术股份有限公司 视频数据传输的调节方法及装置
CN106713913A (zh) * 2015-12-09 2017-05-24 腾讯科技(深圳)有限公司 视频图像帧发送方法及装置、视频图像帧接收方法及装置
CN106162257A (zh) * 2016-07-29 2016-11-23 南京云恩通讯科技有限公司 一种实时视频的自适应网络传输优化方法
CN108495142A (zh) * 2018-04-11 2018-09-04 腾讯科技(深圳)有限公司 视频编码方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116489342A (zh) * 2023-06-20 2023-07-25 中央广播电视总台 确定编码延时的方法、装置、及电子设备、存储介质
CN116489342B (zh) * 2023-06-20 2023-09-15 中央广播电视总台 确定编码延时的方法、装置、及电子设备、存储介质

Also Published As

Publication number Publication date
US20230262232A1 (en) 2023-08-17
CN115701709A (zh) 2023-02-10

Similar Documents

Publication Publication Date Title
WO2023010992A1 (zh) 视频编码方法、装置、计算机可读介质及电子设备
US20100124275A1 (en) System and method for dynamically encoding multimedia streams
US7054371B2 (en) System for real time transmission of variable bit rate MPEG video traffic with consistent quality
RU2384008C2 (ru) Способ и система адаптивного кодирования информации в режиме реального времени в беспроводных сетях
CN109743600B (zh) 基于可穿戴的现场运维自适应视频流传输速率控制方法
WO2017148260A1 (zh) 语音编码发送方法和装置
US20130290492A1 (en) State management for video streaming quality of experience degradation control and recovery using a video quality metric
US20130298170A1 (en) Video streaming quality of experience recovery using a video quality metric
US20130286879A1 (en) Video streaming quality of experience degradation control using a video quality metric
US20110299589A1 (en) Rate control in video communication via virtual transmission buffer
CN112954385B (zh) 一种基于控制论和数据驱动的自适应分流决策方法
US10116715B2 (en) Adapting encoded bandwidth
WO2012119459A1 (zh) 数据传输的方法、装置和系统
WO2021097865A1 (zh) 面向多人互动直播的自适应码率调节方法
WO2017084277A1 (zh) 在线媒体服务的码流自适应方法及系统
US20240040127A1 (en) Video encoding method and apparatus and electronic device
CN112866746A (zh) 一种多路串流云游戏控制方法、装置、设备及存储介质
WO2023174254A1 (zh) 视频发布方法、装置、设备及存储介质
CN111447511B (zh) 一种带有用户感知体验质量的带宽分配方法
KR101482484B1 (ko) 멀티미디어 스트리밍을 위한 비디오 패킷 스케줄링 방법
WO2014209493A1 (en) State management for video streaming quality of experience degradation control and recovery using a video quality metric
Liebl et al. Deadline-aware scheduling for wireless video streaming
CN111162877A (zh) 一种音视频服务质量控制的自适应前向纠错方法及应用
US20160277467A1 (en) Adapting Encoded Bandwidth
Haratcherev et al. Fast 802.11 link adaptation for real-time video streaming by cross-layer signaling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22851708

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE