WO2019242408A1 - Video encoding method, video decoding method, apparatus, computer device and storage medium - Google Patents
- Publication number
- WO2019242408A1 (PCT application PCT/CN2019/084927; CN2019084927W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- resolution
- video sequence
- target
- decoded
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/156—Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/179—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present application relates to the field of computer technology, and in particular, to a video encoding method, a video decoding method, a device, a computer device, and a storage medium.
- the embodiments of the present application provide a video encoding method, device, computer equipment, and storage medium, which can flexibly select a target video sequence encoding mode for an input video sequence, encode the input video sequence according to the target video sequence encoding mode, and adaptively adjust the encoding mode of the input video sequence, thereby improving video encoding quality under the condition of limited bandwidth.
- a video encoding method executed by a computer device includes:
- a video encoding device includes:
- An input video sequence acquisition module configured to obtain an input video sequence
- a coding mode obtaining module is configured to obtain, from candidate video sequence coding modes, a target video sequence coding mode corresponding to the input video sequence, wherein the candidate video sequence coding modes include a constant resolution coding mode and a mixed resolution coding mode;
- the encoding module is configured to encode each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
- a computer device includes a memory and a processor.
- the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the video encoding method.
- a computer-readable storage medium stores a computer program that, when executed by a processor, causes the processor to perform the steps of the video encoding method.
- a video decoding method executed by a computer device includes:
- the target video sequence decoding mode includes a constant resolution decoding mode or a mixed resolution decoding mode
- a video decoding device includes:
- An encoded data acquisition module configured to acquire encoded data corresponding to a video sequence to be decoded
- a decoding mode obtaining module configured to obtain a target video sequence decoding mode corresponding to the video sequence to be decoded, where the target video sequence decoding mode includes a constant resolution decoding mode or a mixed resolution decoding mode;
- a decoding module configured to decode the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
- a computer device includes a memory and a processor.
- the memory stores a computer program, and when the computer program is executed by the processor, the processor causes the processor to perform the steps of the video decoding method.
- a computer-readable storage medium stores a computer program that, when executed by a processor, causes the processor to perform the steps of the video decoding method.
- FIG. 1 is an application environment diagram of a video encoding method and a video decoding method provided in an embodiment
- FIG. 2 is a coding frame diagram corresponding to a video coding method in an embodiment
- FIG. 3 is a decoding frame diagram corresponding to a video decoding method in an embodiment
- FIG. 4 is a schematic diagram corresponding to a coding block in an embodiment
- FIG. 5 is a flowchart of a video encoding method provided in an embodiment
- FIG. 6 is a schematic diagram of a video coding framework provided in an embodiment
- FIG. 7A is a flowchart of encoding each input video frame of an input video sequence according to a target video sequence encoding mode provided in an embodiment to obtain encoded data;
- FIG. 7B is a schematic diagram of encoded data provided in an embodiment
- FIG. 8 is a flowchart of encoding data to be encoded to obtain encoded data corresponding to an input video frame at a resolution of a frame to be encoded according to an embodiment
- FIG. 9A is a flowchart of encoding a frame to be encoded according to a current reference frame to obtain encoded data corresponding to an input video frame according to an embodiment
- FIG. 9B is a schematic diagram of interpolation of a current reference frame provided in an embodiment
- FIG. 9C is a schematic diagram of interpolation of a current reference frame provided in an embodiment
- FIG. 10A is a flowchart of encoding a to-be-encoded frame according to a current reference frame to obtain encoded data corresponding to an input video frame according to an embodiment
- FIG. 10B is a schematic diagram of a current reference frame and a frame to be encoded according to an embodiment
- FIG. 11 is a flowchart of encoding a to-be-encoded frame according to a current reference frame to obtain encoded data corresponding to an input video frame according to an embodiment
- FIG. 12 is a flowchart of a video decoding method provided in an embodiment
- FIG. 13 is a schematic diagram of a video decoding framework provided in an embodiment
- FIG. 14 is a flowchart of decoding encoded data corresponding to a video sequence to be decoded according to a target video sequence decoding mode to obtain a corresponding decoded video frame sequence, according to an embodiment
- FIG. 15 is a flowchart of decoding encoded data according to resolution information corresponding to a video frame to be decoded to obtain a reconstructed video frame corresponding to the video frame to be decoded, according to an embodiment
- FIG. 16 is a flowchart of decoding encoded data according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain a reconstructed video frame corresponding to the video frame to be decoded, according to an embodiment
- FIG. 17 is a structural block diagram of a video encoding device in an embodiment
- FIG. 18 is a structural block diagram of a video decoding apparatus in an embodiment
- FIG. 19 is a block diagram of an internal structure of a computer device in an embodiment
- FIG. 20 is a block diagram of the internal structure of a computer device in one embodiment.
- the terms "first", "second", and the like used in this application can be used herein to describe various elements, but these elements are not limited by these terms unless specifically stated. These terms are only used to distinguish one element from another.
- first vector transformation coefficient may be referred to as a second vector transformation coefficient
- second vector transformation coefficient may be referred to as a first vector transformation coefficient
- embodiments of the present application provide a video encoding method, device, computer device, and storage medium.
- in video encoding, an input video sequence is obtained, and a target video sequence encoding mode corresponding to the input video sequence is obtained from candidate video sequence encoding modes, wherein the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode.
- Each input video frame of the input video sequence is encoded according to the target video sequence encoding mode to obtain encoded data.
- the target video sequence encoding mode of the input video sequence can be flexibly selected, the input video sequence is encoded according to the target video sequence encoding mode, the resolution of the input video sequence is adaptively adjusted, and the video encoding quality is improved under the condition of limited bandwidth.
- the embodiments of the present application further provide a video decoding method, device, computer device, and storage medium.
- in video decoding, the encoded data corresponding to the video sequence to be decoded is obtained, and the target video sequence decoding mode corresponding to the video sequence to be decoded is obtained.
- the target video sequence decoding mode includes a constant resolution decoding mode or a mixed resolution decoding mode.
- the encoded data corresponding to the video sequence to be decoded is decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence. Therefore, decoding can be performed flexibly according to the target video sequence decoding mode corresponding to the video sequence to be decoded, and accurate decoded video frames can be obtained.
- FIG. 1 is an application environment diagram of a video encoding method and a video decoding method provided in an embodiment. As shown in FIG. 1, the application environment includes a terminal 110 and a server 120.
- the terminal 110 or the server 120 may perform video encoding through an encoder, or perform video decoding through a decoder.
- the terminal 110 or the server 120 may also run a video encoding program through a processor to perform video encoding, or run a video decoding program to perform video decoding.
- the server 120 may directly transmit the encoded data to the processor for decoding, or store it in a database for subsequent decoding.
- the server 120 can directly send the encoded data to the terminal 110 through the output interface, or store the encoded data in a database for subsequent transmission.
- the server 120 may also obtain the encoded data sent by the terminal 110 and then send it to the corresponding receiving terminal for decoding by the receiving terminal.
- the terminal 110 and the server 120 may be connected through a network.
- the terminal 110 may specifically be a computer device such as a desktop terminal or a mobile terminal.
- the mobile terminal may specifically include at least one of a mobile phone, a tablet computer, and a notebook computer, but is not limited thereto.
- the server 120 may be implemented by an independent server or a server cluster composed of multiple servers.
- FIG. 2 is a coding framework diagram corresponding to a video coding method provided in an embodiment.
- the video encoding method provided in the embodiment of the present application may obtain each input video frame of an input video sequence, encode it to obtain corresponding encoded data, and store or send the encoded data through the storage and sending unit 222, or both store and send it.
- a processing mode decision may be performed on the input video frame to obtain a processing mode corresponding to the input video frame.
- an input video frame may be processed according to a processing mode to obtain a frame to be encoded.
- each coding block of the frame to be encoded may be intra-predicted or inter-predicted, and the prediction value is obtained according to the image value of the reference block corresponding to the coding block;
- the prediction residual is obtained by subtracting the prediction value from the actual value of the coding block, and the corresponding motion vector is obtained.
- the motion vector represents the displacement of the coding block relative to the reference block.
- the prediction residual and vector information in the spatial domain are transformed into the frequency domain, and the transform coefficients may be encoded.
- the transformation method may be a discrete Fourier transform or a discrete cosine transform, etc.
- the vector information may be an actual motion vector or a motion vector difference representing a displacement, and the motion vector difference is a difference between the actual motion vector and the predicted motion vector.
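The relation described above can be sketched in Python; the helper names are hypothetical and a motion vector is represented as a plain (x, y) tuple:

```python
def motion_vector_difference(actual_mv, predicted_mv):
    """Motion vector difference: actual MV minus the predicted MV."""
    return (actual_mv[0] - predicted_mv[0], actual_mv[1] - predicted_mv[1])

def reconstruct_motion_vector(mvd, predicted_mv):
    """Decoder side: add the transmitted difference back to the predicted MV."""
    return (mvd[0] + predicted_mv[0], mvd[1] + predicted_mv[1])
```

Transmitting only the difference costs fewer bits when the predicted motion vector is close to the actual one.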
- the transformed data is mapped to another value, for example, a smaller value can be obtained by dividing the transformed data by the quantization step size.
- the quantization parameter is the serial number corresponding to the quantization step size, and the corresponding quantization step size can be found according to the quantization parameter. If the quantization parameter is small, most of the details of the image frame are retained, and the corresponding bit rate is high. If the quantization parameter is large, the corresponding bit rate is low, but the image distortion is large and the quality is not high.
- for example, FQ = Round(y / Qstep), where y is the value corresponding to the video frame before quantization, Qstep is the quantization step size, and FQ is the quantized value obtained by quantizing y.
- the Round(x) function rounds the value to the nearest integer (rounding half up).
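A minimal Python sketch of the quantization just described, assuming Round() rounds half away from zero (which coincides with round half up for non-negative values):

```python
def quantize(y, qstep):
    """FQ = Round(y / Qstep): map a value to a smaller quantized integer."""
    q = y / qstep
    # round half away from zero, matching the Round() described above
    return int(q + 0.5) if q >= 0 else -int(-q + 0.5)

def dequantize(fq, qstep):
    """Inverse quantization: rescale the quantized value (lossy in general)."""
    return fq * qstep
```

For example, quantize(10, 4) gives 3, and dequantizing it back gives 12 rather than 10, which is the information loss inherent in quantization.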
- the correspondence between the quantization parameter and the quantization step size can be specifically set as required.
- for example, in one standard the quantization parameter has a total of 52 values, which are integers from 0 to 51; in another standard, the quantization parameter is an integer from 0 to 39. The quantization step size increases as the quantization parameter increases: whenever the quantization parameter increases by 6, the quantization step size doubles.
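The parameter-to-step mapping ("doubles every increase of 6") can be sketched as follows; the base step size here is an illustrative value, not taken from any particular standard:

```python
def quantization_step(qp, base_step=0.625):
    """Quantization step size derived from a quantization parameter.

    The step size doubles whenever the parameter increases by 6, as
    described above. base_step is an illustrative assumption.
    """
    if not 0 <= qp <= 51:
        raise ValueError("quantization parameter must be in [0, 51]")
    return base_step * 2 ** (qp / 6)
```

A small parameter keeps more image detail at a higher bit rate; a large one lowers the bit rate at the cost of distortion.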
- the entropy encoding unit 220 is used to perform entropy encoding.
- the entropy encoding is a data encoding method that encodes according to the principle of entropy without losing any information, and can use small characters to express certain information.
- the entropy coding method may be, for example, Shannon coding or Huffman coding.
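Huffman coding, one of the entropy coding methods mentioned, assigns shorter codes to more frequent symbols. A minimal self-contained sketch (not the entropy coder of any specific codec):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table from a sequence of symbols.

    Frequent symbols receive shorter codes, illustrating how entropy
    coding can "use small characters to express certain information".
    """
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, counter, merged))
        counter += 1
    return heap[0][2]
```

For the input "aaaabbc", the most frequent symbol "a" gets a one-bit code while "b" and "c" get two-bit codes.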
- the first inverse quantization unit 212, the first inverse transform unit 214, the first reconstruction unit 216, and the first reference information adaptation unit 218 are units corresponding to the reconstruction path.
- the reference frame is obtained by performing frame reconstruction using each unit of the reconstruction path, which can keep the reference frames consistent during encoding and decoding.
- the steps performed by the first inverse quantization unit 212 are an inverse process of performing quantization
- the steps performed by the first inverse transform unit 214 are an inverse process performed by the transform unit 208
- the first reconstruction unit 216 adds the residual data obtained by the inverse transform to the prediction data to obtain a reconstructed reference frame.
- the first reference information adaptation unit 218 is configured to adaptively process, at the resolution of the frame to be encoded, at least one piece of reference information such as the current reference frame, position information corresponding to each coding block of the frame to be encoded, position information corresponding to each reference block of the current reference frame, and motion vectors, so that the first prediction unit 206 performs prediction based on the adaptively processed reference information.
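One simple form of the reference information adaptation described above is scaling motion vectors by the ratio between the reference frame's resolution and the resolution of the frame to be encoded. A sketch with hypothetical names, where resolutions are (width, height) tuples:

```python
def adapt_reference_info(mv, ref_resolution, target_resolution):
    """Scale a motion vector from the reference frame's resolution to the
    resolution of the frame to be encoded (illustrative sketch only)."""
    sx = target_resolution[0] / ref_resolution[0]
    sy = target_resolution[1] / ref_resolution[1]
    return (mv[0] * sx, mv[1] * sy)
```

The same scaling idea applies to block position information when the two frames have different resolutions.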
- FIG. 3 is a decoding frame diagram corresponding to a video decoding method provided in an embodiment.
- the video decoding method provided in the embodiment of the present application may obtain the encoded data corresponding to each to-be-decoded video frame of the to-be-decoded video sequence through the encoded data obtaining unit 300, and perform entropy decoding through the entropy decoding unit 302 to obtain entropy decoded data.
- the second inverse quantization unit 304 performs inverse quantization on the entropy decoded data to obtain inverse quantized data
- the second inverse transform unit 306 performs inverse transform on the inverse quantized data to obtain inverse transformed data.
- the inverse transformed data may be consistent with the data obtained after inverse transformation by the first inverse transform unit 214 in FIG. 2.
- the resolution information acquiring unit 308 is configured to acquire resolution information corresponding to a video frame to be decoded.
- the second reference information adaptation unit 312 is configured to obtain the current reference frame reconstructed by the second reconstruction unit and, using the resolution information of the video frame to be decoded, adaptively process at least one piece of reference information such as the current reference frame, position information corresponding to each block to be decoded of the video frame to be decoded, position information corresponding to each reference block of the current reference frame, and motion vectors, so that prediction is performed according to the adaptively processed information.
- the second prediction unit 314 obtains a reference block corresponding to the block to be decoded according to the reference information obtained after the adaptation, and obtains a prediction value consistent with the prediction value in FIG. 2 according to the image value of the reference block.
- the second reconstruction unit 310 performs reconstruction according to the prediction value and the inverse transformed data, that is, the prediction residual, to obtain a reconstructed video frame.
- the second processing unit 316 processes the reconstructed video frame according to the resolution information corresponding to the video frame to be decoded to obtain a corresponding decoded video frame.
- the playback storage unit 318 may play or store the decoded video frames, or perform playback and storage.
- the encoding framework diagram and decoding framework diagram are only examples, and do not constitute a limitation on the encoding method to which the solution of the present application is applied; a specific encoding or decoding framework may include more or fewer units than shown, combine certain units, or arrange the units differently.
- loop filtering may be performed on the reconstructed video frame to reduce the block effect of the video frame to improve the video quality.
- the end performing encoding is referred to as an encoding end
- the end performing decoding is referred to as a decoding end.
- the encoding end and the decoding end may be the same end or different ends.
- the above computer equipment such as a terminal or a server, may be an encoding end or a decoding end.
- the frame to be encoded can be divided into multiple encoding blocks, and the size of the encoding block can be set or calculated as required.
- for example, the coding blocks may all be 8 * 8 pixels in size.
- the coding block can be divided by calculating the rate distortion cost corresponding to the division method of various coding blocks, and selecting a division method with a low rate distortion cost.
- FIG. 4 is a schematic diagram of the division of a 64 * 64 pixel image block, where each square represents a coding block. It can be seen from FIG. 4 that the size of the coding block may include 32 * 32 pixels, 16 * 16 pixels, 8 * 8 pixels, and 4 * 4 pixels.
- the size of the coding block can also be other sizes, for example, 32 * 16 pixels or 64 * 64 pixels. It can be understood that, during decoding, there is a one-to-one correspondence between coded blocks and blocks to be decoded, so the pixel size of a block to be decoded may also include 32 * 32 pixels, 16 * 16 pixels, 8 * 8 pixels, 4 * 4 pixels, and so on.
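The rate-distortion-based selection of a division method mentioned above is commonly expressed as minimizing a Lagrangian cost J = D + λR. A sketch with purely illustrative candidate values:

```python
def rate_distortion_cost(distortion, bits, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

def choose_partition(candidates, lam):
    """Pick the block-division candidate with the lowest RD cost.

    candidates: list of (name, distortion, bits) tuples (illustrative).
    """
    return min(candidates, key=lambda c: rate_distortion_cost(c[1], c[2], lam))[0]
```

A larger λ penalizes bit rate more heavily, favoring coarse divisions; a smaller λ favors fine divisions that reduce distortion.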
- a video encoding method is proposed. This embodiment is mainly described by taking the application of the method to the terminal 110 or the server 120 in FIG. 1 as an example. The method may include the following steps:
- Step S502: Obtain an input video sequence.
- the input video sequence may include multiple input video frames.
- a video frame is a unit constituting a video
- an input video sequence may be a video sequence acquired in real time by a computer device, for example, a video sequence obtained in real time through a camera of a terminal, or a video sequence stored in advance by a computer device.
- the encoding frame prediction type corresponding to each input video frame in the input video sequence can be I frame, B frame, and P frame, etc.
- the encoding frame prediction type corresponding to the input video frame can be determined according to the encoding algorithm.
- the I frame is an intra prediction frame
- the P frame is a forward prediction frame
- the B frame is a bidirectional prediction frame.
- Each coding block of the P frame and the B frame can be encoded by using an intra prediction method or an inter prediction method.
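For illustration only, a fixed "IBBP"-style assignment of encoding frame prediction types; a real encoder determines the type per frame according to its encoding algorithm:

```python
def assign_prediction_types(num_frames, gop_size=9):
    """Assign I/P/B prediction types in a fixed hypothetical GOP pattern.

    I: intra-predicted, P: forward-predicted, B: bidirectionally predicted.
    """
    types = []
    for i in range(num_frames):
        pos = i % gop_size
        if pos == 0:
            types.append("I")       # GOP starts with an intra frame
        elif pos % 3 == 0:
            types.append("P")       # periodic forward-predicted anchors
        else:
            types.append("B")       # bidirectional frames in between
    return types
```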
- Step S504: Obtain a target video sequence coding mode corresponding to the input video sequence from the candidate video sequence coding modes.
- the candidate video sequence coding mode includes a constant resolution coding mode and a mixed resolution coding mode.
- the constant resolution encoding mode means that the frames to be encoded corresponding to the input video sequence are encoded at the same resolution, such as full resolution; full-resolution encoding means encoding while keeping the resolution of the input video frame unchanged. It can be understood that, since the resolution of the input video frames in the same video sequence is generally the same, the full-resolution encoding mode here is one kind of constant-resolution encoding mode. Of course, each full-resolution input video frame of the input video sequence may also be sampled with the same sampling ratio to obtain video frames with the same resolution.
- the mixed-resolution encoding mode refers to that the resolution of the frame to be encoded corresponding to the input video sequence is adaptively adjusted, that is, the resolution of the frame to be encoded corresponding to the input video sequence is different.
- the frame to be encoded refers to a video frame directly used for encoding.
- the method for the computer device to obtain the target video sequence encoding mode corresponding to the input video sequence from the candidate video sequence encoding modes can be set as required. For example, if multiple input video sequences need to be encoded, one or more of the input video sequences may be encoded at constant resolution while the other input video sequences are encoded at mixed resolution.
- obtaining the target video sequence encoding mode corresponding to the input video sequence includes: obtaining current environment information, the current environment information including at least one of current encoding environment information and current decoding environment information; and determining the target video sequence encoding mode corresponding to the input video sequence according to the current environment information.
- the environment information may include one or more of processing capabilities of a device that performs a video encoding method, processing capabilities of a device that performs a video decoding method, and current application scenario information.
- Processing power can be expressed in terms of processing speed.
- for example, when the processing capability indicated by the current environment information is high, the corresponding target video sequence encoding mode may be a full-resolution encoding mode; when the current application scenario is a real-time application scenario, the target video sequence encoding mode may be a mixed resolution encoding mode; and when the current application scenario is a non-real-time application scenario, the target video sequence encoding mode may be a constant resolution encoding mode.
- the relationship between the current environment information and the video sequence encoding mode can be set.
- the target video sequence encoding mode corresponding to the input video sequence is obtained according to the correspondence between the current environment information and the video sequence encoding mode.
- the correspondence between the average value of the processing speed of the device performing the video encoding method and the processing speed of the device performing the video decoding method and the video sequence encoding mode can be set.
- an average value of the two processing speeds is calculated, and a target video sequence encoding mode is obtained according to the average value.
- Whether the current application scenario is a real-time application scenario can be set as required.
- the video call application scenario and online game application scenario are real-time application scenarios, and the application scenarios corresponding to video encoding on the video website and offline video encoding may be non-real-time application scenarios.
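The environment-driven decision described above might be sketched as follows; both the rule and the normalized speed threshold are assumptions for illustration, not taken from the source:

```python
def choose_sequence_coding_mode(enc_speed, dec_speed, realtime):
    """Pick the target video sequence coding mode from environment info.

    Averages the encoding and decoding processing speeds; in a real-time
    scenario with limited processing capability, mixed-resolution coding
    is selected. The 1.0 threshold is a hypothetical normalized value.
    """
    avg_speed = (enc_speed + dec_speed) / 2
    if realtime and avg_speed < 1.0:
        return "mixed-resolution"
    return "constant-resolution"
```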
- Step S506: Encode each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
- when the target video sequence coding mode is a constant resolution coding mode, the computer device performs constant resolution coding on each input video frame of the input video sequence.
- when the target video sequence encoding mode is a mixed-resolution encoding mode, the computer device performs mixed-resolution encoding on the input video sequence, that is, the resolutions of the frames to be encoded corresponding to the input video sequence differ, and the encoded data carries the corresponding resolution information.
- the target video sequence encoding mode of the input video sequence can be flexibly selected, the input video sequence is encoded according to the target video sequence encoding mode, the encoding mode of the input video sequence is adaptively adjusted, and the video encoding quality can be improved under the condition of limited bandwidth.
- step S506, that is, encoding each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data, includes: adding target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data.
- the target video sequence coding mode information is used to describe the coding mode used by the input video sequence.
- the computer equipment may add a flag bit Sequence_Mix_Resolution_Flag describing the coding mode of the target video sequence to the encoded data, and the specific flag value may be set as required.
- the video sequence encoding mode information may be sequence-level header information at the position where the encoded data is added. For example, when Sequence_Mix_Resolution_Flag is 1, the corresponding target video sequence encoding mode may be a mixed resolution encoding mode. When Sequence_Mix_Resolution_Flag is 0, the corresponding target video sequence encoding mode may be a constant resolution encoding mode.
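The Sequence_Mix_Resolution_Flag example above can be sketched as a pair of header read/write helpers; the helper names are hypothetical, and a real implementation would emit actual bits into the sequence-level header rather than a dictionary:

```python
def write_sequence_header(mode):
    """Encode Sequence_Mix_Resolution_Flag: 1 = mixed, 0 = constant."""
    flag = 1 if mode == "mixed-resolution" else 0
    return {"Sequence_Mix_Resolution_Flag": flag}

def read_sequence_header(header):
    """Decoder side: recover the target video sequence decoding mode."""
    return ("mixed-resolution" if header["Sequence_Mix_Resolution_Flag"] == 1
            else "constant-resolution")
```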
- the video coding framework is shown in FIG. 6.
- the video coding framework includes a constant-resolution coding framework and a mixed-resolution coding framework.
- the mixed-resolution encoding framework may correspond to the encoding framework in FIG. 2.
- the video sequence coding mode is determined at the video sequence coding mode acquisition module.
- when the target video sequence coding mode is the mixed-resolution coding mode, the mixed-resolution coding framework is used for coding.
- when the target video sequence coding mode is the constant-resolution coding mode, constant-resolution encoding is performed using the constant-resolution encoding framework of FIG. 6.
- the constant resolution coding framework may be a HEVC coding framework or an H.265 coding framework.
- step S506 that is, encoding each input video frame of the input video sequence according to the target video sequence encoding mode, and obtaining encoded data includes:
- Step S702 When the target video sequence coding mode is a mixed resolution coding mode, obtain a processing method corresponding to the input video frame.
- the input video frame is a video frame in an input video sequence.
- the processing method corresponding to the input video frame is selected by the computer device from the candidate processing methods.
- the candidate processing methods may include a full resolution processing method and a downsampling processing method.
- the method by which the computer device obtains the processing mode corresponding to the input video frame can be set according to actual needs. For example, the computer device may obtain processing parameters corresponding to the input video frame and obtain the corresponding processing mode according to the processing parameters.
- Processing parameters are parameters used to determine the processing mode, and specific processing parameters can be set as needed.
- the processing parameters may include at least one of current encoding information corresponding to the input video frame and image characteristics.
- a downsampling ratio and a downsampling method may also be obtained.
- the down-sampling ratio is a ratio obtained by dividing the resolution after sampling by the resolution before sampling.
- the downsampling method can be direct averaging, filtering, bicubic interpolation, or bilinear interpolation.
- the down-sampling ratio can be preset or flexibly adjusted. For example, the downsampling ratio may be set to 1/2 for all frames, or the downsampling ratio of the first input video frame of the input video sequence may be 1/2 while that of the second input video frame is 1/4.
- the downsampling ratio can also be obtained according to the encoding position of the input video frame in the video group, where the later the encoding position, the smaller the downsampling ratio.
- the downsampling direction may be one of vertical downsampling, horizontal downsampling, and a combination of both. For example, if the resolution of the video frame before sampling is 800*800 pixels, then with a downsampling ratio of 1/2 and horizontal downsampling, the resolution after sampling is 400*800 pixels; with a downsampling ratio of 1/2 and vertical downsampling, the resolution after sampling is 800*400 pixels.
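The ratio-and-direction arithmetic above can be sketched as a small helper; the direction labels are illustrative names for the three cases just listed.

```python
def sampled_resolution(width: int, height: int, ratio: float, direction: str):
    """Resolution after downsampling.

    ratio     -- resolution-after divided by resolution-before (e.g. 1/2)
    direction -- 'horizontal', 'vertical', or 'both'
    """
    w = int(width * ratio) if direction in ("horizontal", "both") else width
    h = int(height * ratio) if direction in ("vertical", "both") else height
    return w, h
```

With an 800*800 input and a ratio of 1/2, horizontal downsampling gives 400*800 and vertical downsampling gives 800*400, matching the example in the text.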
- the down-sampling ratio may be obtained according to the processor capability of a device that performs a video encoding method, such as a terminal or a server.
- a device with strong processor processing capability corresponds to a large downsampling ratio, and a device with weak processor processing capability corresponds to a small downsampling ratio. The correspondence between processor processing capability and downsampling ratio can be set; when encoding is needed, the processor processing capability is obtained, and the corresponding down-sampling ratio is obtained according to it.
- for example, the downsampling ratio corresponding to a 16-bit processor can be set to 1/8, and the downsampling ratio corresponding to a 32-bit processor can be set to 1/4.
- the down-sampling ratio may also be obtained according to the frequency or number of times that the input video frame is used as a reference frame, and a correspondence between the down-sampling ratio and that frequency or number of times may be set: if the input video frame is used as a reference frame frequently or many times, the down-sampling ratio is large; if it is used infrequently or few times, the down-sampling ratio is small. For example, an I-frame is referenced frequently, so its corresponding down-sampling ratio is large, e.g. 1/2, while a frame referenced less often has a small corresponding down-sampling ratio, e.g. 1/4.
- when the downsampling ratio is obtained according to the frequency or number of times that the input video frame is used as a reference frame, a frame that is referenced often keeps a higher resolution and better image quality. Therefore, the accuracy of prediction can be improved, the prediction residual can be reduced, and the quality of the encoded image can be improved.
- the down-sampling method may be obtained according to the processor capability of a device such as a terminal or a server that executes the video encoding method.
- the downsampling method corresponding to a device with strong processor processing capability has high complexity, and that corresponding to a device with weak processor processing capability has low complexity. The correspondence between processor processing capability and downsampling method can be set; when encoding is needed, the processing capability of the processor is obtained, and the corresponding downsampling method is obtained according to it.
- bicubic interpolation is more complex than bilinear interpolation, so the downsampling method corresponding to a 16-bit processor can be set to bilinear interpolation, and that corresponding to a 32-bit processor to bicubic interpolation.
- when the input video frame is processed by the downsampling processing method, downsampling may be performed according to different downsampling methods or downsampling ratios, making the processing of the input video frame more flexible.
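A minimal sketch of the capability-driven selection described above, combining the 16-bit and 32-bit examples from the text. The table layout and function name are illustrative; the concrete ratio and method values are the ones the text gives as examples, not a normative mapping.

```python
# Example correspondence table: processor word size -> downsampling config.
CAPABILITY_TABLE = {
    16: {"ratio": 1 / 8, "method": "bilinear"},  # weak processor: small ratio, cheap filter
    32: {"ratio": 1 / 4, "method": "bicubic"},   # strong processor: larger ratio, costlier filter
}

def sampling_config(processor_bits: int) -> dict:
    """Look up the downsampling ratio and method for a device's processor class."""
    return CAPABILITY_TABLE[processor_bits]
```

At encoding time the device queries its own processor class once and reuses the resulting configuration for the frames it downsamples.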
- the computer device may obtain a processing manner corresponding to the input video frame according to at least one of current encoding information and image feature information corresponding to the input video frame.
- the current encoding information refers to video compression parameter information obtained when the video is encoded, such as one or more of frame prediction type, motion vector, quantization parameter, video source, bit rate, frame rate, and resolution.
- Image feature information refers to information related to image content, including one or more of image motion information and image texture information, such as edges.
- the current encoding information and image feature information reflect the scene, detail complexity, or intensity of the motion corresponding to the video frame.
- the motion scene can be determined by one or more of motion vectors, quantization parameters, and bit rates. A large quantization parameter generally indicates that the motion is intense, and a large motion vector indicates that the image scene is a large-motion scene. The judgment can also be made according to the bit-rate ratio of the encoded I-frame to the encoded P-frame, or of the encoded I-frame to the encoded B-frame: if the ratio exceeds a first preset threshold, the image is judged to be still; if the ratio is smaller than a second preset threshold, the image is judged to have intense motion. Alternatively, a target object can be tracked directly according to the image content, and whether the scene is a large-motion scene is determined by the moving speed of the target object. When the bit rate is constant, the amount of information that can be expressed is limited; for scenes with intense motion, the amount of information in the time domain is large, so correspondingly less of the bit rate can be used to express information in the spatial domain.
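The I/P bit-rate-ratio judgment above can be sketched as follows. The two threshold values are invented placeholders for the "first" and "second" preset thresholds, which the text does not specify.

```python
def judge_motion_scene(i_frame_bits: int, p_frame_bits: int,
                       still_threshold: float = 4.0,
                       intense_threshold: float = 1.5) -> str:
    """Classify motion intensity from the ratio of encoded I-frame bits to
    encoded P-frame bits, per the two-threshold rule in the text."""
    ratio = i_frame_bits / p_frame_bits
    if ratio > still_threshold:
        return "still"       # I-frame dominates: almost nothing changes between frames
    if ratio < intense_threshold:
        return "intense"     # P-frames are nearly as costly as I-frames: heavy motion
    return "moderate"
```

A still scene would then lean toward full-resolution processing, while an intense-motion scene would lean toward downsampling.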
- the frame prediction type can be used to determine the screen switching scene, and the processing method can also be determined according to the influence of the frame prediction type on other frames.
- the I-frame is generally the first frame, or a frame at which the picture changes.
- the quality of the I-frame affects the quality of subsequent P-frames or B-frames, so an intra-predicted frame is more inclined toward the full-resolution processing method than an inter-predicted frame, to ensure image quality. Because P-frames can serve as reference frames for B-frames, and the image quality of a P-frame affects the image quality of subsequent B-frames, P-frame encoding is likewise more inclined toward the full-resolution processing method than B-frame encoding.
- the computer device may obtain the processing mode corresponding to the input video frame according to the magnitude relationship between the current quantization parameter corresponding to the input video frame and a quantization parameter threshold. If the current quantization parameter is greater than the threshold, the computer device determines that the processing mode is the downsampling mode; otherwise, it determines that the processing mode is the full-resolution processing mode.
- the quantization parameter threshold can be obtained according to the ratio of the intra-coded blocks of the encoded forward-coded video frame before the input video frame.
- the correspondence between the intra prediction block ratio and the quantization parameter threshold can be set in advance, so that after the intra prediction block ratio of the current frame is determined, the computer device can determine the quantization parameter corresponding to the intra prediction block ratio of the current frame according to the correspondence relationship.
- when a fixed quantization parameter is used, the current quantization parameter may be that fixed value. Otherwise, the computer device can calculate the current quantization parameter corresponding to the input video frame according to the bit rate control model, or use the quantization parameter corresponding to the reference frame as the current quantization parameter. In the embodiment of the present application, the larger the current quantization parameter, the more intense the motion generally is, and the downsampling processing method is more likely to be selected for scenes with intense motion.
- the relationship between the intra prediction block ratio and the quantization parameter threshold is a positive correlation.
- for example, the correspondence between the intra prediction block ratio Intra_0 and the quantization parameter threshold QP_TH can be determined in advance.
- step S704 the input video frame is processed according to the processing mode to obtain a frame to be encoded.
- the resolution of the frame to be encoded corresponding to the processing mode is the resolution of the input video frame or is smaller than the resolution of the input video frame.
- the frame to be encoded is obtained by processing an input video frame according to a processing mode.
- when the processing mode includes the full-resolution processing mode, the computer device may use the input video frame directly as the frame to be encoded.
- when the processing mode includes the downsampling processing mode, the computer device can downsample the input video frame to obtain the frame to be encoded. For example, when the resolution of the input video frame is 800*800 pixels and the processing method is 1/2 downsampling in both the horizontal and vertical directions, the resolution of the frame to be encoded obtained by downsampling is 400*400 pixels.
- Step S706 Under the resolution of the frame to be encoded, encode the frame to be encoded to obtain encoded data corresponding to the input video frame.
- the encoding may include at least one of prediction, transform, quantization, and entropy encoding.
- the computer device performs intra prediction on the frame to be encoded at the resolution of the frame to be encoded.
- when the frame to be encoded is a P-frame or a B-frame, the current reference frame corresponding to the frame to be encoded can be obtained, the prediction residual can be obtained according to the current reference frame, and the prediction residual can then be transformed, quantized, and entropy encoded to obtain the encoded data corresponding to the input video frame.
- in the process of obtaining the encoded data, at least one of the current reference frame, the position information corresponding to each coding block of the frame to be encoded, the position information corresponding to each reference block of the current reference frame, and the motion vector is processed according to the resolution of the frame to be encoded.
- the current reference frame can be processed according to the resolution information of the frame to be encoded to obtain the target reference frame, and the target reference block corresponding to each encoding block in the frame to be encoded is obtained from the target reference frame.
- the target reference block is predicted to obtain the predicted value corresponding to the coded block, and then the prediction residual is obtained according to the difference between the actual value and the predicted value of the coded block.
- the position information of the encoded block or the position information of the decoded block may be transformed according to the resolution information of the current reference frame and the frame to be encoded, so that the position information corresponding to the frame to be encoded and the position information of the current reference frame are on the same quantization scale; the target motion vector is then obtained according to the transformed position information, which reduces the value of the target motion vector and the data amount of the encoded data.
- the first motion vector corresponding to an encoding block of the frame to be encoded is calculated at the resolution of the frame to be encoded, and is then transformed according to the resolution information of the frame to be encoded and the resolution information of the target motion vector unit to obtain a target motion vector at the target resolution. For example, suppose the resolution of the frame to be encoded is 400*800 pixels and the resolution of the current reference frame is 800*1600 pixels. Then the current reference frame can be down-sampled by 1/2 according to the resolution of the frame to be encoded, so that the resolution of the target reference frame is 400*800 pixels, and the video is encoded according to the target reference frame.
- in this embodiment, the input video frame is processed according to the processing mode to obtain the frame to be encoded, where the resolution of the frame to be encoded is either the resolution of the input video frame or smaller than it, and the frame to be encoded is then encoded at its own resolution to obtain the encoded data corresponding to the input video frame. Because the processing method of the video frame can be flexibly selected, the resolution of the input video frame is adaptively adjusted, the data amount to be encoded is reduced, and encoding at the resolution of the frame to be encoded yields accurate encoded data.
- step S706, that is, encoding the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame, includes: adding processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame.
- the processing mode information is used to describe the processing mode of the input video frame.
- a flag bit Frame_Resolution_Flag describing the processing mode may be added to the encoded data, that is, a syntax element describing the processing mode information is added to the encoded data.
- the value of the flag bit corresponding to each processing method can be set as required. For example, when the processing mode is the full resolution processing mode, the corresponding Frame_Resolution_Flag can be 0, and when the processing mode is the downsampling processing mode, the corresponding Frame_Resolution_Flag can be 1.
- the processing mode information may be added to the frame-level header information corresponding to the encoded data, for example, it may be added to a preset position of the frame-level header information.
- the frame-level header information is the header information of the encoded data corresponding to the input video frame
- the sequence-level header information is the header information of the encoded data corresponding to the video sequence
- the group-level header information is the header information of the encoded data corresponding to a video group (GOP, Group of Pictures). A video frame sequence may include multiple video groups, and a video group may include multiple video frames. The box indicated by the dotted line in the figure shows the frame-level header information of the encoded data corresponding to each input video frame.
- the processing method corresponding to the first input video frame and the second input video frame is a full resolution processing method
- the processing method corresponding to the third input video frame is a downsampling processing method.
- the down-sampling processing mode information used when down-sampling the input video frame may also be added to the encoded data corresponding to the input video frame, so that when the decoding end obtains the encoded data, it can decode according to the down-sampling processing mode information.
- the downsampling processing mode information includes at least one of downsampling method information and downsampling ratio information.
- the position where the downsampling method information is added in the encoded data may be one of the corresponding group-level header information, sequence-level header information, and frame-level header information.
- the position where the downsampling method information is added to the encoded data can be determined according to the corresponding scope of the downsampling method.
- the addition position of the down-sampling ratio information in the encoded data may be any one of the corresponding group-level header information, sequence-level header information, and frame-level header information.
- the addition position of the down-sampling ratio information in the encoded data can be determined according to the scope of action corresponding to the down-sampling scale, which refers to the applicable range. For example, if the scope of the downsampling ratio is a video group, the downsampling ratio information corresponding to the video group may be added to the header information corresponding to the video group.
- when the downsampling ratio information is added to the sequence-level header information corresponding to the video sequence, it indicates that each video frame of the video sequence is downsampled with the downsampling ratio corresponding to that downsampling ratio information.
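The scope-to-header placement rule above can be sketched as a simple mapping; the scope labels are illustrative names for the three ranges the text discusses (whole sequence, one video group, one frame).

```python
def header_level_for_scope(scope: str) -> str:
    """Decide which header carries down-sampling ratio/method information,
    based on the applicable range ('scope of action') of that information."""
    return {
        "video_sequence": "sequence-level header",
        "video_group": "group-level header",
        "video_frame": "frame-level header",
    }[scope]
```

Writing the information once at the widest applicable level avoids repeating it per frame, which is the bandwidth motivation behind the rule.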
- acquiring a processing method corresponding to the input video frame includes: acquiring a processing parameter corresponding to the input video frame, and determining a processing method corresponding to the input video frame according to the processing parameter.
- Adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame includes: when the processing parameters cannot be reproduced during the decoding process, adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame .
- the processing parameters may include at least one of image encoding information and image feature information corresponding to the input video frame.
- the non-reproducibility of processing parameters during decoding means that the processing parameters cannot be obtained or generated during decoding.
- for example, if the processing parameters are information corresponding to the image content of the input video frame, then because image information is lost during encoding, the decoded video frame at the decoding end differs from the input video frame; the information corresponding to the image content of the input video frame therefore cannot be obtained during decoding, that is, it cannot be reproduced during the decoding process.
- as another example, the rate-distortion cost needs to be calculated during encoding but is not calculated during decoding; when the processing parameters include the rate-distortion cost, they cannot be reproduced during the decoding process.
- similarly, the PSNR (Peak Signal-to-Noise Ratio) information between the reconstructed video frame and the input video frame is obtained during the encoding process but cannot be obtained during decoding, so PSNR information cannot be reproduced during the decoding process.
- by contrast, processing parameters such as the number of intra-coded blocks and the number of inter-coded blocks corresponding to the input video frame are available at the decoding end and can be reproduced. When the processing parameters can be reproduced, the computer device may add the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame, or may omit it. When the processing mode information is added to the encoded data, the decoding end can read it directly and need not derive the processing mode from the processing parameters; when it is omitted, the decoding device determines the processing mode consistent with the encoding end according to the processing parameters, which reduces the data amount of the encoded data.
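A minimal sketch of the signaling decision just described. The two parameter sets follow the examples in the text (rate-distortion cost and PSNR are non-reproducible; intra/inter block counts are reproducible); the parameter names themselves are illustrative.

```python
# Parameters the decoder can regenerate vs. ones it cannot, per the text's examples.
REPRODUCIBLE = {"intra_block_count", "inter_block_count"}
NON_REPRODUCIBLE = {"rate_distortion_cost", "psnr"}

def must_signal_processing_mode(used_parameters) -> bool:
    """Processing mode info must be written into the encoded data only when
    some parameter used to choose the mode cannot be reproduced at the
    decoding end; otherwise the decoder re-derives the mode itself."""
    return any(p in NON_REPRODUCIBLE for p in used_parameters)
```

Skipping the flag whenever the decision is reproducible is exactly the bit saving the paragraph above attributes to this scheme.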
- step S706 that is, encoding the frames to be encoded to obtain the encoded data corresponding to the input video frames at the resolution of the frames to be encoded includes:
- Step S802 Acquire a current reference frame corresponding to the frame to be encoded.
- the current reference frame is a video frame that needs to be referenced when encoding the frame to be encoded
- the current reference frame is a video frame obtained by reconstructing data that has been encoded before the frame to be encoded.
- the number of current reference frames corresponding to the frame to be encoded may be one or more. For example, when the frame to be encoded is a P-frame, the corresponding reference frame may be one; when the frame to be encoded is a B-frame, the corresponding reference frames may be two.
- the reference frame corresponding to the frame to be encoded may be obtained according to a reference relationship, which is determined by the particular video codec standard. For example, when the frame to be encoded is a B-frame, the corresponding reference frames may be the I-frame of the video group and the video frame obtained by encoding and then decoding and reconstructing the fourth frame of the video group.
- acquiring the current reference frame corresponding to the frame to be encoded includes: acquiring a first reference rule, where the first reference rule includes a resolution relationship between the frame to be encoded and the current reference frame, and acquiring the current reference frame corresponding to the frame to be encoded according to the first reference rule.
- the first reference rule determines a limitation relationship between the resolution size of the frame to be encoded and the current reference frame, and the resolution size relationship includes at least one of the same resolution and different resolution of the frame to be encoded and the current reference frame.
- the first reference rule may further include a reference rule for a processing manner of the resolution of the frame to be encoded and the current reference frame.
- the processing mode reference rule may include one or both of: a frame to be encoded in the full-resolution processing mode may refer to a reference frame in the full-resolution processing mode, and a frame to be encoded in the downsampling processing mode may refer to a reference frame in the downsampling processing mode.
- when the resolutions are different, the first reference rule may further include one or both of: the resolution of the frame to be encoded is greater than the resolution of the current reference frame, and the resolution of the frame to be encoded is less than the resolution of the current reference frame. Accordingly, the first reference rule may specifically include one or more of: an original-resolution frame to be encoded may refer to a down-sampled-resolution reference frame, a down-sampled-resolution frame to be encoded may refer to an original-resolution reference frame, an original-resolution frame to be encoded may refer to an original-resolution reference frame, and a down-sampled-resolution frame to be encoded may refer to a down-sampled-resolution reference frame.
- an original-resolution frame to be encoded means that the resolution of the frame to be encoded is the same as the resolution of its corresponding input video frame, and an original-resolution reference frame means that the resolution of the reference frame is the same as the resolution of its corresponding input video frame.
- the down-sampling resolution to-be-encoded frame refers to that the to-be-encoded frame is obtained by performing downsampling processing on a corresponding input video frame.
- a down-sampled reference frame refers to a reference frame obtained by down-sampling the corresponding input video frame. After the first reference rule is obtained, the current reference frame corresponding to the frame to be encoded is obtained according to the first reference rule, so that the obtained current reference frame satisfies the first reference rule.
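The first reference rule can be sketched as a set of allowed (frame, reference) resolution-mode pairs; this encoding of the rule is an illustrative assumption, not the bitstream representation.

```python
def reference_allowed(frame_mode: str, ref_mode: str, rule) -> bool:
    """Check whether a frame to be encoded may use a given reference frame
    under the first reference rule. `rule` is a set of allowed
    (frame-resolution-mode, reference-resolution-mode) pairs."""
    return (frame_mode, ref_mode) in rule

# Example rule: original-resolution frames may reference either kind of
# reference frame, down-sampled frames only down-sampled reference frames.
first_reference_rule = {
    ("original", "original"),
    ("original", "downsampled"),
    ("downsampled", "downsampled"),
}
```

During reference-frame acquisition, candidates that fail this check are skipped, so the current reference frame that is finally selected satisfies the rule by construction.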
- step S706, that is, encoding the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame, includes: adding rule information corresponding to the first reference rule to the encoded data corresponding to the input video frame.
- the rule information is used to describe the adopted reference rule, and the computer device may add a flag bit Resolution_Referencer_Rules describing the reference rule to the encoded data.
- the reference rule represented by the specific flag value can be set as required.
- the addition position of the rule information in the encoded data may be one or more of group-level header information, sequence-level header information, and frame-level header information.
- the addition position of the rule information in the encoded data may be determined according to the scope of action of the first reference rule. For example, when the first reference rule is that an original-resolution frame to be encoded can refer to a down-sampled reference frame, the corresponding Resolution_Referencer_Rules can be 1; when the first reference rule is that a down-sampled frame to be encoded can refer to a down-sampled-resolution reference frame, the corresponding Resolution_Referencer_Rules can be 2. If the video sequences use the same first reference rule, the addition position of the rule information in the encoded data may be the sequence-level header information. If the first reference rule is a reference rule adopted by one of the video groups, the addition position of the rule information in the encoded data is the group-level header information corresponding to the video group using the first reference rule.
- Step S804 Under the resolution of the frame to be encoded, encode the frame to be encoded according to the current reference frame to obtain encoded data corresponding to the input video frame.
- the computer device may obtain a current reference frame corresponding to the frame to be encoded, perform prediction based on the current reference frame to obtain a prediction residual, and perform transformation, quantization, and entropy encoding on the prediction residual to obtain encoded data corresponding to the input video frame.
- the computer device uses the resolution of the frame to be encoded to process at least one of the current reference frame, the position information corresponding to each encoding block of the frame to be encoded, the position information corresponding to each reference block of the current reference frame, and the motion vector.
- the computer device may obtain a reference block corresponding to the encoding block of the frame to be encoded from the current reference frame, and encode the encoding block according to the reference block.
- the current reference frame can also be processed according to the resolution of the frame to be encoded to obtain the corresponding target reference frame; the target reference block corresponding to each encoding block of the frame to be encoded is obtained from the target reference frame, and the encoding block is encoded according to the target reference block to obtain the encoded data corresponding to the input video frame.
- encoding the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame includes: obtaining the encoding mode used when encoding the frame to be encoded at that resolution, and adding the encoding mode information corresponding to the encoding mode to the encoded data corresponding to the input video frame.
- the encoding method is a processing method related to performing encoding.
- it may include one or more of: the upsampling method used when decoding and reconstructing a reference frame during encoding, the adopted reference rule, the sampling method used for sampling the reference frame, and the resolution corresponding to the motion vector.
- alternatively, the encoding mode information corresponding to the encoding mode may not be added to the encoded data. Instead, an encoding method is set in advance in the encoding and decoding standard and the corresponding decoding method is set at the decoding end, or the encoding end and the decoding end calculate matching encoding and decoding modes according to the same or corresponding algorithms. For example, the method set in advance in the codec standard for upsampling the current reference frame during encoding is the same as the method for upsampling the current reference frame during decoding.
- step S804 that is, encoding the frame to be encoded according to the current reference frame, and obtaining the encoded data corresponding to the input video frame includes:
- Step S902 Perform sampling processing on the current reference frame according to the resolution information of the frame to be encoded to obtain a corresponding target reference frame.
- the target reference frame is a video frame obtained by sampling the current reference frame.
- the sampling process samples the current reference frame using the resolution information of the frame to be encoded, so that the resolution information of the resulting target reference frame matches it.
- the computer device may first determine a sampling method, and the sampling method includes one of a direct sub-pixel interpolation method and a post-sampling sub-pixel interpolation method.
- the direct sub-pixel interpolation method performs sub-pixel interpolation directly on the current reference frame, while the post-sampling sub-pixel interpolation method first performs sampling processing on the current reference frame and then performs sub-pixel interpolation on the sampled frame.
- Sub-pixel interpolation is the process of obtaining reference data at the sub-pixel level by interpolating the reference data of whole pixels in the current reference frame.
- FIGS. 9B and 9C are schematic diagrams of interpolation of the current reference frame in one embodiment.
- A1, A2, A3, B1, B2, B3 and other pixels are 2 * 2 integer pixels in the current reference frame.
- the reference data of sub-pixels is calculated based on the reference data of these integer pixels.
- the reference data of sub-pixel point a23 can be calculated by averaging the reference data of the three whole pixels A1, A2, and A3, and the reference data of sub-pixel point a21 can be calculated by averaging the reference data of the three whole pixels A2, B2, and C2.
- the reference data of sub-pixel point a22 is then calculated according to the reference data of sub-pixel points a23 and a21, implementing 1/2-pixel precision interpolation on the current reference frame. Referring to FIG. 9C, A1, A2, A3, B1, B2, B3 and other pixels are 4 * 4 integer pixels in the current reference frame.
- 15 sub-pixel reference data are calculated to realize 1/4-pixel precision interpolation on the current reference frame.
- the reference data of the sub-pixel point a8 is calculated according to the reference data of the whole pixels of A2 and B2
- the reference data of the sub-pixel point a2 is calculated according to the reference data of the whole pixels of A2 and A3.
- the reference data of 15 sub-pixels realizes 1/4 pixel precision interpolation of the whole pixel A2.
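The 1/2-pixel interpolation described above can be sketched as follows. This is a simplified two-tap averaging sketch: real codecs use longer interpolation filters, and the figure's three-pixel averaging differs slightly, so the function name and averaging taps here are illustrative only.

```python
def half_pel_interpolate(frame):
    """Insert half-pixel samples between the integer pixels of a reference
    frame by simple averaging (illustrative 1/2-pel interpolation sketch)."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * (2 * w - 1) for _ in range(2 * h - 1)]
    # copy the whole (integer) pixel reference data
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x] = float(frame[y][x])
    # horizontal half-pels: average of the left/right integer neighbours
    for y in range(h):
        for x in range(w - 1):
            out[2 * y][2 * x + 1] = (frame[y][x] + frame[y][x + 1]) / 2
    # vertical half-pels: average of the top/bottom integer neighbours
    for y in range(h - 1):
        for x in range(w):
            out[2 * y + 1][2 * x] = (frame[y][x] + frame[y + 1][x]) / 2
    # diagonal half-pels: average of the two neighbouring half-pels
    for y in range(h - 1):
        for x in range(w - 1):
            out[2 * y + 1][2 * x + 1] = (out[2 * y + 1][2 * x] +
                                         out[2 * y + 1][2 * x + 2]) / 2
    return out

reference = [[10, 20], [30, 40]]
target = half_pel_interpolate(reference)
# target[0] == [10.0, 15.0, 20.0]
```

The 2 * 2 integer pixels become a 3 * 3 grid: integer pixels sit at even indices and the interpolated half-pel samples sit in between, mirroring the a21/a23/a22 construction in FIG. 9B.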
- the target reference frame is obtained by performing sub-pixel interpolation on the current reference frame, so motion estimation for the frame to be encoded can be performed based on a higher-resolution target reference frame, thereby improving the accuracy of motion estimation and the encoding quality.
- the encoding end and the decoding end can set the sampling method used when processing the current reference frame to obtain the target reference frame in their respective encoding and decoding rules.
- the sampling method used should be the same.
- the sampling mode corresponding to the processing of the current reference frame is determined according to the setting.
- encoding the to-be-encoded frame to obtain the encoded data corresponding to the input video frame includes: adding the sampling mode information corresponding to the sampling processing of the current reference frame to the encoded data corresponding to the input video frame.
- the addition position of the sampling mode information corresponding to the sampling process of the current reference frame in the encoded data may be any one of the corresponding sequence-level header information, group-level header information, and frame-level header information.
- the addition position of the sampling mode information in the encoded data can be determined according to the corresponding scope of the sampling mode.
- the sampling mode information can be added to the frame-level header information of the encoded data corresponding to the input video frame, indicating that the current reference frame corresponding to the input video frame was subjected to sub-pixel interpolation processing using the sampling method indicated by the sampling mode information when it was encoded. For example, when the flag bit Pixel_Sourse_Interpolation used to determine the sampling mode in the frame-level header information of the encoded data is 0, the current reference frame corresponding to the input video frame is subjected to sub-pixel interpolation processing directly; when Pixel_Sourse_Interpolation is 1, the current reference frame corresponding to the input video frame is subjected to sampling processing first and then sub-pixel interpolation processing.
- the decoding end can perform sub-pixel interpolation processing on the current reference frame according to the sub-pixel interpolation mode indicated by the flag bit in the encoded data to obtain the target reference frame, so that the encoded data can be decoded according to the target reference frame to obtain a reconstructed video frame.
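As a minimal illustration of how a decoder might branch on this flag, the following sketch uses the flag name Pixel_Sourse_Interpolation from the text above; the dict-based header layout and the function name are hypothetical, not from the patent:

```python
# Flag values as described in the text.
DIRECT_SUBPIXEL_INTERPOLATION = 0   # interpolate the current reference frame directly
SAMPLE_THEN_INTERPOLATE = 1         # sample first, then sub-pixel interpolate

def select_sampling_mode(frame_header):
    """Return the sampling mode indicated by the frame-level header.

    `frame_header` is a hypothetical dict standing in for the parsed
    frame-level header information of the encoded data.
    """
    flag = frame_header["Pixel_Sourse_Interpolation"]
    if flag == DIRECT_SUBPIXEL_INTERPOLATION:
        return "direct_subpixel_interpolation"
    return "sample_then_subpixel_interpolation"
```

The decoder would then run the same interpolation (or sampling plus interpolation) path that the encoder used, so both ends derive an identical target reference frame.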
- the computer device may determine the ratio for sampling the current reference frame according to the proportional relationship between the resolution of the frame to be encoded and the resolution of the current reference frame. For example, suppose the resolution of the input video frame is 2M * 2N. If the current input video frame is processed at full resolution, that is, the current input video frame is directly used as the frame to be encoded, the resolution of the frame to be encoded is 2M * 2N.
- if the input video frame that can be used as a reference frame is processed by the downsampling processing method and the resolution of the reference frame to be encoded after downsampling is M * 2N, then the resolution of the current reference frame obtained after reconstruction is also M * 2N; it is then determined that the current reference frame is upsampled with a sampling ratio of 2 in width and 1 in height to obtain a frame with the same resolution as the frame to be encoded. If the current input video frame is processed by the downsampling processing method so that the resolution of the frame to be encoded after downsampling is M * N, while the input video frame that can be used as a reference frame is processed by the full-resolution processing method so that the resolution of the current reference frame obtained after reconstruction is 2M * 2N, it is determined that the current reference frame is downsampled with a sampling ratio of 1/2 in both width and height to obtain a frame with the same resolution as the frame to be encoded.
- the computer device may determine the ratio for sampling the current reference frame according to the downsampling ratio used when downsampling the input video frame to obtain the frame to be encoded and the downsampling ratio used when downsampling the input video frame that can be used as a reference frame to obtain the reference frame to be encoded. For example, the input video frame is downsampled at a sampling ratio of 1/2 to obtain the frame to be encoded, and the input video frame that can be used as a reference frame is downsampled at a sampling ratio of 1/4 to obtain the reference frame to be encoded.
- the downsampling ratio corresponding to the current reference frame obtained by reconstruction from the encoded data of the reference frame to be encoded is also 1/4. Then, according to the multiple relationship between the two downsampling ratios, it is determined that the current reference frame is upsampled at a sampling ratio of 2 to obtain a frame with the same resolution as the frame to be encoded.
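The ratio derivation above can be expressed with exact fractions; this is a sketch, and the function and parameter names are illustrative:

```python
from fractions import Fraction

def reference_sampling_ratio(frame_ds_ratio, ref_ds_ratio):
    """Sampling ratio to apply to the current reference frame so that its
    resolution matches the frame to be encoded.

    Both arguments are downsampling ratios relative to the original input
    video frame (1 means full resolution).  A result greater than 1 means
    the reference frame must be upsampled; less than 1 means downsampled.
    """
    return Fraction(frame_ds_ratio) / Fraction(ref_ds_ratio)

# Frame to be encoded downsampled by 1/2, reference frame by 1/4:
# the current reference frame must be upsampled by a factor of 2.
ratio = reference_sampling_ratio(Fraction(1, 2), Fraction(1, 4))
```

Using `Fraction` keeps the multiple relationship between the two downsampling ratios exact, which matters when the ratios are later compared or signalled.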
- the sampling method for sampling the current reference frame matches the sampling algorithm used to downsample the input video frame to obtain the frame to be encoded; that is, if the current reference frame needs to be downsampled, the downsampling algorithm is the same as the downsampling algorithm used to downsample the input video frame to obtain the frame to be encoded.
- if the current reference frame needs to be upsampled, the upsampling algorithm is the inverse of the downsampling algorithm used to downsample the input video frame to obtain the current frame to be encoded.
- making the sampling algorithm for the current reference frame match the downsampling algorithm used to obtain the current encoded video frame can further improve the degree of image matching between the current reference frame and the current encoded video frame, further improving the accuracy of inter prediction, reducing the prediction residual, and improving the quality of the encoded image.
- Step S904 Encode the frame to be encoded according to the target reference frame to obtain encoded data corresponding to the input video frame.
- an image block similar to the coding block is searched from the target reference frame as a target reference block, and a pixel difference between the coding block and the target reference block is calculated to obtain a prediction residual.
- a first motion vector is obtained according to the displacement of the coding block and the corresponding target reference block. Coded data is obtained based on the first motion vector and the prediction residual.
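The search-and-residual steps above can be sketched with a minimal full-search block matcher. This is illustrative only: the sum of absolute differences (SAD) is used here as the similarity measure, and the function names, search radius, and data layout are not from the patent.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def motion_search(ref, block, top, left, radius=1):
    """Find the reference block most similar to the coding block within a
    small search window; return the first motion vector (dx, dy) and the
    prediction residual (coding block minus matched reference block)."""
    bh, bw = len(block), len(block[0])
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(ref) - bh and 0 <= x <= len(ref[0]) - bw:
                cand = [row[x:x + bw] for row in ref[y:y + bh]]
                cost = sad(cand, block)
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy), cand)
    _, mv, ref_block = best
    residual = [[b - r for b, r in zip(br, rr)]
                for br, rr in zip(block, ref_block)]
    return mv, residual
```

A real encoder searches the target reference frame at sub-pixel positions and with far larger windows; the point here is only the structure of the step: locate the target reference block, form the motion vector from the displacement, and keep the pixel difference as the prediction residual.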
- the computer device may transform the first motion vector according to the target motion vector unit resolution information to obtain a target motion vector at the target resolution, and generate encoded data according to the target motion vector and the prediction residual.
- the method of transforming the first motion vector according to the target motion vector unit resolution information to obtain the target motion vector is described later.
- the computer device may also calculate the vector difference between the target motion vector and the corresponding predicted motion vector, encode the vector difference to obtain encoded data, and further reduce the amount of encoded data.
- the step of calculating the vector difference value may include: obtaining an initial predicted motion vector corresponding to the current coding block; and obtaining a second vector transformation coefficient according to the current motion vector unit resolution information and the target motion vector unit resolution information corresponding to the initial predicted motion vector;
- the target prediction motion vector corresponding to the current coding block is obtained according to the initial prediction motion vector and the second vector transformation coefficient; and the motion vector difference is obtained according to the target motion vector and the target prediction motion vector.
- the target prediction motion vector is a motion vector at a target resolution, and a method of calculating a vector difference value is described later.
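Under the assumption that the second vector transformation coefficient is the ratio of the target motion vector's unit resolution to the predicted vector's unit resolution (an assumption for illustration; the exact formula is described later in the document), the steps above read:

```python
from fractions import Fraction

def motion_vector_difference(target_mv, initial_pred_mv,
                             pred_unit_res, target_unit_res):
    """Sketch of the vector-difference steps (names are illustrative).

    The second vector transformation coefficient rescales the initial
    predicted motion vector from its own unit resolution to the target
    motion vector's unit resolution; the difference between the target
    motion vector and the rescaled prediction is what gets encoded.
    """
    coeff = Fraction(target_unit_res) / Fraction(pred_unit_res)
    target_pred_mv = (initial_pred_mv[0] * coeff, initial_pred_mv[1] * coeff)
    return (target_mv[0] - target_pred_mv[0],
            target_mv[1] - target_pred_mv[1])

# Predicted vector stored at 1/2 resolution, target vector at full
# resolution: the prediction is scaled by 2 before differencing.
mvd = motion_vector_difference((4, 4), (1, 2), Fraction(1, 2), 1)
```

Encoding the small difference rather than the full target motion vector is what reduces the amount of encoded data.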
- step S902, performing sampling processing on the current reference frame according to the resolution information of the frame to be encoded to obtain the corresponding target reference frame, includes: performing sampling processing on the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame.
- the motion estimation pixel accuracy is a unit length of a motion vector corresponding to a coding block in a frame to be coded.
- when a computer device encodes a coding block in the frame to be coded, it can refine the unit length of the motion vector corresponding to the coding block according to the obtained motion estimation pixel accuracy.
- the motion vector obtained in this way is more precise and accurate. The current reference frame is sampled according to the obtained motion estimation pixel accuracy to obtain the target reference frame, the first motion vector corresponding to each coding block in the frame to be encoded is then calculated according to the target reference frame, and encoding is performed based on the first motion vector to obtain the encoded data corresponding to the frame to be encoded.
- the computer device can obtain the resolution information of the current reference frame and determine, according to the sub-pixel interpolation method adopted for the frame to be encoded, the resolution information of the frame to be encoded, the resolution information of the current reference frame, and the motion estimation pixel accuracy corresponding to the frame to be encoded, which sampling processing method is performed on the current reference frame, as well as the sampling ratio and the pixel interpolation accuracy corresponding to the sampling processing.
- the size of the motion estimation pixel accuracy can be set as required, for example, it is generally 1/2 pixel accuracy, 1/4 pixel accuracy, or 1/8 pixel accuracy.
- the computer device may configure a corresponding motion estimation pixel accuracy for the frame to be encoded according to the image feature information of the frame to be encoded.
- the image feature information may be, for example, the size of the frame to be encoded, texture information, motion speed, and the like.
- a variety of image feature information can be combined to determine the motion estimation pixel accuracy corresponding to the frame to be encoded: the more complex the image data carried by the frame to be encoded and the richer the image information, the higher the corresponding motion estimation pixel accuracy. For example, when performing P-frame inter prediction, a higher motion estimation pixel accuracy can be used to calculate the motion vector corresponding to each coding block in the P frame, and when performing B-frame inter prediction, a lower motion estimation pixel accuracy can be used to calculate the motion vector corresponding to each coding block in the B frame.
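A purely illustrative accuracy-selection policy based on the P-frame/B-frame example above; this is not the patent's actual rule, and both the function name and the chosen fractions are assumptions:

```python
from fractions import Fraction

def motion_estimation_accuracy(frame_type):
    """Illustrative policy: finer motion-estimation pixel accuracy for
    P-frame inter prediction than for B-frame inter prediction, as the
    text suggests.  The specific values 1/4 and 1/2 are placeholders."""
    return Fraction(1, 4) if frame_type == "P" else Fraction(1, 2)
```

A real implementation would also weigh image feature information such as frame size, texture, and motion speed when choosing the accuracy.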
- sampling the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame includes: calculating the pixel interpolation accuracy according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy; and performing sub-pixel interpolation processing directly on the current reference frame according to the pixel interpolation accuracy to obtain the corresponding target reference frame.
- the pixel interpolation accuracy is the pixel accuracy corresponding to the sub-pixel interpolation of the current reference frame.
- when the sub-pixel interpolation method is the direct sub-pixel interpolation method, the current reference frame can be directly subjected to sub-pixel interpolation processing to obtain the target reference frame. The pixel interpolation accuracy can therefore be calculated according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy: the ratio of the resolution information of the current reference frame to the resolution information of the frame to be encoded is calculated, and the pixel interpolation accuracy is obtained according to this ratio and the motion estimation pixel accuracy.
- the data of some sub-pixels in the current reference frame can be directly reused as the data of the sub-pixels corresponding to the motion estimation pixel accuracy.
- for example, the resolution of the frame to be encoded is M * N, and the resolution of the current reference frame is 2M * 2N. If the motion estimation pixel accuracy is 1/2, the pixel interpolation accuracy is 1, and the current reference frame can be used directly as the target reference frame; if the motion estimation pixel accuracy is 1/4, the calculated pixel interpolation accuracy is 1/2, and sub-pixel interpolation processing with 1/2 pixel interpolation accuracy can be performed on the current reference frame to obtain the target reference frame.
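Both numeric cases above follow from multiplying the motion estimation pixel accuracy by the reference-to-encoded resolution ratio. A one-dimensional sketch using widths only (the names are illustrative):

```python
from fractions import Fraction

def pixel_interpolation_accuracy(frame_width, ref_width, me_accuracy):
    """Pixel interpolation accuracy for the direct sub-pixel interpolation
    method: the motion estimation pixel accuracy scaled by the ratio of
    the current reference frame's resolution to the encoded frame's
    resolution."""
    ratio = Fraction(ref_width, frame_width)
    return Fraction(me_accuracy) * ratio

# Frame to be encoded M*N, current reference frame 2M*2N (here M = 960):
# 1/2-pel motion estimation -> accuracy 1 (reuse the reference frame as-is)
# 1/4-pel motion estimation -> accuracy 1/2 (half-pel interpolation needed)
```

An accuracy of 1 means every needed sample already exists as an integer pixel of the higher-resolution reference frame, which is why the current reference frame can be used directly as the target reference frame in that case.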
- the current reference frame is directly subjected to sub-pixel interpolation processing according to the motion estimation pixel accuracy to obtain the corresponding target reference frame.
- the resolution of the frame to be encoded is the same as the resolution of the current reference frame.
- if the input video frame is processed by the downsampling method to obtain the frame to be encoded, and the current reference frame is also reconstructed from encoded data produced by the downsampling method with the same sampling ratio, then the resolution of the frame to be encoded is the same as the resolution of the current reference frame.
- the target reference frame can be obtained by directly performing sub-pixel interpolation processing on the current reference frame based on the motion estimation pixel accuracy, and the pixel interpolation accuracy corresponding to the sub-pixel interpolation processing is the same as the motion estimation pixel accuracy.
- sampling the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame includes: performing sampling processing on the current reference frame according to the resolution information of the frame to be encoded to obtain an intermediate reference frame; and performing pixel interpolation on the intermediate reference frame according to the motion estimation pixel accuracy to obtain the target reference frame.
- when the sub-pixel interpolation method corresponding to the frame to be encoded is the post-sampling sub-pixel interpolation method, the current reference frame is first sampled to obtain an intermediate reference frame with the same resolution as the frame to be encoded, and sub-pixel interpolation is then performed on the intermediate reference frame to obtain the corresponding target reference frame.
- when the resolution indicated by the resolution information of the frame to be encoded is smaller than the resolution of the current reference frame, the current reference frame is downsampled according to the resolution information of the frame to be encoded to obtain an intermediate reference frame, and pixel interpolation is then performed on the intermediate reference frame according to the motion estimation pixel accuracy corresponding to the frame to be encoded to obtain the target reference frame.
- An example is as follows: an input video frame with a resolution of 2M * 2N is downsampled by the downsampling processing method to obtain a frame to be encoded with a resolution of M * N, while the resolution of the current reference frame is 2M * 2N (full-resolution processing method); the current reference frame is then downsampled at a sampling ratio of 1/2 to obtain an intermediate reference frame with a resolution of M * N.
- if the motion estimation pixel accuracy corresponding to the frame to be encoded is 1/2, pixel interpolation is then performed on the intermediate reference frame with a pixel interpolation accuracy equal to the motion estimation pixel accuracy, that is, 1/2 pixel interpolation accuracy, to obtain the target reference frame; if the motion estimation pixel accuracy corresponding to the frame to be encoded is 1/4, sub-pixel interpolation processing is performed on the intermediate reference frame with 1/4 sub-pixel interpolation accuracy to obtain the target reference frame.
- when the resolution indicated by the resolution information of the frame to be encoded is greater than the resolution of the current reference frame, the computer device upsamples the current reference frame according to the resolution information of the frame to be encoded to obtain an intermediate reference frame, and then performs sub-pixel interpolation processing on the intermediate reference frame according to the motion estimation pixel accuracy corresponding to the frame to be encoded to obtain the target reference frame. For example, if the resolution of the frame to be encoded is 2M * 2N and the resolution of the current reference frame is 1/2M * 1/2N, the current reference frame needs to be upsampled at a sampling ratio of 4 to obtain an intermediate reference frame with the same resolution as the frame to be encoded.
- if the motion estimation pixel accuracy is 1/2, sub-pixel interpolation processing continues to be performed on the obtained intermediate reference frame with 1/2 pixel interpolation accuracy to obtain the target reference frame; if the motion estimation pixel accuracy is 1/4, sub-pixel interpolation processing continues to be performed on the obtained intermediate reference frame with 1/4 pixel interpolation accuracy to obtain the target reference frame.
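The post-sampling method thus splits into two factors: a resampling ratio that matches the reference frame to the encoded frame's resolution, and an interpolation accuracy equal to the motion estimation pixel accuracy. A one-dimensional sketch using widths only (names illustrative):

```python
from fractions import Fraction

def post_sampling_plan(frame_width, ref_width, me_accuracy):
    """Plan for the post-sampling sub-pixel interpolation method: first
    resample the current reference frame to the encoded frame's
    resolution, then interpolate the intermediate reference frame at the
    motion estimation pixel accuracy."""
    resample_ratio = Fraction(frame_width, ref_width)  # >1 upsample, <1 downsample
    return resample_ratio, Fraction(me_accuracy)

# 2M*2N reference vs M*N frame to be encoded (here M = 960), 1/2-pel
# estimation: downsample the reference by 1/2, then interpolate at 1/2-pel.
```

Unlike the direct method, the interpolation accuracy here does not depend on the resolution ratio, because the intermediate reference frame already matches the encoded frame's resolution.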
- step S804, encoding the to-be-encoded frame according to the current reference frame at the resolution of the to-be-encoded frame to obtain the encoded data corresponding to the input video frame, includes:
- Step S1002 Determine a first vector transformation parameter according to the resolution information of the frame to be encoded and the first resolution information.
- the first resolution information includes the resolution information of the current reference frame or the target motion vector unit resolution information corresponding to the input video frame.
- the first vector transformation parameter is used to transform the position information or motion vector of the obtained motion vector.
- Resolution information is information related to resolution, and may be, for example, the resolution itself or the down-sampling ratio.
- the first vector transformation parameter may be a ratio between the resolution information of the frame to be encoded and the first resolution information. For example, if the downsampling ratio of the current reference frame is 1/3 and the downsampling ratio of the frame to be encoded is 1/6, then the first vector transformation parameter may be 1/3 divided by 1/6, which equals 2.
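Following the example above (1/3 divided by 1/6), the parameter can be computed with exact fractions; this sketch mirrors that example's argument order, and the names are illustrative:

```python
from fractions import Fraction

def first_vector_transformation_parameter(ref_ds_ratio, frame_ds_ratio):
    """Ratio between the two downsampling ratios, as in the example above:
    1/3 (current reference frame) divided by 1/6 (frame to be encoded)
    gives 2."""
    return Fraction(ref_ds_ratio) / Fraction(frame_ds_ratio)
```

The resulting factor is what later scales position information or motion vectors so that both frames sit on the same quantization scale.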
- Step S1004 Obtain a target motion vector corresponding to each coding block in the frame to be coded according to the first vector transformation parameter.
- the obtained motion vector or position information corresponding to the motion vector is transformed according to the first vector transformation parameter to obtain a target motion vector.
- the target motion vector is a motion vector at the target resolution indicated by the target motion vector unit resolution information.
- the target motion vector unit resolution information is the information corresponding to the target resolution corresponding to the unit of the target motion vector, for example, the target resolution itself or the corresponding down-sampling ratio.
- when the first vector transformation parameter is used to transform the position information corresponding to the motion vector, the position information corresponding to the frame to be encoded is brought onto the same quantization scale as the position information of the current reference frame, a second motion vector is obtained according to the transformed position information, and the second motion vector is transformed into the target motion vector at the target resolution.
- step S1002, determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the first resolution information, includes: determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the resolution information of the current reference frame.
- Step S1004, obtaining the target motion vector corresponding to each coding block in the frame to be coded according to the first vector transformation parameter, includes: acquiring first position information corresponding to the current coding block and second position information corresponding to the target reference block of the current coding block; and calculating the target motion vector corresponding to the current coding block according to the first vector transformation parameter, the first position information, and the second position information.
- the current coding block is a coding block in the input video frame that currently requires predictive coding.
- the target reference block is an image block in the reference frame that is used to predictively encode the current encoding block.
- the first position information corresponding to the current coding block may be represented by the coordinates of a pixel.
- the first position information corresponding to the current coding block may include coordinates corresponding to all pixels of the current coding block, and the first position information corresponding to the current coding block may also include coordinates of one or more pixels of the current coding block.
- the second position information corresponding to the target reference block may include coordinates corresponding to all pixels of the target reference block, and the second position information corresponding to the target reference block may also include coordinates of one or more pixels of the target reference block.
- the coordinates of the first pixel point of the current image block may be used as the coordinate values of the current coding block
- the coordinates of the first pixel point of the target reference block may be used as the coordinate values of the target reference block.
- the first position information may be transformed by using the first vector transformation parameter to obtain the corresponding first transformed position information, and the target motion vector may be obtained according to a difference between the first transformed position information and the second position information.
- the second position information may be transformed by using the first vector transformation parameter to obtain the corresponding second transformed position information, and the target motion vector may be obtained according to the difference between the first position information and the second transformed position information.
- the first vector transformation parameter is the ratio obtained by dividing the larger of the resolution information of the frame to be encoded and the resolution information of the current reference frame by the smaller, where the resolution corresponding to the large resolution information is greater than the resolution corresponding to the small resolution information.
- the first vector transformation parameter is used to transform position information of a frame to be encoded and a frame of small resolution information in a current reference frame. For example, if the resolution of the frame to be encoded is 1200 * 1200 pixels and the resolution of the current reference frame is 600 * 600 pixels, the large resolution is 1200 * 1200 pixels and the small resolution is 600 * 600 pixels.
- the first vector transformation parameter may be 2. Assuming the first position information is (6, 8) and the second position information is (3, 3), the second position information is transformed to (6, 6), and the target motion vector is obtained from the difference between (6, 8) and (6, 6).
- in this way, the value of the target motion vector can be reduced, and the data amount of the encoded data can be reduced.
- the first vector transformation parameter is a ratio obtained by dividing the small resolution information by the large resolution information from the resolution information of the frame to be encoded and the resolution information of the current reference frame.
- the first vector transformation parameter is used to transform position information of a frame of large resolution information in a frame to be coded and a current reference frame.
- for example, if the resolution of the frame to be encoded is 1200 * 1200 pixels and the resolution of the current reference frame is 600 * 600 pixels, the first vector transformation parameter may be 1/2. Assuming the first position information is (6, 8) and the second position information is (3, 3), the first position information is transformed to (3, 4), and the target motion vector is obtained from the difference between (3, 4) and (3, 3).
- the position information is transformed by using the first vector transformation parameter, so that the position information corresponding to the frame to be encoded is on the same quantization scale as the position information of the current reference frame, which can reduce the value of the target motion vector and reduce the data amount of the encoded data.
- For example, as shown in FIG. 10B, the resolution of the current reference frame is twice the resolution of the frame to be encoded. The current coding block is composed of pixels (1, 1), (1, 2), (2, 1), and (2, 2), and the corresponding target reference block is composed of pixels (4, 2), (4, 3), (5, 2), and (5, 3). If the position information is not transformed, the target motion vector is (-3, -1); if the corresponding position information in the frame to be encoded is multiplied by 2 before calculating the target motion vector, the target motion vector is (-2, 0), which is smaller in magnitude than (-3, -1).
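The FIG. 10B example can be checked directly. This sketch uses the blocks' top-left pixel coordinates, and the names are illustrative:

```python
def target_motion_vector(block_pos, ref_block_pos, scale):
    """Scale the coding block's position in the frame to be encoded onto
    the reference frame's quantization scale before differencing, so the
    resulting vector is smaller in magnitude."""
    bx, by = block_pos
    rx, ry = ref_block_pos
    return (bx * scale - rx, by * scale - ry)

# Current coding block top-left (1, 1), target reference block top-left
# (4, 2), reference resolution twice the encoded frame's resolution:
# without scaling the vector would be (-3, -1); with scaling it is (-2, 0).
```

Smaller vector components generally entropy-code into fewer bits, which is the stated benefit of transforming the position information first.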
- step S1002, that is, determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the first resolution information, includes: obtaining the target motion vector unit resolution information; and determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the target motion vector unit resolution information.
- Step S1004, that is, obtaining the target motion vector corresponding to each coding block in the frame to be encoded according to the first vector transformation parameter, includes: obtaining a first motion vector according to the displacement between the current coding block and the corresponding target reference block; and obtaining the target motion vector corresponding to the current coding block according to the first vector transformation parameter and the first motion vector.
- the target motion vector unit resolution information refers to information corresponding to a target resolution corresponding to a unit of the target motion vector, and may be, for example, a target resolution or a corresponding down-sampling ratio.
- the target motion vector is calculated using the vector unit at this resolution. Among the frames to be encoded corresponding to the input video sequence, some may have the same resolution as the original resolution of the input video frame, while the resolution of other frames to be encoded is smaller than the original resolution; that is, the frames to be encoded in the video sequence have multiple resolutions, so the resolution corresponding to the unit of the target motion vector needs to be determined.
- the resolution corresponding to the unit of the target motion vector may be set before encoding or obtained according to parameters of the encoding process, and may be specifically set as required.
- the first motion vector is obtained according to a displacement of a current coding block and a corresponding target reference block.
- the target reference block may be obtained from a current reference frame, or may be obtained from a target reference frame obtained after processing the current reference frame.
- the first vector transformation parameter and the first motion vector may be multiplied, and the obtained product may be used as the target motion vector.
- for example, when the resolution corresponding to the target motion vector unit is the original resolution and the downsampling ratio corresponding to the frame to be encoded is 1/2, the first motion vector needs to be transformed, because the unit of the target motion vector is at the original resolution while the first motion vector is calculated at the resolution of the frame to be encoded.
- the first vector transformation parameter is equal to 2; if the first motion vector is (2, 2), then the target motion vector is (4, 4).
- coding can be performed according to the target motion vector. For example, the target motion vector and the prediction residual corresponding to the current coding block can be coded to obtain coded data.
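The product step above, sketched (names illustrative):

```python
def to_target_motion_vector(first_mv, first_vector_param):
    """Multiply the first motion vector by the first vector transformation
    parameter to obtain the target motion vector at the target resolution."""
    return (first_mv[0] * first_vector_param,
            first_mv[1] * first_vector_param)

# Frame to be encoded downsampled by 1/2, target unit at the original
# resolution -> parameter 2; first motion vector (2, 2) -> target (4, 4).
```

When the two resolutions coincide, the parameter is 1 and the target motion vector equals the first motion vector, matching the skip case described below.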
- when the target reference block is obtained from the current reference frame, it can be understood that, for the same coding block, the first motion vector may be equal to the second motion vector.
- the resolution corresponding to the unit of the target motion vector may be the resolution corresponding to the input video frame, that is, the original resolution, or the resolution corresponding to the unit of the target motion vector may be the resolution corresponding to the frame to be encoded.
- the first vector transformation parameter may be a ratio of the resolution information corresponding to the target motion vector unit to the resolution information of the frame to be encoded. For example, assuming that the resolution corresponding to the target motion vector unit is the original resolution, the sampling ratio corresponding to the target motion vector unit is 1, and the sampling ratio of the resolution of the frame to be encoded is 1/2, then the first vector transformation parameter may be 1 divided by 1/2 is equal to 2.
- the target motion vector unit resolution information can be obtained according to the computing capability of the encoding device. For example, when the encoding device can only operate on integers, or when the calculation takes a long time for decimal values, the resolution corresponding to the target motion vector unit can be the original resolution corresponding to the input video frame; when the encoding device can quickly perform decimal operations, the resolution corresponding to the target motion vector unit can be the resolution corresponding to the frame to be encoded.
- when the resolution information of the frame to be encoded is consistent with the target motion vector unit resolution information, the first vector transformation parameter is 1 and the first motion vector is the same as the target motion vector; therefore, step S1002 may be skipped and the first motion vector used directly as the target motion vector. When the resolution information of the frame to be encoded is not consistent with the target motion vector unit resolution information, step S1002 is performed.
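As a sketch of the motion vector transformation described above, assuming for illustration that resolution information is expressed as sampling ratios (the function and parameter names are invented here, not from the source):

```python
def target_motion_vector(first_mv, frame_sampling_ratio,
                         target_unit_sampling_ratio=1.0):
    """Transform a first motion vector, computed at the resolution of the
    frame to be encoded, into the unit of the target motion vector."""
    # First vector transformation parameter: ratio of the resolution
    # information of the target motion vector unit to that of the frame
    # to be encoded (expressed here as sampling ratios).
    scale = target_unit_sampling_ratio / frame_sampling_ratio
    return (first_mv[0] * scale, first_mv[1] * scale)

# Example from the text: 1/2 down-sampling ratio, first motion vector (2, 2),
# target unit at the original resolution, so the parameter is 2.
print(target_motion_vector((2, 2), 1/2))  # (4.0, 4.0)
```

When the two resolutions are consistent the scale is 1 and the first motion vector passes through unchanged, matching the skip of step S1002.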
- the target resolution corresponding to each input video frame is the same.
- the uniformity of the target motion vector can be maintained.
- the computer device may add identification information indicating the unit resolution information of the target motion vector to the encoded data, so that the decoding end can obtain the target resolution corresponding to the target motion vector. If the identification information is not carried, the encoding end and the decoding end may agree on a target resolution corresponding to the target motion vector.
- the identification information is used to indicate resolution information corresponding to the target motion vector.
- the addition position of the identification information in the encoded data may be one or more of group-level header information, sequence-level header information, frame-level header information, and block-level header information, where the block-level header information refers to the header information of the encoded data corresponding to the encoded block.
- the addition position of the identification information in the encoded data can be determined according to the scope of the target motion vector unit resolution information. For example, if the resolutions corresponding to the vector units in the video sequence are consistent, the addition position may be sequence-level header information.
- the resolution information represented by a specific flag value can be set as required. For example, when the resolution corresponding to the target motion vector unit resolution information is the original resolution, the flag bit MV_Scale_Adaptive corresponding to the identification information is 0; when the resolution corresponding to the target motion vector unit resolution information is the resolution corresponding to the frame to be encoded, the flag bit MV_Scale_Adaptive is 1.
- step S804, that is, encoding the frame to be encoded according to the current reference frame to obtain the encoded data corresponding to the input video frame, includes:
- Step S1102 Obtain an initial predicted motion vector corresponding to the current coding block.
- the computer device may predict the motion vector of the current encoding block to obtain a predicted value, calculate the difference between the target motion vector and the predicted value to obtain a motion vector difference, and encode the motion vector difference.
- the initial prediction motion vector is used to predict the motion vector of the current coding block.
- the number of initial predicted motion vectors may be one or more, and may be specifically set according to needs.
- the rules for obtaining the initial predicted motion vector can be set as required. Because the current coding block and its neighboring coding blocks often have spatial correlation, the target motion vector values of one or more neighboring coded blocks corresponding to the current coding block can be used as the initial predicted motion vector.
- for example, the first motion vector values corresponding to the neighboring coded blocks at the upper right and the upper left of the current coding block may be used as initial predicted motion vectors.
- alternatively, the motion vector value of the target reference block corresponding to the current coding block may be used as the initial predicted motion vector.
- Step S1104 Obtain a second vector transformation coefficient according to the current motion vector unit resolution information and the target motion vector unit resolution information corresponding to the initial predicted motion vector.
- the current motion vector unit resolution information refers to information of a current resolution corresponding to a unit of an initial predicted motion vector, and may be, for example, a current resolution or a down-sampling ratio.
- the resolution corresponding to the unit of the initial predicted motion vector means that the initial predicted motion vector is calculated with the vector unit at the current resolution, that is, it is a motion vector at the current resolution.
- the current motion vector unit resolution information corresponding to the initial predicted motion vector and the target motion vector unit resolution information are used to obtain the second vector transformation coefficient.
- the second vector transformation parameter is used to transform the initial predicted motion vector into a motion vector at a target resolution.
- the second vector transformation parameter may be the ratio of the resolution information corresponding to the target motion vector unit to the resolution information of the current motion vector unit. For example, assuming that the resolution corresponding to the target motion vector unit is 200*200 pixels and the resolution of the current motion vector unit is 100*100 pixels, the second vector transformation parameter is 2.
- Step S1106 Obtain a target predicted motion vector corresponding to the current coding block according to the initial predicted motion vector and the second vector transformation coefficient.
- after the computer device obtains the second vector transformation parameter, it operates on the initial predicted motion vector according to the second vector transformation coefficient to obtain the target predicted motion vector.
- the target predicted motion vector is a predicted motion vector at a target resolution.
- when there is one initial predicted motion vector, the product of the initial predicted motion vector and the second vector transformation coefficient may be used as the target predicted motion vector.
- when there are multiple initial predicted motion vectors, a calculation result may be obtained by taking the minimum value, the average value, or the median value of the initial predicted motion vectors, and the target predicted motion vector may be obtained according to the calculation result and the second vector transformation coefficient.
- the calculation result may be one or more of a minimum value, an average value, and a median value in the initial prediction motion vector. It can be understood that the algorithm for obtaining the target prediction motion vector according to the initial prediction motion vector and the second vector transformation coefficient can be customized, and the same target prediction motion vector can be calculated at the decoding end by using a consistent customized algorithm.
- Step S1108 obtaining a motion vector difference according to the target motion vector and the target predicted motion vector.
- the difference between the target motion vector and the target predicted motion vector is used as the motion vector difference, and encoding is performed according to the motion vector difference to obtain the encoded data, thereby reducing the data amount of the encoded data.
- the target predicted motion vector at the target resolution is obtained by transforming the initial predicted motion vector, so that the units of the target predicted motion vector and the target motion vector are in matched quantization scales; the motion vector difference obtained is therefore small, which reduces the data amount of the encoded data.
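The computation of the motion vector difference with the second vector transformation coefficient might be sketched as follows (the function name and the numeric values are illustrative assumptions, not from the source):

```python
def motion_vector_difference(target_mv, initial_pred_mv,
                             current_unit_ratio, target_unit_ratio=1.0):
    # Second vector transformation coefficient: ratio of the resolution
    # information of the target motion vector unit to that of the unit in
    # which the initial predicted motion vector was computed.
    scale = target_unit_ratio / current_unit_ratio
    target_pred_mv = (initial_pred_mv[0] * scale, initial_pred_mv[1] * scale)
    # MVD = target motion vector - target predicted motion vector.
    return (target_mv[0] - target_pred_mv[0], target_mv[1] - target_pred_mv[1])

# Hypothetical values: target MV (12, 4) in original-resolution units,
# initial predicted MV (2, 1) computed at a 1/4 down-sampling ratio.
print(motion_vector_difference((12, 4), (2, 1), current_unit_ratio=1/4))  # (4.0, 0.0)
```

Because both vectors are brought to the same quantization scale before subtracting, the difference stays small even when the frames involved were sampled at different ratios.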
- obtaining the processing method corresponding to the input video frame in step S702 includes: calculating the proportion of target-prediction-type coding blocks in the forward-encoded video frame corresponding to the input video frame, and determining the processing method corresponding to the input video frame according to the proportion.
- the prediction type coding block is a coding block corresponding to a frame prediction type.
- the ratio of the target prediction type may be one or two of a ratio corresponding to an intra-coded block and a ratio corresponding to an inter-coded block.
- the proportion of target-prediction-type coding blocks in the forward-encoded video frame corresponding to the input video frame may be the ratio of target-prediction-type coding blocks to coding blocks of other prediction types, or the ratio of target-prediction-type coding blocks to the total number of coding blocks.
- Specific settings can be made as required. For example, a computer device may obtain a first number of intra-coded blocks in a forward-coded video frame and a second number of inter-coded blocks in a forward-coded video frame.
- a third number may be obtained from the first number and the second number; the proportion of intra-coded blocks may be calculated according to the first number and the third number, and the proportion of inter-coded blocks may likewise be calculated according to the second number and the third number.
- the proportion corresponding to each type of coding block can be calculated for each forward-encoded video frame, a total proportion can be obtained by weighting the individual proportions, and the target processing method corresponding to the input video frame is then determined according to the total proportion and a preset threshold.
- the weight corresponding to the forward video frame may have a negative correlation with the encoding distance between the forward encoded video frame and the input video frame.
- the proportion of intra-coded blocks in the forward-encoded video frame may be calculated; when the proportion is greater than the target threshold, the target processing method corresponding to the input video frame is determined to be the down-sampling processing method, and otherwise the full-resolution processing method.
- when the proportion of intra-coded blocks is large, it indicates that the video scene is relatively complex or that the correlation between video frames is low, so the prediction residuals obtained will be relatively large; encoding with the down-sampling processing method is therefore preferred, to reduce the amount of encoded data.
- the target threshold can be determined according to the processing method of the reference frame corresponding to the input video frame.
- when the processing method of the reference frame corresponding to the input video frame is the down-sampling processing method, a first preset threshold T1 is acquired and used as the target threshold.
- the processing method of the reference frame corresponding to the input video frame is a full resolution processing method, a second preset threshold value T2 is obtained, and the second preset threshold value T2 is used as a target threshold value.
- the processing method corresponding to the input video frame is then determined based on the target threshold and the proportion of intra-coded blocks in the forward-encoded video frame; when that proportion is greater than the target threshold, the processing method corresponding to the input video frame is determined to be the down-sampling processing method.
- the second preset threshold is greater than the first preset threshold, so that when the processing method corresponding to the current reference frame is the full-resolution processing method, the input video frame is more inclined to use the full-resolution processing method, and when the processing method corresponding to the current reference frame is the down-sampling processing method, the input video frame is more inclined to use the down-sampling processing method.
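The threshold-based decision described above could be sketched as follows (the function name, the mode labels, and the threshold values T1 = 0.5 and T2 = 0.7 are illustrative assumptions):

```python
def choose_processing_mode(intra_ratio, ref_frame_mode, t1=0.5, t2=0.7):
    """Pick the processing method of an input video frame from the
    proportion of intra-coded blocks in its forward-encoded video frame."""
    # A down-sampled reference frame selects the lower threshold T1,
    # biasing the decision toward down-sampling (T1 < T2).
    target_threshold = t1 if ref_frame_mode == "downsample" else t2
    return "downsample" if intra_ratio > target_threshold else "full"

print(choose_processing_mode(0.6, "downsample"))  # downsample
print(choose_processing_mode(0.6, "full"))        # full
```

The same intra-block proportion (0.6) yields different decisions depending on the reference frame's processing mode, which is exactly the bias the two thresholds are meant to create.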
- the video sequence A includes three input video frames: a, b, and c.
- the video encoding method is described.
- the target video sequence coding mode is a mixed resolution coding mode.
- the processing decision unit in the mixed-resolution encoding framework makes a decision on the first input video frame a: the processing method is the down-sampling method with a down-sampling ratio of 1/2; a is down-sampled to obtain the down-sampled video frame a1; a1 is intra-coded to obtain the encoded data d1 corresponding to a, and the encoded data d1 is reconstructed to obtain the corresponding reconstructed video frame a2.
- the processing decision unit in the mixed-resolution encoding framework makes a decision on the second input video frame b: the processing method is the down-sampling method, and the sampling ratio is 1/4.
- b is down-sampled to obtain the down-sampled video frame b1, and b1 is encoded to obtain the encoded data d2 corresponding to b; the sampling ratio information corresponding to the down-sampling ratio and the processing method information corresponding to the processing method are carried in the encoded data.
- the reconstructed video frame a2 is processed according to the resolution of b1 to obtain the target reference frame a3.
- the first motion vector MV1 between the current coding block in b1 and the target reference block in the target reference frame is calculated, and the prediction residual is p1.
- the target resolution is the original resolution, and since b1 is down-sampled at a ratio of 1/4, the target motion vector is 4MV1.
- the initial prediction vector is calculated as MV2, at the resolution corresponding to the 1/4 down-sampling ratio; therefore, the target prediction vector is 4MV2, and the motion vector difference MVD1 corresponding to the current coding block equals 4MV1-4MV2. MVD1 and p1 are transformed, quantized, and entropy-coded to obtain the encoded data d2.
- the processing decision unit in the mixed-resolution encoding framework makes a decision on the third input video frame c: the processing method is the down-sampling method, and the sampling ratio is 1/8.
- c is down-sampled to obtain the down-sampled video frame c1.
- c1 is encoded to obtain the encoded data d3 corresponding to c.
- the first motion vector MV3 between the current coding block in c1 and the target reference block in the target reference frame is calculated, and the prediction residual is p2.
- the obtained target resolution is the original resolution, so the target motion vector is 8MV3.
- the initial prediction vector is MV4, and the initial prediction vector is calculated at a resolution corresponding to 1/4 downsampling ratio. Therefore, the target prediction vector is 4MV4, so the motion vector difference MVD2 corresponding to the current coding block is equal to 8MV3-4MV4.
- MVD2 and p2 are transformed, quantized, and entropy coded to obtain coded data d3.
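The motion vector arithmetic of this worked example for frames b and c can be checked with a short script (MV1 through MV4 are given hypothetical values, since the source leaves them unspecified):

```python
# Hypothetical motion vectors; the source gives only the scaling factors.
MV1, MV2 = (3, 1), (2, 1)   # frame b: first MV and initial predicted MV
MV3, MV4 = (1, 2), (1, 1)   # frame c: first MV and initial predicted MV

def scale(mv, k):
    # Scale a motion vector into original-resolution units.
    return (mv[0] * k, mv[1] * k)

# Frame b is encoded at a 1/4 ratio and its prediction is also at 1/4,
# so MVD1 = 4*MV1 - 4*MV2.
MVD1 = tuple(t - p for t, p in zip(scale(MV1, 4), scale(MV2, 4)))
# Frame c is encoded at a 1/8 ratio while its prediction is at 1/4,
# so MVD2 = 8*MV3 - 4*MV4.
MVD2 = tuple(t - p for t, p in zip(scale(MV3, 8), scale(MV4, 4)))
print(MVD1, MVD2)  # (4, 0) (4, 12)
```

Note that for frame c the two scaling factors differ (8 and 4), because the prediction vector comes from a frame sampled at a different ratio than c1 itself.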
- the encoded data corresponding to the video sequence carries a flag bit describing that the target video sequence encoding mode is the mixed resolution encoding mode.
- a video decoding method is proposed.
- this method is mainly described as applied to the terminal 110 or the server 120 in FIG. 1, and can include the following steps:
- Step S1202 Obtain the encoded data corresponding to the video sequence to be decoded.
- the video sequence to be decoded is a video sequence that needs to be decoded.
- a video sequence to be decoded may include multiple video frames to be decoded.
- the video sequence to be decoded may be a video sequence obtained in real time, or a video sequence to be decoded stored in advance. It can be understood that at the encoding end, the encoded data corresponding to the input video sequence is obtained by the encoding. When the encoded data is transmitted to the decoding end, the encoded data received by the decoding end is the encoded data corresponding to the video sequence to be decoded.
- Step S1204 Obtain a target video sequence decoding mode corresponding to the video sequence to be decoded.
- the target video sequence decoding mode includes a constant resolution decoding mode or a mixed resolution decoding mode.
- the computer device can parse the encoded video data to obtain the target video sequence encoding mode information, and obtain the target video sequence decoding mode according to the target video sequence encoding mode information.
- the target video sequence coding mode corresponding to the target video sequence coding mode information is a constant resolution coding mode
- the corresponding target video sequence decoding mode is a constant resolution decoding mode.
- in the constant resolution decoding mode, the resolution of each video frame to be decoded in the video sequence is consistent.
- when the target video sequence coding mode is the mixed resolution coding mode, the corresponding target video sequence decoding mode is the mixed resolution decoding mode; that is, the video frames to be decoded in the video sequence to be decoded have different resolutions.
- the video decoding framework is shown in FIG. 13.
- the video decoding framework includes a constant-resolution decoding framework and a mixed-resolution decoding framework.
- the mixed-resolution decoding framework may correspond to the decoding framework in FIG. 3.
- the video sequence decoding mode is determined at the video sequence decoding mode acquisition module.
- the target video sequence decoding mode is a mixed resolution decoding mode
- the mixed resolution decoding framework is used for decoding.
- the sequence decoding mode is the constant resolution decoding mode
- constant resolution decoding is performed using the constant-resolution decoding framework in FIG. 13.
- the constant resolution decoding framework may be a HEVC decoding framework or an H.265 decoding framework.
- the decoding framework corresponding to the video frame to be decoded may be determined from the header information of the encoded data. Specifically, the decoding end may obtain, from the sequence-level header information corresponding to the encoded data, the encoding framework used when each input video frame in the input video frame sequence corresponding to the current encoded data was encoded, so as to determine the matching decoding framework for decoding the video frames to be decoded.
- for example, the Sequence_Mix_Flag flag in the sequence-level header information of the encoded data may be used to determine the adopted encoding framework; when Sequence_Mix_Flag is 0, it indicates that each input video frame in the input video frame sequence was encoded with the constant-resolution encoding framework.
- the decoding end can then decode the encoded data using the constant-resolution decoding framework to obtain the reconstructed video frame corresponding to the video frame to be decoded.
- when Sequence_Mix_Flag is 1, it indicates that each input video frame in the input video frame sequence was encoded with the mixed-resolution encoding framework, and the decoding end can use the mixed adaptive-resolution decoding framework to decode the encoded data to obtain a reconstructed video frame sequence.
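A minimal sketch of dispatching on the Sequence_Mix_Flag flag (the return labels are placeholders; a real decoder would select the corresponding framework object):

```python
def select_decoding_framework(sequence_mix_flag):
    """Choose a decoding framework from the Sequence_Mix_Flag carried in
    the sequence-level header information of the encoded data."""
    if sequence_mix_flag == 1:
        # Each input video frame was encoded with the mixed-resolution
        # encoding framework.
        return "mixed_resolution"
    # A flag of 0 indicates the constant-resolution encoding framework.
    return "constant_resolution"

print(select_decoding_framework(1))  # mixed_resolution
```

Because the flag lives in the sequence-level header, the choice is made once per video sequence rather than per frame.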
- acquiring the target video sequence decoding mode corresponding to the video sequence to be decoded may include: acquiring current environment information, the current environment information including at least one of current encoding environment information and current decoding environment information; and obtaining, from candidate video sequence decoding modes according to the current environment information, the target video sequence decoding mode corresponding to the video sequence to be decoded.
- the current environment information includes current application scenario information.
- the target video sequence decoding mode is a mixed resolution decoding mode.
- Step S1206 Decode the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
- the target video sequence decoding mode is a constant resolution decoding mode
- constant resolution decoding is performed on each video frame to be decoded of the video sequence to be decoded.
- the video sequence decoding mode is a mixed resolution decoding mode
- decoding is performed according to the resolution information of each video frame to be decoded in the video sequence to be decoded; that is, the video frames to be decoded may have different resolutions, and each is decoded according to its own resolution information.
- when video decoding is performed, the encoded data corresponding to the video sequence to be decoded is obtained, and the target video sequence decoding mode corresponding to the video sequence to be decoded is obtained.
- the target video sequence decoding mode includes the constant resolution decoding mode or the mixed resolution decoding mode; the encoded data corresponding to the video sequence to be decoded is decoded according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence. Therefore, decoding can be performed flexibly according to the target video sequence decoding mode corresponding to the video sequence to be decoded, and accurate decoded video frames can be obtained.
- step S1206, that is, decoding the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence, includes:
- Step S1402 When the target video sequence decoding mode is a mixed resolution decoding mode, obtain resolution information corresponding to the video frame to be decoded.
- the video frame to be decoded is a video frame in a video sequence to be decoded.
- the resolution information is information related to the resolution, and may be, for example, the resolution itself or a down-sampling ratio.
- the resolution information corresponding to the video frame to be decoded may be carried in the encoded data, or may be calculated by the decoding device.
- the encoded data may carry resolution information corresponding to the video frame to be decoded, for example, may carry the resolution or down-sampling ratio corresponding to the video frame to be decoded.
- when the processing mode information is carried in the encoded data, the computer device obtains the processing mode information from the encoded data and obtains the resolution information corresponding to the video frame to be decoded according to the processing mode information.
- the encoded data can carry the processing mode information.
- for example, when the corresponding processing mode is the down-sampling processing mode, and the encoding and decoding standards agree that the down-sampling ratio is 1/2 (or the encoded data carries the corresponding down-sampling ratio), the resolution information is a 1/2 down-sampling ratio.
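Deriving the resolution information from the processing mode information might look like the following sketch (function and parameter names are assumptions; the 1/2 default mirrors the agreed ratio in the example above):

```python
def resolution_info(processing_mode, carried_ratio=None, agreed_ratio=1/2):
    """Return the down-sampling ratio of the video frame to be decoded.

    If the encoded data carries a ratio, use it; otherwise fall back to the
    ratio agreed between the encoding and decoding standards (1/2 in the
    example from the text). Full-resolution frames have a ratio of 1."""
    if processing_mode == "full":
        return 1.0
    if carried_ratio is not None:
        return carried_ratio
    return agreed_ratio

print(resolution_info("downsample"))  # 0.5
```

The fallback path is what makes it possible to omit the ratio from the encoded data entirely when both ends agree on it in advance.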
- Step S1404 Decode the encoded data according to the resolution information corresponding to the video frame to be decoded to obtain a reconstructed video frame corresponding to the video frame to be decoded.
- the reconstructed video frame is a video frame obtained by decoding and reconstruction. It can be understood that the resolution information corresponding to the reconstructed video frame corresponds to the resolution information of the frame to be encoded in the encoding process. If there is no loss of image information during the encoding process, the reconstructed video frame is the same as the frame to be encoded. If there is a loss of image information during the encoding process, the difference between the reconstructed video frame and the frame to be encoded corresponds to the loss value. Decoding the encoded data is performed according to the resolution information corresponding to the video frame to be decoded.
- the decoding may include at least one of prediction, inverse transform, inverse quantization, and entropy decoding, which is specifically determined according to the encoding process.
- the computer device uses the resolution information of the video frame to be decoded to process at least one of the current reference frame, the position information corresponding to each block to be decoded of the video frame to be decoded, the position information corresponding to each reference block of the current reference frame, and the motion vector, where the processing method matches the processing method used by the encoding end when encoding.
- the current reference frame corresponding to the video frame to be decoded can be obtained, and the current reference frame is processed according to the resolution information corresponding to the video frame to be decoded to obtain the target reference frame.
- the target reference block is obtained according to the motion vector information carried in the encoded data; the prediction value corresponding to the block to be decoded is obtained according to the target reference block, and a reconstructed video frame is obtained according to the prediction residual in the encoded data and the prediction value.
- when the encoding end transforms the position information, the corresponding position information obtained in the decoding process also needs to be transformed correspondingly, to maintain the consistency of the target reference blocks obtained by the encoding end and the decoding end.
- when the motion vector information carried in the encoded data is the target motion vector, the target motion vector may be transformed according to the target motion vector unit resolution information and the resolution information corresponding to the video frame to be decoded, to obtain the first motion vector under the resolution information corresponding to the video frame to be decoded; the target reference block corresponding to the block to be decoded is then obtained according to the first motion vector.
- when the motion vector information carried in the encoded data is a motion vector difference value, the initial predicted motion vector corresponding to the current block to be decoded is obtained; the motion vector difference value and the initial predicted motion vector corresponding to each block to be decoded are processed at the same resolution to obtain a first motion vector that corresponds to the block to be decoded and is at the resolution of the video frame to be decoded, and the target reference block corresponding to the block to be decoded is obtained according to the first motion vector.
- the computer device transforms both the motion vector difference value and the initial predicted motion vector into corresponding motion vectors at the same resolution.
- the initial predicted motion vector can be transformed into the target predicted motion vector at the target resolution, the target motion vector can be obtained according to the target predicted motion vector and the motion vector difference value, and the target motion vector can then be transformed to the resolution of the video frame to be decoded to obtain the first motion vector.
- alternatively, the initial predicted motion vector can be transformed into a predicted motion vector at the resolution of the video frame to be decoded, and the motion vector difference can be transformed into a motion vector difference at the resolution of the video frame to be decoded; the first motion vector is then obtained from the motion vector difference and the predicted motion vector at the resolution of the video frame to be decoded.
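The first of the two decoder-side routes, reconstructing the first motion vector via the target resolution, might be sketched as follows (names and the sampling-ratio representation of resolution information are assumptions):

```python
def first_motion_vector(mvd, initial_pred_mv,
                        pred_unit_ratio, target_unit_ratio, frame_ratio):
    # Transform the initial predicted MV into the target resolution.
    s_pred = target_unit_ratio / pred_unit_ratio
    target_pred = (initial_pred_mv[0] * s_pred, initial_pred_mv[1] * s_pred)
    # Target MV = target predicted MV + motion vector difference.
    target_mv = (target_pred[0] + mvd[0], target_pred[1] + mvd[1])
    # Transform the target MV down to the resolution of the frame to decode.
    s_frame = frame_ratio / target_unit_ratio
    return (target_mv[0] * s_frame, target_mv[1] * s_frame)

# Inverting an encoder-side example: MVD (4, 0), predicted MV (2, 1) at a
# 1/4 ratio, target unit at the original resolution, frame decoded at 1/4.
print(first_motion_vector((4, 0), (2, 1), 1/4, 1.0, 1/4))  # (3.0, 1.0)
```

This is the inverse of the encoder-side computation: adding the difference back to the scaled prediction at the target resolution recovers the target motion vector, and the final scaling brings it to the decoded frame's resolution.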
- Step S1406 the reconstructed video frame is processed according to the resolution information corresponding to the video frame to be decoded to obtain a corresponding decoded video frame.
- the processing of the reconstructed video frame may be a sampling process, for example, an upsampling process.
- the method for processing the reconstructed video frame may correspond to the method for processing the input video frame in the encoding.
- for example, when the processing method of the input video frame is the down-sampling processing method and the resolution information is a 1/2 down-sampling ratio, the reconstructed video frame is up-sampled, and the up-sampling ratio may be 2.
- the decoding end may also obtain the used downsampling ratio information or the downsampling method from the header information.
- the decoder can obtain the down-sampling ratio information or down-sampling method information corresponding to the current encoded data from any of the sequence-level header information, the group-level header information, and the frame-level header information.
- the video decoding method obtains the encoded data corresponding to the video frame to be decoded, obtains the resolution information corresponding to the video frame to be decoded, decodes the encoded data according to that resolution information to obtain the reconstructed video frame corresponding to the video frame to be decoded, and processes the reconstructed video frame according to that resolution information to obtain the corresponding decoded video frame. Therefore, decoding can be performed flexibly at the resolution of the video frame to be decoded according to its resolution information, and an accurate decoded video frame is obtained.
- the reconstructed video frames corresponding to the video frames to be decoded of the video sequence to be decoded are all processed to the same resolution, for example, the reconstructed video frames are processed to decoded video frames with the same original resolution as the input video frames.
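Computing the decoded frame size when up-sampling a reconstructed frame back to the original resolution could look like this sketch (it assumes the down-sampling ratio applies to each dimension, which the source does not specify):

```python
def upsample_to_original(reconstructed_size, downsampling_ratio):
    # The up-sampling ratio is the reciprocal of the down-sampling ratio
    # (e.g. a ratio of 1/2 gives an up-sampling ratio of 2).
    up = 1 / downsampling_ratio
    width, height = reconstructed_size
    return (round(width * up), round(height * up))

print(upsample_to_original((960, 540), 1/2))  # (1920, 1080)
```

Applying this to every reconstructed frame of the sequence yields decoded video frames at one common resolution, regardless of the per-frame ratios chosen during encoding.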
- step S1404, that is, decoding the encoded data according to the resolution information corresponding to the video frame to be decoded to obtain the reconstructed video frame corresponding to the video frame to be decoded, includes:
- Step S1502 Acquire a current reference frame corresponding to the video frame to be decoded.
- the number of reference frames corresponding to the video frames to be decoded may be one or more.
- for example, the corresponding reference frame may be one, or the corresponding reference frames may be two.
- the reference frame corresponding to the video frame to be decoded may be obtained according to a reference relationship, and the reference relationship may differ according to the video codec standard.
- for example, when the video frame to be decoded is a B frame in a video group, its corresponding reference frames may be the I frame of the video group and the fourth frame of the video group.
- the current reference frame corresponding to the video frame to be decoded may be the previous one or two of its forward encoded frames. It can be understood that the current reference frame here is consistent with the current reference frame in the encoding process.
- acquiring the current reference frame corresponding to the video frame to be decoded includes: acquiring a second reference rule, the second reference rule including a resolution size relationship between the video frame to be decoded and the current reference frame; and acquiring the current reference frame corresponding to the video frame to be decoded according to the second reference rule.
- the second reference rule determines the restriction on the relationship between the resolution of the video frame to be decoded and that of the current reference frame. It can be understood that, to ensure that the current reference frame obtained during the encoding process is consistent with the reference frame obtained during the decoding process, the first reference rule is consistent with the second reference rule.
- the first reference rule and the second reference rule may be preset in a codec standard. Alternatively, when encoding, the first reference rule may be selected according to the application scenario of the encoding, real-time requirements, etc., and the reference rule information is carried in the encoded data, and the decoder obtains the second reference rule according to the reference rule information in the encoded data.
- the resolution size relationship includes at least one of the resolution of the video frame to be decoded being the same as the resolution of the reference frame, and the resolution of the video frame to be decoded being different from the resolution of the reference frame.
- the second reference rule may further include a reference rule for a processing manner of the resolution of the video frame to be decoded and the current reference frame.
- a reference rule for the processing mode may include one or both of the following: a video frame to be decoded in the full-resolution processing mode may reference a current reference frame in the full-resolution processing mode, and a video frame to be decoded in the downsampling processing mode may reference a current reference frame in the downsampling processing mode.
- the second reference rule may further include one or both of the following: the resolution of the video frame to be decoded is greater than the resolution of the current reference frame, and the resolution of the video frame to be decoded is smaller than the resolution of the current reference frame. Therefore, the second reference rule may include one or more of the following: an original-resolution video frame to be decoded may reference a downsampled-resolution reference frame, a downsampled-resolution video frame to be decoded may reference an original-resolution reference frame, an original-resolution video frame to be decoded may reference an original-resolution reference frame, and a downsampled-resolution video frame to be decoded may reference a downsampled-resolution reference frame.
- the original-resolution video frame to be decoded means that the resolution of the video frame to be decoded is the same as the resolution of the corresponding input video frame, and the original-resolution reference frame means that the resolution of the reference frame is the same as the resolution of the corresponding input video frame.
- the down-sampling resolution video frame to be decoded means that the resolution information corresponding to the video frame to be decoded is down-sampling.
- the down-sampling resolution reference frame means that the resolution information corresponding to the reference frame is down-sampling.
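A hedged sketch of such a second reference rule follows. The rule table is one illustrative configuration (all four pairings allowed, matching the broadest case above); an actual codec would preset the rule, or carry reference rule information in the encoded data, so that it matches the first reference rule used by the encoder.

```python
# Illustrative rule table: which reference-frame resolution modes a frame to
# be decoded may use, keyed by the frame's own resolution mode (assumption).
ALLOWED_REFERENCES = {
    "original": {"original", "downsample"},
    "downsample": {"original", "downsample"},
}

def reference_allowed(frame_resolution_mode, reference_resolution_mode,
                      rule=ALLOWED_REFERENCES):
    """Return True when the rule permits this frame/reference pairing."""
    return reference_resolution_mode in rule.get(frame_resolution_mode, set())
```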
- Step S1504 Decode the encoded data according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain a reconstructed video frame corresponding to the video frame to be decoded.
- the computer device may obtain a reference block corresponding to the to-be-decoded block of the video frame to be decoded from the current reference frame, and decode the to-be-decoded block according to the reference block.
- the current reference frame may also be processed according to the resolution information of the video frame to be decoded to obtain the corresponding target reference frame; the target reference block corresponding to the block to be decoded is obtained from the target reference frame, the block to be decoded is decoded according to the target reference block, and the reconstructed video frame corresponding to the video frame to be decoded is obtained.
- step S1504, that is, decoding the encoded data according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain a reconstructed video frame corresponding to the video frame to be decoded, includes: performing sampling processing on the current reference frame according to the resolution information corresponding to the video frame to be decoded to obtain a corresponding target reference frame; and decoding the video frame to be decoded according to the target reference frame to obtain a reconstructed video frame corresponding to the video frame to be decoded.
- the target reference block is obtained from the target reference frame according to the motion vector information carried by the encoded data
- the predicted value corresponding to the block to be decoded is obtained according to the target reference block
- the reconstructed video frame is obtained according to the prediction residual and the predicted value in the encoded data.
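The final reconstruction step above can be sketched as follows: the predicted value taken from the target reference block is added to the prediction residual carried in the encoded data. Clipping to an assumed 8-bit sample range is included, since real decoders clamp reconstructed samples to the valid bit depth.

```python
def reconstruct_block(predicted_block, residual_block):
    """Add residuals to predictions element-wise and clip to [0, 255]."""
    return [
        [max(0, min(255, p + r)) for p, r in zip(pred_row, res_row)]
        for pred_row, res_row in zip(predicted_block, residual_block)
    ]
```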
- sampling the current reference frame according to the resolution information corresponding to the video frame to be decoded to obtain the corresponding target reference frame includes: performing sampling processing on the current reference frame based on the resolution information of the video frame to be decoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame.
- sampling the current reference frame according to the resolution information and motion estimation pixel accuracy of the video frame to be decoded to obtain the corresponding target reference frame includes: calculating the pixel interpolation accuracy according to the resolution information and motion estimation pixel accuracy of the video frame to be decoded; and directly performing sub-pixel interpolation processing on the current reference frame according to the pixel interpolation accuracy to obtain the corresponding target reference frame.
- sampling the current reference frame according to the resolution information of the video frame to be decoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame includes: performing sampling processing on the current reference frame according to the resolution information of the video frame to be decoded to obtain an intermediate reference frame; and performing sub-pixel interpolation processing on the intermediate reference frame according to the motion estimation pixel accuracy to obtain the target reference frame.
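The first of the two paths above derives a pixel interpolation accuracy from the resolution information (a down-sampling proportion such as 1/2) and the motion estimation pixel accuracy (such as 1/4-pel), then interpolates the current reference frame once at that accuracy. The multiplication rule used in this sketch is an assumption for illustration, not the patent's normative derivation.

```python
from fractions import Fraction

def pixel_interpolation_accuracy(downsampling_proportion, me_pixel_accuracy):
    """Combine the down-sampling proportion with the motion estimation pixel
    accuracy (assumed rule): e.g. a 1/2-resolution frame with 1/4-pel motion
    estimation yields 1/8-pel interpolation of the full-resolution reference."""
    return Fraction(downsampling_proportion) * Fraction(me_pixel_accuracy)
```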
- the resolution of the video frame to be decoded is consistent with that of the frame to be encoded, and the obtained target reference frames are also consistent. Therefore, the method of sampling the current reference frame according to the resolution information corresponding to the video frame to be decoded to obtain the corresponding target reference frame is the same as the method at the encoding end of sampling the current reference frame according to the resolution information of the frame to be encoded to obtain the corresponding target reference frame, and details are not described again in this embodiment of the present application.
- the decoding end may also obtain the sampling mode information corresponding to the video frame to be decoded from the header information of the encoded data.
- the sub-pixel interpolation mode information corresponding to the video frame to be decoded can be obtained from any of sequence-level header information, group-level header information, and frame-level header information. For example, when the flag bit Pixel_Sourse_Interpolation used to determine the sampling mode in the frame-level header information of the encoded data is 0, it means that the current reference frame corresponding to the input video frame is directly subjected to sub-pixel interpolation processing; when Pixel_Sourse_Interpolation is 1, it means that the current reference frame corresponding to the input video frame is subjected to sampling processing and then sub-pixel interpolation processing.
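Interpreting the frame-level flag bit described above can be sketched as follows. The flag name Pixel_Sourse_Interpolation is taken from the text; the returned labels are illustrative.

```python
def sampling_mode_from_flag(pixel_sourse_interpolation):
    """Map the Pixel_Sourse_Interpolation flag bit to a sampling mode label."""
    if pixel_sourse_interpolation == 0:
        return "direct_subpixel_interpolation"
    if pixel_sourse_interpolation == 1:
        return "sample_then_subpixel_interpolation"
    raise ValueError("unexpected Pixel_Sourse_Interpolation value")
```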
- the decoder can perform the sub-pixel interpolation processing on the current reference frame in the same manner as the sub-pixel interpolation indicated by the flag bit in the encoded data to obtain the target reference frame, so that the encoded data can be decoded according to the target reference frame to obtain a reconstructed video frame.
- step S1504 that is, decoding the encoded data according to the resolution information corresponding to the video frame to be decoded and the current reference frame, to obtain a reconstructed video frame corresponding to the video frame to be decoded includes:
- Step S1602 Determine a third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the first resolution information.
- the first resolution information includes target unit motion vector resolution information or resolution information of a current reference frame.
- the third vector transformation parameter is used to transform the obtained position information of the motion vector or the motion vector.
- the third vector transformation parameter may be a ratio between the first resolution information and the resolution information of the video frame to be decoded, and the third vector transformation parameter corresponds to the first vector transformation parameter.
- the computer device may transform the target motion vector into the corresponding motion vector at the resolution corresponding to the video frame to be decoded, and the third vector transformation parameter may be the inverse of the first vector transformation parameter.
- when the third vector transformation parameter is used to transform the position information corresponding to the motion vector and the first vector transformation parameter at the encoding end is used to transform the first position information, the position value calculated according to the target motion vector and the first position information is the position value at the encoding end after the second position information is transformed according to the first vector transformation parameter, so the third vector transformation parameter is the inverse of the first vector transformation parameter.
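The inverse relationship above can be sketched directly: when the encoder's first vector transformation parameter scales motion vectors or position information, the decoder's third vector transformation parameter is its reciprocal, so that the decoder recovers values at the resolution of the video frame to be decoded.

```python
from fractions import Fraction

def third_vector_transform_parameter(first_vector_transform_parameter):
    """Reciprocal of the encoder's first vector transformation parameter."""
    return 1 / Fraction(first_vector_transform_parameter)
```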
- Step S1604 Obtain a target motion vector corresponding to each block to be decoded in the video frame to be decoded according to the encoded data.
- the computer device reads the target motion vector from the encoded data.
- the computer device can calculate and obtain the target prediction motion vector, and obtain the target motion vector according to the motion vector difference and the target prediction motion vector.
- Step S1606 Obtain a target reference block corresponding to each to-be-decoded block in the to-be-decoded video frame according to the third vector transformation parameter and the target motion vector.
- the computer device transforms the obtained motion vector or the position information corresponding to the motion vector according to the third vector transformation parameter to obtain the position information corresponding to the target reference block, thereby obtaining the target reference block.
- Step S1608 Decode the encoded data according to the target reference block to obtain a reconstructed video frame corresponding to the video frame to be decoded.
- after obtaining the target reference block, the computer device obtains the pixel value of each image block of the reconstructed video frame according to the pixel value of the target reference block and the prediction residual of the block to be decoded carried in the encoded data, to obtain the reconstructed video frame.
- step S1602, that is, determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the first resolution information, includes: determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the resolution information of the current reference frame.
- step S1606, that is, obtaining the target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector, includes: obtaining first position information corresponding to the current block to be decoded; and obtaining the target reference block corresponding to the current block to be decoded according to the first position information, the third vector transformation parameter, and the target motion vector.
- the computer device may obtain the second position information corresponding to the target reference block according to the first position information, the third vector transformation parameter, and the target motion vector, and obtain the target reference block according to the second position information. Due to the correspondence between encoding and decoding, if the first vector transformation parameter at the encoding end is used to transform the first position information, then, because the position information of the block to be decoded is the same as that of the encoding block, the third vector transformation parameter is the same as the first vector transformation parameter. If the first vector transformation parameter at the encoding end is used to transform the second position information, then the position value calculated according to the target motion vector and the first position information is the position value at the encoding end after the second position information is transformed according to the first vector transformation parameter, so the third vector transformation parameter is the inverse of the first vector transformation parameter.
- the resolution of the video frame to be decoded is 1200 * 1200 pixels, and the resolution of the current reference frame is 600 * 600 pixels.
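For the example above, a sketch of the position transform: with the video frame to be decoded at 1200 x 1200 pixels and the current reference frame at 600 x 600 pixels, a third vector transformation parameter of 600/1200 = 1/2 (an assumed convention) maps a position in the frame to be decoded, offset by the target motion vector, into reference-frame coordinates to locate the target reference block.

```python
def target_reference_position(first_position, target_motion_vector, third_param):
    """Second position information of the target reference block (assumed form:
    scale the motion-compensated position by the third transformation parameter)."""
    x, y = first_position
    mvx, mvy = target_motion_vector
    return ((x + mvx) * third_param, (y + mvy) * third_param)
```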
- step S1602, that is, determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the first resolution information, includes: determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the target motion vector unit resolution information; and step S1606, that is, obtaining the target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector, includes: obtaining a first motion vector according to the target motion vector and the third vector transformation parameter; and obtaining the target reference block corresponding to the current block to be decoded according to the first motion vector.
- the third vector transformation parameter is determined according to the resolution information corresponding to the video frame to be decoded and the target motion vector unit resolution information, and is used to transform the target motion vector into the first motion vector at the resolution corresponding to the frame to be decoded. After the third vector transformation parameter is obtained, the computer device may multiply the third vector transformation parameter by the target motion vector and use the obtained product as the first motion vector. It can be understood that the process of obtaining the first motion vector according to the third vector transformation parameter and the target motion vector is the inverse of the process of obtaining the target motion vector corresponding to the current coding block according to the first vector transformation parameter and the first motion vector.
- for example, if the first vector transformation parameter of the coding block corresponding to the block to be decoded is 2 and the obtained first motion vector is (2, 2), then the target motion vector is the product of the first vector transformation parameter and the first motion vector (2, 2), that is, (4, 4). In the decoding process, the third vector transformation parameter is 1/2 and the obtained target motion vector is (4, 4), so the first motion vector is (2, 2).
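The worked example above, sketched in code: the decoder multiplies the target motion vector read from the encoded data by the third vector transformation parameter (the inverse of the encoder's first parameter) to recover the first motion vector at the resolution of the frame to be decoded.

```python
def first_motion_vector(target_motion_vector, third_vector_transform_parameter):
    """Scale the target motion vector back to the decoded frame's resolution."""
    mvx, mvy = target_motion_vector
    return (mvx * third_vector_transform_parameter,
            mvy * third_vector_transform_parameter)
```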
- obtaining the target motion vector corresponding to each block to be decoded in the video frame to be decoded according to the encoded data includes: obtaining the motion vector difference corresponding to the current block to be decoded; obtaining the initial predicted motion vector corresponding to the current block to be decoded; obtaining a second vector transform coefficient according to the current motion vector unit resolution information corresponding to the initial predicted motion vector and the target motion vector unit resolution information; obtaining the target predicted motion vector corresponding to the current block to be decoded according to the initial predicted motion vector and the second vector transform coefficient; and obtaining the target motion vector according to the target predicted motion vector and the motion vector difference.
- the initial predicted motion vector corresponding to the current block to be decoded is consistent with the initial predicted motion vector corresponding to the current block to be encoded, and the method for obtaining the target predicted motion vector may refer to the method in the encoding process, and details are not described again.
- the target motion vector is the sum of the target prediction motion vector and the motion vector difference.
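The steps above can be sketched as follows: the initial predicted motion vector is scaled into the target motion vector unit resolution with the second vector transform coefficient, and the motion vector difference read from the encoded data is then added to give the target motion vector.

```python
def target_motion_vector(initial_predicted_mv, second_vector_coeff, mv_difference):
    """Target MV = (initial predicted MV scaled by coeff) + MV difference."""
    px, py = initial_predicted_mv
    dx, dy = mv_difference
    target_predicted_mv = (px * second_vector_coeff, py * second_vector_coeff)
    return (target_predicted_mv[0] + dx, target_predicted_mv[1] + dy)
```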
- the proportion of target prediction type decoding blocks in the forward decoded video frame corresponding to the video frame to be decoded can also be calculated; the processing mode corresponding to the video frame to be decoded is determined according to the proportion; and the resolution information corresponding to the video frame to be decoded is obtained according to the processing mode.
- the target prediction type decoding block corresponds to the target prediction type encoding block.
- the forward decoded video frame is a video frame that has been decoded before the video frame to be decoded.
- the forward decoded video frame corresponds to the forward encoded video frame. Therefore, the calculation method and result of the proportion of target prediction type encoding blocks obtained by the encoding end are consistent with those of the proportion of target prediction type decoding blocks obtained by the decoding end.
- for the method of obtaining the proportion of target prediction type decoding blocks, refer to the method for the proportion of target prediction type encoding blocks, and details are not described herein again.
- the proportion of intra decoded blocks in the forward decoded video frame may be calculated. When the proportion is greater than the target threshold, it is determined that the target processing mode corresponding to the video frame to be decoded is the downsampling processing mode; otherwise, it is determined that the target processing mode corresponding to the video frame to be decoded is the full-resolution processing mode.
- the target threshold may be determined according to a processing manner of a reference frame corresponding to a video frame to be decoded.
- when the processing mode of the reference frame corresponding to the video frame to be decoded is the downsampling mode, a first preset threshold T1 is acquired, and T1 is used as the target threshold.
- when the processing mode of the reference frame corresponding to the video frame to be decoded is the full-resolution processing mode, a second preset threshold T2 is acquired, and T2 is used as the target threshold.
- after the target threshold is obtained, the processing mode of the video frame to be decoded is determined according to the target threshold and the proportion of intra decoded blocks in the forward decoded video frame. When the proportion of intra decoded blocks in the forward decoded video frame is greater than the target threshold, it is determined that the processing mode corresponding to the video frame to be decoded is the downsampling processing mode.
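The threshold-based decision above can be sketched as follows. The target threshold depends on the processing mode of the reference frame (T1 for downsampling, T2 for full resolution); the numeric threshold values here are illustrative assumptions, not values given in the text.

```python
T1 = 0.5  # assumed first preset threshold (reference frame downsampled)
T2 = 0.7  # assumed second preset threshold (reference frame full resolution)

def processing_mode(intra_block_proportion, reference_frame_mode):
    """Choose the processing mode of the frame to be decoded from the intra
    decoded block proportion in the forward decoded video frame."""
    target_threshold = T1 if reference_frame_mode == "downsample" else T2
    if intra_block_proportion > target_threshold:
        return "downsample"
    return "full_resolution"
```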
- the following uses the decoding of the encoded data corresponding to the video sequence A as an example to describe the video decoding method. It is assumed that the names of the video frames to be decoded corresponding to the input video frames a, b, and c at the decoding end are e, f, and g, respectively.
- the receiving terminal (that is, the decoding end) obtains the encoded data corresponding to the video sequence A, and obtains the target video sequence encoding mode from the sequence header information corresponding to the encoded data.
- the encoding mode is the mixed resolution encoding mode. Therefore, the mixed resolution decoding framework is used to decode the encoded data.
- the resolution information acquisition unit of the hybrid resolution decoding frame acquires resolution information corresponding to the first video frame e to be decoded. It can be understood that the encoded data corresponding to e is data obtained by encoding a1. The intra decoding is performed on e to obtain a reconstructed video frame e1. Since the resolution information corresponding to e is 1/2, the reconstructed video frame e1 may be subjected to an upsampling process with a sampling ratio of 2 to obtain a decoded video frame e2.
- the decoding process is as follows: since f is an inter-predicted frame, the reconstructed video frame e1 needs to be used as the current reference frame. It can be understood that e1 and a2 are the same, and e1 is subjected to the same sampling processing as a2 to obtain e3. Here e3 is the same as a3 and is the target reference frame.
- the motion vector difference corresponding to the current block to be decoded is obtained from the encoded data as MVD1. Since MVD1 is at the target resolution, that is, the original resolution, MVD1 needs to be converted to the resolution corresponding to f, so the converted motion vector difference can be obtained as MVD1/4. The initial predicted vector is obtained as MV2, and the first motion vector can be obtained as MV1, which is equal to MVD1/4 + MV2. The target reference block is obtained according to MV1, the predicted value corresponding to the block to be decoded is obtained according to the target reference block, and the prediction residual p1 is added to the predicted value to obtain the reconstruction block corresponding to the reconstructed video frame f1.
- the resolution information acquisition unit of the hybrid resolution decoding framework obtains the encoded data corresponding to the third video frame g to be decoded. It can be understood that the encoded data corresponding to g is data obtained by encoding c1. Inter-frame decoding is performed on g to obtain a reconstructed video frame g1. Since the resolution information corresponding to g is 1/8, the reconstructed video frame g1 may be subjected to an upsampling process with a sampling ratio of 8 to obtain a decoded video frame g2.
- the decoding process is as follows: since g is an inter-predicted frame, the reconstructed video frame f1 needs to be used as the current reference frame. It can be understood that f1 and b2 are the same, and f1 is subjected to the same sampling processing as b2 to obtain f3. Here f3 is the same as b3 and is the target reference frame.
- the motion vector difference corresponding to the current block to be decoded is obtained from the encoded data as MVD2. Since MVD2 is at the target resolution, that is, the original resolution, MVD2 needs to be converted to the resolution corresponding to g, so the converted motion vector difference can be obtained as MVD2/8. The initial predicted vector is MV4, and the first motion vector is MV3, which is equal to MVD2/8 + MV4/2.
- the receiving terminal plays e2, f2, and g2.
- a video encoding device is provided.
- the video encoding device may be integrated into the above-mentioned computer equipment such as the terminal 110 or the server 120, and may specifically include an input video sequence acquisition module 1702, an encoding mode acquisition module 1704, and an encoding module 1706.
- An input video sequence acquisition module 1702 configured to acquire an input video sequence
- a coding mode acquisition module 1704 is configured to obtain a target video sequence coding mode corresponding to an input video sequence from the candidate video sequence coding modes, where the candidate video sequence coding modes include a constant resolution coding mode and a mixed resolution coding mode;
- the encoding module 1706 is configured to encode each input video frame of the input video sequence according to a target video sequence encoding mode to obtain encoded data.
- the encoding module 1706 is configured to perform constant-resolution encoding on each input video frame of the input video sequence when the target video sequence encoding mode is a constant-resolution encoding mode.
- the encoding module 1706 includes:
- a processing mode obtaining unit configured to obtain a processing mode corresponding to an input video frame when a target video sequence coding mode is a mixed resolution coding mode
- a processing unit configured to process an input video frame according to a processing mode to obtain a frame to be encoded, and the resolution of the frame to be encoded corresponding to the processing mode is the resolution of the input video frame or smaller than the resolution of the input video frame;
- the encoding unit is configured to encode the frame to be encoded at the resolution of the frame to be encoded to obtain encoded data corresponding to the input video frame.
- the encoding mode acquisition module 1704 is configured to: obtain current environment information, where the current environment information includes at least one of current encoding environment information and current decoding environment information; and obtain, from the candidate video sequence encoding modes according to the current environment information, the target video sequence encoding mode corresponding to the input video sequence.
- the current environment information includes current application scenario information; when the current application scenario corresponding to the current application scenario information is a real-time application scenario, the target video sequence encoding mode is a mixed resolution encoding mode.
- the encoding module 1706 is configured to add target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data.
- a video decoding device is provided.
- the video decoding device may be integrated into the above-mentioned computer equipment such as the server 120 or the terminal 110, and may specifically include an encoded data acquisition module 1802, a decoding mode acquisition module 1804, and a decoding module 1806.
- An encoded data acquisition module 1802 configured to acquire encoded data corresponding to a video sequence to be decoded
- a decoding mode obtaining module 1804 configured to obtain a target video sequence decoding mode corresponding to the video sequence to be decoded, where the target video sequence decoding mode includes a constant resolution decoding mode or a mixed resolution decoding mode;
- a decoding module 1806 is configured to decode the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
- the decoding module 1806 is configured to: when the target video sequence decoding mode is a constant resolution decoding mode, perform constant resolution decoding on each to-be-decoded video frame of the to-be-decoded video sequence.
- the decoding module 1806 includes:
- a resolution information obtaining unit configured to obtain resolution information corresponding to a video frame to be decoded when a target video sequence decoding mode is a mixed resolution decoding mode
- a decoding unit configured to decode the encoded data according to the resolution information corresponding to the video frame to be decoded to obtain a reconstructed video frame corresponding to the video frame to be decoded;
- the processing unit is configured to process the reconstructed video frame according to the resolution information corresponding to the video frame to be decoded to obtain a corresponding decoded video frame.
- the decoding mode acquisition module 1804 is configured to: obtain current environment information, the current environment information including at least one of current encoding environment information and current decoding environment information; and obtain, from the candidate video sequence decoding modes according to the current environment information, the target video sequence decoding mode corresponding to the video sequence to be decoded.
- the decoding mode acquisition module 1804 is configured to parse the encoded data corresponding to the video sequence to be decoded to obtain the target video sequence decoding mode.
- FIG. 19 shows an internal structure diagram of a computer device in one embodiment.
- the computer device may specifically be the terminal 110 in FIG. 1.
- the computer device includes a processor 1901, a memory 1902, a network interface 1903, an input device 1904, and a display screen 1905 connected through a system bus.
- the memory 1902 includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium of the computer device stores an operating system and a computer program.
- when the computer program is executed by the processor 1901, the processor 1901 can implement at least one of the video encoding method and the video decoding method of any embodiment of the present application.
- a computer program may also be stored in the internal memory.
- the processor 1901 may execute at least one of a video encoding method and a video decoding method in any embodiment of the present application.
- the display screen 1905 of a computer device may be a liquid crystal display or an electronic ink display screen.
- the input device 1904 of the computer device may be a touch layer covering the display screen, or may be a button, a trackball, or a touchpad provided on the computer device casing, or an external keyboard, trackpad, or mouse.
- FIG. 20 shows an internal structure diagram of a computer device in one embodiment.
- the computer device may specifically be the server 120 in FIG. 1.
- the computer device includes a processor 2001, a memory 2002, and a network interface 2003 connected through a system bus.
- the memory 2002 includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium of the computer device stores an operating system and a computer program.
- when the computer program is executed by the processor 2001, the processor 2001 can implement at least one of the video encoding method and the video decoding method of any embodiment of the present application.
- a computer program may also be stored in the internal memory.
- when executed, the computer program may cause the processor 2001 to execute at least one of the video encoding method and the video decoding method in any embodiment of the present application.
- FIG. 19 and FIG. 20 are only block diagrams of part of the structure related to the solution of the present application, and do not constitute a limitation on the computer equipment to which the solution of the present application is applied. The specific computer device may include more or fewer components than shown in the figures, or some components may be combined, or have a different component arrangement.
- the video encoding device provided in the present application can be implemented in the form of a computer program, which can be run on a computer device as shown in FIGS. 19 and 20.
- the memory of the computer device may store various program modules constituting the video encoding device, for example, the input video sequence acquisition module 1702, the encoding mode acquisition module 1704, and the encoding module 1706 shown in FIG. 17.
- the computer program constituted by each program module causes the processor to execute the steps in the video encoding method of each embodiment of the application described in this specification.
- the computer device shown in FIGS. 19 and 20 may obtain the input video sequence through the input video sequence acquisition module 1702 shown in FIG. 17.
- the encoding mode acquisition module 1704 obtains the target video sequence encoding mode corresponding to the input video sequence from the candidate video sequence encoding modes, where the candidate video sequence encoding mode includes a constant resolution encoding mode and a mixed resolution encoding mode.
- the encoding module 1706 encodes each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
- the video decoding apparatus provided in this application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in FIGS. 19 and 20.
- the memory of the computer device may store various program modules constituting the video decoding device, such as the encoded data acquisition module 1802, the decoding mode acquisition module 1804, and the decoding module 1806 shown in FIG. 18.
- the computer program constituted by each program module causes the processor to execute the steps in the video decoding method of each embodiment of the present application described in this specification.
- the computer device shown in FIGS. 19 and 20 may obtain the encoded data corresponding to the video sequence to be decoded through the encoded data acquisition module 1802 shown in FIG. 18.
- a decoding mode acquisition module 1804 is used to obtain a target video sequence decoding mode corresponding to the video sequence to be decoded.
- the target video sequence decoding mode includes a constant resolution decoding mode or a mixed resolution decoding mode.
- the decoding module 1806 decodes the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
- a computer device is proposed, and the computer device may be as shown in FIG. 19 or FIG. 20.
- the computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor.
- when the processor executes the computer program, the following steps are implemented: obtaining an input video sequence; obtaining, from candidate video sequence encoding modes, a target video sequence encoding mode corresponding to the input video sequence, where the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode; and encoding each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
- encoding each input video frame of the input video sequence according to the target video sequence encoding mode performed by the processor to obtain encoded data includes: when the target video sequence encoding mode is the constant resolution encoding mode, encoding each input video frame of the input video sequence at a constant resolution.
- encoding each input video frame of the input video sequence according to the target video sequence encoding mode performed by the processor to obtain the encoded data includes: when the target video sequence encoding mode is the mixed resolution encoding mode, obtaining the processing mode corresponding to the input video frame; processing the input video frame according to the processing mode to obtain the frame to be encoded, where the resolution of the frame to be encoded corresponding to the processing mode is the resolution of the input video frame or smaller than the resolution of the input video frame; and encoding the frame to be encoded at the resolution of the frame to be encoded to obtain encoded data corresponding to the input video frame.
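The mixed-resolution branch described above can be sketched in Python. The per-frame selection rule, the 1/2 sampling ratio, and the dictionary container for encoded data are illustrative assumptions; the embodiment leaves the processing mode decision and the actual compression step open.

```python
def choose_processing_mode(frame_index):
    # Toy rule (assumption): keep every 4th frame at full resolution,
    # downsample the rest.
    return "full" if frame_index % 4 == 0 else "downsample"

def downsample(frame, ratio=2):
    # Naive 1/ratio downsampling by dropping rows and columns.
    return [row[::ratio] for row in frame[::ratio]]

def encode_mixed_resolution(input_video_sequence):
    encoded = []
    for i, frame in enumerate(input_video_sequence):
        mode = choose_processing_mode(i)
        frame_to_encode = frame if mode == "full" else downsample(frame)
        # A real encoder would compress frame_to_encode here; this sketch
        # only records the frame data with its processing-mode information.
        encoded.append({"mode": mode, "data": frame_to_encode})
    return encoded
```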
- obtaining the target video sequence encoding mode corresponding to the input video sequence from the candidate video sequence encoding modes performed by the processor includes: acquiring current environment information, where the current environment information includes at least one of current encoding environment information and current decoding environment information; and obtaining, from the candidate video sequence encoding modes according to the current environment information, the target video sequence encoding mode corresponding to the input video sequence.
- the current environment information includes current application scenario information; when the current application scenario corresponding to the current application scenario information is a real-time application scenario, the target video sequence encoding mode is a mixed resolution encoding mode.
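The scenario rule above reduces to a small selection function; the non-real-time fallback to constant-resolution coding and the dictionary form of the environment information are assumptions for illustration.

```python
def select_sequence_coding_mode(current_environment):
    # Real-time scenarios (e.g. video calls) select mixed-resolution coding,
    # as stated above; other scenarios fall back to constant resolution
    # (an illustrative assumption, not fixed by the disclosure).
    if current_environment.get("application_scenario") == "real-time":
        return "mixed-resolution"
    return "constant-resolution"
```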
- the encoding of each input video frame of the input video sequence by the processor according to the target video sequence encoding mode to obtain the encoded data includes: adding target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data.
- the encoding of the frame to be encoded by the processor at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame includes: adding processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame.
- obtaining the processing mode corresponding to the input video frame by the processor includes: obtaining a processing parameter corresponding to the input video frame, and determining the processing mode corresponding to the input video frame according to the processing parameter; and adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame includes: when the processing parameter cannot be reproduced during the decoding process, adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame.
- the encoding of the frame to be encoded by the processor at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame includes: obtaining a current reference frame corresponding to the frame to be encoded; and encoding the frame to be encoded according to the current reference frame at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame.
- encoding the frame to be encoded according to the current reference frame by the processor to obtain the encoded data corresponding to the input video frame includes: determining a first vector transformation parameter according to the resolution information of the frame to be encoded and first resolution information, where the first resolution information includes resolution information of the current reference frame or target motion vector unit resolution information corresponding to the input video frame; and obtaining a target motion vector corresponding to each coding block in the frame to be encoded according to the first vector transformation parameter.
- encoding the frame to be encoded according to the current reference frame by the processor to obtain the encoded data corresponding to the input video frame includes: performing sampling processing on the current reference frame according to the resolution information of the frame to be encoded to obtain a corresponding target reference frame; and encoding the frame to be encoded according to the target reference frame to obtain the encoded data corresponding to the input video frame.
- the encoding performed by the processor to obtain the encoded data corresponding to the input video frame at the resolution of the frame to be encoded includes: obtaining the encoding mode used when encoding the frame to be encoded at the resolution of the frame to be encoded; and adding encoding mode information corresponding to the encoding mode to the encoded data corresponding to the input video frame.
- determining, by the processor, the first vector transformation parameter according to the resolution information of the frame to be encoded and the first resolution information includes: determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the resolution information of the current reference frame; and obtaining the target motion vector corresponding to each coding block in the frame to be encoded according to the first vector transformation parameter includes: acquiring first position information corresponding to the current coding block, and acquiring second position information corresponding to the target reference block that corresponds to the current coding block; and calculating the target motion vector corresponding to the current coding block according to the first vector transformation parameter, the first position information, and the second position information.
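One plausible reading of the position-based computation above, in Python: the first vector transformation parameter is taken as the ratio between the two resolutions (widths used as a one-dimensional stand-in), and the target motion vector is the scaled displacement between the two block positions. The concrete formulas are assumptions; the embodiment only names the inputs.

```python
def first_vector_transform_parameter(frame_width, reference_width):
    # Ratio between the resolution of the frame to be encoded and the
    # resolution of the current reference frame (assumed form).
    return frame_width / reference_width

def target_motion_vector(transform, first_position, second_position):
    # Displacement between the current coding block position and the target
    # reference block position, scaled into a common coordinate system.
    return tuple(transform * (b - a)
                 for a, b in zip(first_position, second_position))
```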
- determining, by the processor, the first vector transformation parameter according to the resolution information of the frame to be encoded and the first resolution information includes: acquiring the target motion vector unit resolution information; and determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the target motion vector unit resolution information.
- obtaining the target motion vector corresponding to each coding block in the frame to be encoded according to the first vector transformation parameter includes: obtaining a first motion vector according to the displacement between the current coding block and the corresponding target reference block; and obtaining the target motion vector corresponding to the current coding block according to the first vector transformation parameter and the first motion vector.
- encoding the frame to be encoded according to the current reference frame by the processor to obtain the encoded data corresponding to the input video frame includes: obtaining an initial predicted motion vector corresponding to the current coding block; obtaining a second vector transformation coefficient according to the current motion vector unit resolution information corresponding to the initial predicted motion vector and the target motion vector unit resolution information; obtaining a target predicted motion vector corresponding to the current coding block according to the initial predicted motion vector and the second vector transformation coefficient; and obtaining a motion vector difference according to the target motion vector and the target predicted motion vector.
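The prediction-difference step above can be sketched as follows; the ratio form of the second vector transformation coefficient is an assumption consistent with the unit-resolution inputs named in the text, not a formula given by the disclosure.

```python
def motion_vector_difference(target_mv, initial_predicted_mv,
                             current_unit_resolution, target_unit_resolution):
    # Second vector transformation coefficient: rescales the predicted
    # motion vector from its current unit resolution into the target unit
    # resolution (assumed ratio form).
    second_coefficient = target_unit_resolution / current_unit_resolution
    target_predicted_mv = tuple(c * second_coefficient
                                for c in initial_predicted_mv)
    # The difference, not the target motion vector itself, is what would
    # be written into the encoded data.
    return tuple(t - p for t, p in zip(target_mv, target_predicted_mv))
```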
- the sampling processing performed by the processor on the current reference frame according to the resolution information of the frame to be encoded to obtain the corresponding target reference frame includes: sampling the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame.
- the sampling processing performed by the processor on the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame includes: calculating the pixel interpolation accuracy according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy; and directly performing sub-pixel interpolation processing on the current reference frame according to the pixel interpolation accuracy to obtain the corresponding target reference frame.
- the sampling processing performed by the processor on the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame includes: performing sampling processing on the current reference frame according to the resolution information of the frame to be encoded to obtain an intermediate reference frame; and performing sub-pixel interpolation on the intermediate reference frame according to the motion estimation pixel accuracy to obtain the target reference frame.
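For the direct-interpolation variant, the pixel interpolation accuracy can plausibly be derived by combining the resolution ratio with the motion estimation pixel accuracy; the multiplication below is an illustrative assumption.

```python
from fractions import Fraction

def pixel_interpolation_precision(frame_width, reference_width,
                                  motion_estimation_precision):
    # Assumed combination rule: when the reference frame is twice as wide
    # as the frame to be encoded, quarter-pixel motion estimation maps to
    # eighth-pixel interpolation of the reference frame.
    resolution_ratio = Fraction(frame_width, reference_width)
    return motion_estimation_precision * resolution_ratio
```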
- the encoding performed by the processor to obtain the encoded data corresponding to the input video frame at the resolution of the frame to be encoded includes: adding the sampling mode information corresponding to the sampling processing of the current reference frame to the encoded data corresponding to the current reference frame.
- obtaining the current reference frame corresponding to the frame to be encoded by the processor includes: obtaining a first reference rule, where the first reference rule includes a resolution relationship between the frame to be encoded and the current reference frame; and acquiring the current reference frame corresponding to the frame to be encoded according to the first reference rule.
- the encoding of the frame to be encoded by the processor at the resolution of the frame to be encoded includes: adding rule information corresponding to the first reference rule to the encoded data corresponding to the input video frame.
- obtaining the processing mode corresponding to the input video frame by the processor includes: calculating a proportion of target prediction type coding blocks in the forward encoded video frame corresponding to the input video frame; and determining the processing mode corresponding to the input video frame according to the proportion.
- the processing mode includes downsampling
- processing the input video frame by the processor according to the processing mode to obtain the frame to be encoded includes: downsampling the input video frame to obtain the frame to be encoded.
- the encoding performed by the processor to obtain the encoded data corresponding to the input video frame at the resolution of the frame to be encoded includes: adding downsampling processing information corresponding to the downsampling processing to the encoded data corresponding to the input video frame.
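A minimal sketch of the downsampling branch, with the downsampling information carried alongside the data. The 2x2 averaging kernel and the dictionary layout are assumptions; the embodiment does not fix the downsampling method.

```python
def downsample_2x2(frame):
    # Average each 2x2 block of the input frame (assumed kernel; the
    # disclosure leaves the downsampling method open).
    height, width = len(frame), len(frame[0])
    return [[(frame[y][x] + frame[y][x + 1] +
              frame[y + 1][x] + frame[y + 1][x + 1]) / 4
             for x in range(0, width, 2)]
            for y in range(0, height, 2)]

def encode_with_downsampling_info(frame):
    small = downsample_2x2(frame)
    # The downsampling information travels with the encoded data so that
    # the decoder can restore the display resolution.
    return {"downsampling": {"ratio": 2, "method": "average"},
            "data": small}
```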
- a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor is caused to perform the following steps: obtaining an input video sequence; obtaining, from candidate video sequence coding modes, a target video sequence coding mode corresponding to the input video sequence, where the candidate video sequence coding modes include a constant resolution coding mode and a mixed resolution coding mode; and encoding each input video frame of the input video sequence according to the target video sequence coding mode to obtain encoded data.
- encoding each input video frame of the input video sequence according to the target video sequence encoding mode performed by the processor to obtain encoded data includes: when the target video sequence encoding mode is the constant resolution encoding mode, encoding each input video frame of the input video sequence at a constant resolution.
- encoding each input video frame of the input video sequence according to the target video sequence encoding mode performed by the processor to obtain the encoded data includes: when the target video sequence encoding mode is the mixed resolution encoding mode, obtaining the processing mode corresponding to the input video frame; processing the input video frame according to the processing mode to obtain the frame to be encoded, where the resolution of the frame to be encoded corresponding to the processing mode is the resolution of the input video frame or smaller than the resolution of the input video frame; and encoding the frame to be encoded at the resolution of the frame to be encoded to obtain encoded data corresponding to the input video frame.
- obtaining the target video sequence encoding mode corresponding to the input video sequence from the candidate video sequence encoding modes performed by the processor includes: acquiring current environment information, where the current environment information includes at least one of current encoding environment information and current decoding environment information; and obtaining, from the candidate video sequence encoding modes according to the current environment information, the target video sequence encoding mode corresponding to the input video sequence.
- the current environment information includes current application scenario information; when the current application scenario corresponding to the current application scenario information is a real-time application scenario, the target video sequence encoding mode is a mixed resolution encoding mode.
- the encoding of each input video frame of the input video sequence by the processor according to the target video sequence encoding mode to obtain the encoded data includes: adding target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data.
- the encoding of the frame to be encoded by the processor at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame includes: adding processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame.
- obtaining the processing mode corresponding to the input video frame by the processor includes: obtaining a processing parameter corresponding to the input video frame, and determining the processing mode corresponding to the input video frame according to the processing parameter; and adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame includes: when the processing parameter cannot be reproduced during the decoding process, adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame.
- the encoding of the frame to be encoded by the processor at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame includes: obtaining a current reference frame corresponding to the frame to be encoded; and encoding the frame to be encoded according to the current reference frame at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame.
- encoding the frame to be encoded according to the current reference frame by the processor to obtain the encoded data corresponding to the input video frame includes: determining a first vector transformation parameter according to the resolution information of the frame to be encoded and first resolution information, where the first resolution information includes resolution information of the current reference frame or target motion vector unit resolution information corresponding to the input video frame; and obtaining a target motion vector corresponding to each coding block in the frame to be encoded according to the first vector transformation parameter.
- encoding the frame to be encoded according to the current reference frame by the processor to obtain the encoded data corresponding to the input video frame includes: performing sampling processing on the current reference frame according to the resolution information of the frame to be encoded to obtain a corresponding target reference frame; and encoding the frame to be encoded according to the target reference frame to obtain the encoded data corresponding to the input video frame.
- the encoding performed by the processor to obtain the encoded data corresponding to the input video frame at the resolution of the frame to be encoded includes: obtaining the encoding mode used when encoding the frame to be encoded at the resolution of the frame to be encoded; and adding encoding mode information corresponding to the encoding mode to the encoded data corresponding to the input video frame.
- determining, by the processor, the first vector transformation parameter according to the resolution information of the frame to be encoded and the first resolution information includes: determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the resolution information of the current reference frame; and obtaining the target motion vector corresponding to each coding block in the frame to be encoded according to the first vector transformation parameter includes: acquiring first position information corresponding to the current coding block, and acquiring second position information corresponding to the target reference block that corresponds to the current coding block; and calculating the target motion vector corresponding to the current coding block according to the first vector transformation parameter, the first position information, and the second position information.
- determining, by the processor, the first vector transformation parameter according to the resolution information of the frame to be encoded and the first resolution information includes: acquiring the target motion vector unit resolution information; and determining the first vector transformation parameter according to the resolution information of the frame to be encoded and the target motion vector unit resolution information.
- obtaining the target motion vector corresponding to each coding block in the frame to be encoded according to the first vector transformation parameter includes: obtaining a first motion vector according to the displacement between the current coding block and the corresponding target reference block; and obtaining the target motion vector corresponding to the current coding block according to the first vector transformation parameter and the first motion vector.
- encoding the frame to be encoded according to the current reference frame by the processor to obtain the encoded data corresponding to the input video frame includes: obtaining an initial predicted motion vector corresponding to the current coding block; obtaining a second vector transformation coefficient according to the current motion vector unit resolution information corresponding to the initial predicted motion vector and the target motion vector unit resolution information; obtaining a target predicted motion vector corresponding to the current coding block according to the initial predicted motion vector and the second vector transformation coefficient; and obtaining a motion vector difference according to the target motion vector and the target predicted motion vector.
- the sampling processing performed by the processor on the current reference frame according to the resolution information of the frame to be encoded to obtain the corresponding target reference frame includes: sampling the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame.
- the sampling processing performed by the processor on the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame includes: calculating the pixel interpolation accuracy according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy; and directly performing sub-pixel interpolation processing on the current reference frame according to the pixel interpolation accuracy to obtain the corresponding target reference frame.
- the sampling processing performed by the processor on the current reference frame according to the resolution information of the frame to be encoded and the motion estimation pixel accuracy to obtain the corresponding target reference frame includes: performing sampling processing on the current reference frame according to the resolution information of the frame to be encoded to obtain an intermediate reference frame; and performing sub-pixel interpolation on the intermediate reference frame according to the motion estimation pixel accuracy to obtain the target reference frame.
- the encoding performed by the processor to obtain the encoded data corresponding to the input video frame at the resolution of the frame to be encoded includes: adding the sampling mode information corresponding to the sampling processing of the current reference frame to the encoded data corresponding to the current reference frame.
- obtaining the current reference frame corresponding to the frame to be encoded by the processor includes: obtaining a first reference rule, where the first reference rule includes a resolution relationship between the frame to be encoded and the current reference frame; and acquiring the current reference frame corresponding to the frame to be encoded according to the first reference rule.
- the encoding of the frame to be encoded by the processor at the resolution of the frame to be encoded includes: adding rule information corresponding to the first reference rule to the encoded data corresponding to the input video frame.
- obtaining the processing mode corresponding to the input video frame by the processor includes: calculating a proportion of target prediction type coding blocks in the forward encoded video frame corresponding to the input video frame; and determining the processing mode corresponding to the input video frame according to the proportion.
- the processing mode includes downsampling
- processing the input video frame by the processor according to the processing mode to obtain the frame to be encoded includes: downsampling the input video frame to obtain the frame to be encoded.
- the encoding performed by the processor to obtain the encoded data corresponding to the input video frame at the resolution of the frame to be encoded includes: adding downsampling processing information corresponding to the downsampling processing to the encoded data corresponding to the input video frame.
- a computer device is proposed, and the computer device may be as shown in FIG. 19 or FIG. 20.
- the computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor.
- when the processor executes the computer program, the following steps are implemented: obtaining the encoded data corresponding to the video sequence to be decoded; obtaining the target video sequence decoding mode corresponding to the video sequence to be decoded, where the target video sequence decoding mode includes a constant resolution decoding mode or a mixed resolution decoding mode; and decoding the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence.
- the decoding of the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode performed by the processor to obtain the corresponding decoded video frame sequence includes: when the target video sequence decoding mode is the constant resolution decoding mode, decoding each to-be-decoded video frame of the to-be-decoded video sequence at a constant resolution.
- the decoding of the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode performed by the processor to obtain the corresponding decoded video frame sequence includes: when the target video sequence decoding mode is the mixed resolution decoding mode, obtaining the resolution information corresponding to the video frame to be decoded; decoding the encoded data according to the resolution information corresponding to the video frame to be decoded to obtain a reconstructed video frame corresponding to the video frame to be decoded; and processing the reconstructed video frame according to the resolution information corresponding to the video frame to be decoded to obtain the corresponding decoded video frame.
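The mixed-resolution decoding branch mirrors the encoder side: decode, then process the reconstructed frame according to its resolution information. Nearest-neighbour upsampling and the dictionary layout of the encoded data are illustrative assumptions, and the entropy-decoding step is elided.

```python
def upsample_nearest(frame, ratio=2):
    # Nearest-neighbour upsampling back to display resolution
    # (assumed method; the disclosure leaves it open).
    return [sum(([value] * ratio for value in row), [])
            for row in frame for _ in range(ratio)]

def decode_mixed_resolution(encoded_sequence):
    decoded_frames = []
    for item in encoded_sequence:
        reconstructed = item["data"]  # stands in for real decoding
        if item["mode"] == "downsample":
            # Process the reconstructed frame according to its resolution
            # information to obtain the decoded video frame.
            reconstructed = upsample_nearest(reconstructed)
        decoded_frames.append(reconstructed)
    return decoded_frames
```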
- obtaining the target video sequence decoding mode corresponding to the video sequence to be decoded by the processor includes: acquiring current environment information, where the current environment information includes at least one of current encoding environment information and current decoding environment information; and obtaining, from candidate video sequence decoding modes according to the current environment information, the target video sequence decoding mode corresponding to the video sequence to be decoded.
- obtaining the target video sequence decoding mode corresponding to the video sequence to be decoded by the processor includes: parsing the encoded data corresponding to the video sequence to be decoded to obtain the target video sequence decoding mode.
- obtaining the resolution information corresponding to the video frame to be decoded by the processor includes: reading the processing mode information from the encoded data, and obtaining the resolution information corresponding to the video frame to be decoded according to the processing mode information.
- obtaining the resolution information corresponding to the video frame to be decoded by the processor includes: calculating a proportion of target prediction type decoded blocks in the forward decoded video frame corresponding to the video frame to be decoded; determining the processing mode corresponding to the video frame to be decoded according to the proportion; and obtaining the resolution information corresponding to the video frame to be decoded according to the processing mode.
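The ratio test above can be sketched as follows. Using intra-coded blocks as the target prediction type and 0.5 as the threshold are assumptions; the embodiment only states that a proportion is computed and a processing mode derived from it.

```python
def processing_mode_from_block_ratio(forward_frame_block_types,
                                     threshold=0.5):
    # Proportion of the target prediction type (assumed here to be intra
    # blocks) in the forward decoded video frame.
    intra_count = sum(1 for t in forward_frame_block_types if t == "intra")
    proportion = intra_count / len(forward_frame_block_types)
    # Assumed rule: a high intra proportion implies the frame to be
    # decoded was downsampled before encoding.
    return "downsample" if proportion > threshold else "full"
```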
- decoding the encoded data according to the resolution information corresponding to the video frame to be decoded by the processor to obtain the reconstructed video frame corresponding to the video frame to be decoded includes: obtaining a current reference frame corresponding to the video frame to be decoded; and decoding the encoded data according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded.
- the decoding performed by the processor according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded includes: determining a third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and first resolution information, where the first resolution information includes target motion vector unit resolution information or resolution information of the current reference frame; acquiring, according to the encoded data, the target motion vector corresponding to each block to be decoded in the video frame to be decoded; obtaining the target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector; and decoding the encoded data according to the target reference block to obtain the reconstructed video frame corresponding to the video frame to be decoded.
- determining, by the processor, the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the resolution information of the current reference frame; and obtaining the target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector includes: obtaining first position information corresponding to the current block to be decoded; and obtaining the target reference block corresponding to the current block to be decoded according to the first position information, the third vector transformation parameter, and the target motion vector.
- determining, by the processor, the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the target motion vector unit resolution information; and obtaining the target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector includes: obtaining a first motion vector according to the target motion vector and the third vector transformation parameter, and obtaining the target reference block corresponding to the current block to be decoded according to the first motion vector.
- obtaining, by the processor, the target motion vector corresponding to each block to be decoded in the video frame to be decoded according to the encoded data includes: obtaining a motion vector difference corresponding to the current block to be decoded in the video frame to be decoded according to the encoded data; obtaining an initial predicted motion vector corresponding to the current block to be decoded; obtaining a second vector transformation coefficient according to current motion vector unit resolution information corresponding to the initial predicted motion vector and target motion vector unit resolution information; obtaining a target predicted motion vector corresponding to the current block to be decoded according to the initial predicted motion vector and the second vector transformation coefficient; and obtaining the target motion vector according to the target predicted motion vector and the motion vector difference.
- the decoding performed by the processor according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded includes: performing sampling processing on the current reference frame according to the resolution information corresponding to the video frame to be decoded to obtain a corresponding target reference frame; and decoding the encoded data according to the target reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded.
- the processing performed by the processor on the current reference frame according to the resolution information corresponding to the video frame to be decoded to obtain the corresponding target reference frame includes: processing the current reference frame according to the resolution information of the video frame to be decoded and motion estimation pixel precision to obtain the corresponding target reference frame.
- the processing performed by the processor on the current reference frame according to the resolution information of the video frame to be decoded and the motion estimation pixel precision to obtain the corresponding target reference frame includes: calculating pixel interpolation precision according to the resolution information of the video frame to be decoded and the motion estimation pixel precision; and performing sub-pixel interpolation processing directly on the current reference frame according to the pixel interpolation precision to obtain the corresponding target reference frame.
- the processing performed by the processor on the current reference frame according to the resolution information of the video frame to be decoded and the motion estimation pixel precision to obtain the corresponding target reference frame includes: performing sampling processing on the current reference frame according to the resolution information of the video frame to be decoded to obtain an intermediate reference frame; and performing sub-pixel interpolation processing on the intermediate reference frame according to the motion estimation pixel precision to obtain the target reference frame.
- acquiring, by the processor, the current reference frame corresponding to the video frame to be decoded includes: acquiring a second reference rule, where the second reference rule includes a resolution magnitude relationship between the video frame to be decoded and the reference frame; and obtaining the current reference frame corresponding to the video frame to be decoded according to the second reference rule.
- a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor causes the processor to perform the following steps:
- obtaining encoded data corresponding to a video sequence to be decoded; obtaining a target video sequence decoding mode corresponding to the video sequence to be decoded, the target video sequence decoding mode including a constant resolution decoding mode or a mixed resolution decoding mode; and decoding the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
- the decoding of the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode, performed by the processor to obtain the corresponding decoded video frame sequence, includes: when the target video sequence decoding mode is the constant resolution decoding mode, decoding each video frame to be decoded of the video sequence to be decoded at a constant resolution.
- the decoding of the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode, performed by the processor to obtain the corresponding decoded video frame sequence, includes: when the target video sequence decoding mode is the mixed resolution decoding mode, obtaining the resolution information corresponding to the video frame to be decoded; decoding the encoded data according to the resolution information corresponding to the video frame to be decoded to obtain a reconstructed video frame corresponding to the video frame to be decoded; and processing the reconstructed video frame according to the resolution information corresponding to the video frame to be decoded to obtain the corresponding decoded video frame.
- obtaining, by the processor, the target video sequence decoding mode corresponding to the video sequence to be decoded includes: acquiring current environment information, where the current environment information includes at least one of current encoding environment information and current decoding environment information; and obtaining the target video sequence decoding mode corresponding to the video sequence to be decoded from the candidate video sequence decoding modes according to the current environment information.
- obtaining the target video sequence decoding mode corresponding to the video sequence to be decoded by the processor includes parsing the encoded video data corresponding to the video sequence to be decoded to obtain the target video sequence decoding mode.
- obtaining the resolution information corresponding to the video frame to be decoded by the processor includes: reading the processing mode information from the encoded data, and obtaining the resolution information corresponding to the video frame to be decoded according to the processing mode information.
- obtaining the resolution information corresponding to the video frame to be decoded by the processor includes: calculating a proportion of the target prediction type decoding block in the forward decoded video frame corresponding to the video frame to be decoded; determined according to the proportion The processing mode corresponding to the video frame to be decoded; the resolution information corresponding to the video frame to be decoded is obtained according to the processing mode.
- decoding, by the processor, the encoded data according to the resolution information corresponding to the video frame to be decoded to obtain a reconstructed video frame corresponding to the video frame to be decoded includes: obtaining a current reference frame corresponding to the video frame to be decoded; and decoding the encoded data according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded.
- the decoding performed by the processor according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded includes: determining a third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and first resolution information, where the first resolution information includes target motion vector unit resolution information or resolution information of the current reference frame; obtaining, according to the encoded data, a target motion vector corresponding to each block to be decoded in the video frame to be decoded; obtaining a target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector; and decoding the encoded data according to the target reference block to obtain the reconstructed video frame corresponding to the video frame to be decoded.
- determining, by the processor, the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the resolution information of the current reference frame; and obtaining the target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector includes: obtaining first position information corresponding to the current block to be decoded, and obtaining the target reference block corresponding to the current block to be decoded according to the first position information, the third vector transformation parameter, and the target motion vector.
- determining, by the processor, the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the video frame to be decoded and the target motion vector unit resolution information; and obtaining the target reference block corresponding to each block to be decoded in the video frame to be decoded according to the third vector transformation parameter and the target motion vector includes: obtaining a first motion vector according to the target motion vector and the third vector transformation parameter, and obtaining the target reference block corresponding to the current block to be decoded according to the first motion vector.
- obtaining, by the processor, the target motion vector corresponding to each block to be decoded in the video frame to be decoded according to the encoded data includes: obtaining a motion vector difference corresponding to the current block to be decoded in the video frame to be decoded according to the encoded data; obtaining an initial predicted motion vector corresponding to the current block to be decoded; obtaining a second vector transformation coefficient according to current motion vector unit resolution information corresponding to the initial predicted motion vector and target motion vector unit resolution information; obtaining a target predicted motion vector corresponding to the current block to be decoded according to the initial predicted motion vector and the second vector transformation coefficient; and obtaining the target motion vector according to the target predicted motion vector and the motion vector difference.
- the decoding performed by the processor according to the resolution information corresponding to the video frame to be decoded and the current reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded includes: performing sampling processing on the current reference frame according to the resolution information corresponding to the video frame to be decoded to obtain a corresponding target reference frame; and decoding the encoded data according to the target reference frame to obtain the reconstructed video frame corresponding to the video frame to be decoded.
- the processing performed by the processor on the current reference frame according to the resolution information corresponding to the video frame to be decoded to obtain the corresponding target reference frame includes: processing the current reference frame according to the resolution information of the video frame to be decoded and motion estimation pixel precision to obtain the corresponding target reference frame.
- the processing performed by the processor on the current reference frame according to the resolution information of the video frame to be decoded and the motion estimation pixel precision to obtain the corresponding target reference frame includes: calculating pixel interpolation precision according to the resolution information of the video frame to be decoded and the motion estimation pixel precision; and performing sub-pixel interpolation processing directly on the current reference frame according to the pixel interpolation precision to obtain the corresponding target reference frame.
- the processing performed by the processor on the current reference frame according to the resolution information of the video frame to be decoded and the motion estimation pixel precision to obtain the corresponding target reference frame includes: performing sampling processing on the current reference frame according to the resolution information of the video frame to be decoded to obtain an intermediate reference frame; and performing sub-pixel interpolation processing on the intermediate reference frame according to the motion estimation pixel precision to obtain the target reference frame.
- acquiring, by the processor, the current reference frame corresponding to the video frame to be decoded includes: acquiring a second reference rule, where the second reference rule includes a resolution magnitude relationship between the video frame to be decoded and the reference frame; and obtaining the current reference frame corresponding to the video frame to be decoded according to the second reference rule.
- Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM) or external cache memory.
- RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
- SRAM static RAM
- DRAM dynamic RAM
- SDRAM synchronous DRAM
- DDR SDRAM double data rate SDRAM
- ESDRAM enhanced SDRAM
- SLDRAM Synchlink DRAM
- Rambus direct RAM
- DRDRAM direct Rambus dynamic RAM
- RDRAM Rambus dynamic RAM
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Embodiments of this application relate to a video encoding method, a video decoding method, apparatuses, a computer device, and a storage medium. The method includes: during video encoding, obtaining an input video sequence, and obtaining, from candidate video sequence encoding modes, a target video sequence encoding mode corresponding to the input video sequence, where the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode; and encoding each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
Description
This application claims priority to Chinese Patent Application No. 201810637511.2, entitled "Video encoding and decoding method, apparatus, computer device and storage medium", filed with the China Patent Office on June 20, 2018, which is incorporated herein by reference in its entirety.
This application relates to the field of computer technologies, and in particular, to a video encoding method, a video decoding method, apparatuses, a computer device, and a storage medium.
With the development of digital media technology and computer technology, video is applied in various fields, such as mobile communication, network surveillance, and network television. As hardware performance and screen resolution improve, users have an increasingly strong demand for high-definition video.
SUMMARY
Embodiments of this application provide a video encoding method, apparatus, computer device, and storage medium, which can flexibly select a target video sequence encoding mode for an input video sequence, encode the input video sequence according to the target video sequence encoding mode, and adaptively adjust the encoding mode of the input video sequence, thereby improving video encoding quality under limited-bandwidth conditions.
A video encoding method, performed by a computer device, the method including:
obtaining an input video sequence;
obtaining, from candidate video sequence encoding modes, a target video sequence encoding mode corresponding to the input video sequence, where the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode; and
encoding each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
A video encoding apparatus, the apparatus including:
an input video sequence obtaining module, configured to obtain an input video sequence;
an encoding mode obtaining module, configured to obtain, from candidate video sequence encoding modes, a target video sequence encoding mode corresponding to the input video sequence, where the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode; and
an encoding module, configured to encode each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
A computer device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the foregoing video encoding method.
A computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to perform the steps of the foregoing video encoding method.
A video decoding method, performed by a computer device, the method including:
obtaining encoded data corresponding to a video sequence to be decoded;
obtaining a target video sequence decoding mode corresponding to the video sequence to be decoded, the target video sequence decoding mode including a constant resolution decoding mode or a mixed resolution decoding mode; and
decoding the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
A video decoding apparatus, the apparatus including:
an encoded data obtaining module, configured to obtain encoded data corresponding to a video sequence to be decoded;
a decoding mode obtaining module, configured to obtain a target video sequence decoding mode corresponding to the video sequence to be decoded, the target video sequence decoding mode including a constant resolution decoding mode or a mixed resolution decoding mode; and
a decoding module, configured to decode the encoded data corresponding to the video sequence to be decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
A computer device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the foregoing video decoding method.
A computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to perform the steps of the foregoing video decoding method.
FIG. 1 is a diagram of an application environment of a video encoding method and a video decoding method provided in an embodiment;
FIG. 2 is an encoding framework diagram corresponding to a video encoding method in an embodiment;
FIG. 3 is a decoding framework diagram corresponding to a video decoding method in an embodiment;
FIG. 4 is a schematic diagram corresponding to an encoding block in an embodiment;
FIG. 5 is a flowchart of a video encoding method provided in an embodiment;
FIG. 6 is a schematic diagram of a video encoding framework provided in an embodiment;
FIG. 7A is a flowchart of encoding each input video frame of an input video sequence according to a target video sequence encoding mode to obtain encoded data, provided in an embodiment;
FIG. 7B is a schematic diagram of encoded data provided in an embodiment;
FIG. 8 is a flowchart of encoding a frame to be encoded at the resolution of the frame to be encoded to obtain encoded data corresponding to an input video frame, provided in an embodiment;
FIG. 9A is a flowchart of encoding a frame to be encoded according to a current reference frame to obtain encoded data corresponding to an input video frame, provided in an embodiment;
FIG. 9B is a schematic diagram of interpolating a current reference frame provided in an embodiment;
FIG. 9C is a schematic diagram of interpolating a current reference frame provided in an embodiment;
FIG. 10A is a flowchart of encoding a frame to be encoded according to a current reference frame to obtain encoded data corresponding to an input video frame, provided in an embodiment;
FIG. 10B is a schematic diagram of a current reference frame and a frame to be encoded provided in an embodiment;
FIG. 11 is a flowchart of encoding a frame to be encoded according to a current reference frame to obtain encoded data corresponding to an input video frame, provided in an embodiment;
FIG. 12 is a flowchart of a video decoding method provided in an embodiment;
FIG. 13 is a schematic diagram of a video decoding framework provided in an embodiment;
FIG. 14 is a flowchart of decoding encoded data corresponding to a video sequence to be decoded according to a target video sequence decoding mode to obtain a corresponding decoded video frame sequence, provided in an embodiment;
FIG. 15 is a flowchart of decoding encoded data according to resolution information corresponding to a video frame to be decoded to obtain a reconstructed video frame corresponding to the video frame to be decoded, provided in an embodiment;
FIG. 16 is a flowchart of decoding encoded data according to resolution information corresponding to a video frame to be decoded and a current reference frame to obtain a reconstructed video frame corresponding to the video frame to be decoded, provided in an embodiment;
FIG. 17 is a structural block diagram of a video encoding apparatus in an embodiment;
FIG. 18 is a structural block diagram of a video decoding apparatus in an embodiment;
FIG. 19 is a block diagram of the internal structure of a computer device in an embodiment;
FIG. 20 is a block diagram of the internal structure of a computer device in an embodiment.
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain this application, and are not intended to limit this application.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but unless otherwise specified, these elements are not limited by these terms. These terms are merely used to distinguish a first element from another element. For example, without departing from the scope of this application, a first vector transformation coefficient may be referred to as a second vector transformation coefficient, and similarly, a second vector transformation coefficient may be referred to as a first vector transformation coefficient.
Under limited-bandwidth conditions, a conventional encoder encodes video frames indiscriminately, which may lead to poor video quality in some scenarios. For example, at 750 kbps, when all video frames are encoded indiscriminately, some video frames have poor quality; encoders such as H.264, H.265, and iOS have similar problems.
In view of this, embodiments of this application provide a video encoding method, apparatus, computer device, and storage medium. During video encoding, an input video sequence is obtained, and a target video sequence encoding mode corresponding to the input video sequence is obtained from candidate video sequence encoding modes, where the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode. Each input video frame of the input video sequence is encoded according to the target video sequence encoding mode to obtain encoded data. Therefore, the target video sequence encoding mode for the input video sequence can be flexibly selected, the input video sequence can be encoded according to the target video sequence encoding mode, and the resolution of the input video sequence can be adaptively adjusted, improving video encoding quality under limited-bandwidth conditions.
Embodiments of this application further provide a video decoding method, apparatus, computer device, and storage medium. During video decoding, encoded data corresponding to a video sequence to be decoded is obtained, and a target video sequence decoding mode corresponding to the video sequence to be decoded is obtained, the target video sequence decoding mode including a constant resolution decoding mode or a mixed resolution decoding mode. The encoded data corresponding to the video sequence to be decoded is decoded according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence. Therefore, during decoding, decoding can be performed flexibly according to the target video sequence decoding mode corresponding to the video sequence to be decoded, and accurate decoded video frames can be obtained.
FIG. 1 is a diagram of an application environment of a video encoding method and a video decoding method provided in an embodiment. As shown in FIG. 1, the application environment includes a terminal 110 and a server 120.
The terminal 110 or the server 120 may perform video encoding through an encoder, or perform video decoding through a decoder. The terminal 110 or the server 120 may also perform video encoding by running a video encoding program on a processor, or perform video decoding by running a video decoding program on a processor. After receiving, through an input interface, encoded data sent by the terminal 110, the server 120 may directly pass the encoded data to the processor for decoding, or store it in a database for subsequent decoding. After obtaining encoded data by encoding an original video frame through the processor, the server 120 may directly send the encoded data to the terminal 110 through an output interface, or store the encoded data in a database for subsequent transmission. Certainly, the server 120 may also, after obtaining encoded data sent by the terminal 110, send it to a corresponding receiving terminal, where the receiving terminal performs decoding.
The terminal 110 and the server 120 may be connected through a network. The terminal 110 may specifically be a computer device such as a desktop terminal or a mobile terminal, and the mobile terminal may specifically include at least one of a mobile phone, a tablet computer, a notebook computer, and the like, but is not limited thereto. The server 120 may be implemented as an independent server or a server cluster composed of multiple servers.
FIG. 2 is an encoding framework diagram corresponding to a video encoding method provided in an embodiment. The video encoding method provided in this embodiment of this application may obtain each input video frame of an input video sequence, encode it to obtain corresponding encoded data, and store or send the encoded data, or store and send the encoded data, through a storage and sending unit 222. At a processing mode decision unit 202, a processing mode decision may be made on the input video frame to obtain the processing mode corresponding to the input video frame. At a first processing unit 204, the input video frame may be processed according to the processing mode to obtain a frame to be encoded. At a first prediction unit 206, intra prediction or inter prediction may be performed on each encoding block of the frame to be encoded at the resolution of the frame to be encoded, and a predicted value and a corresponding motion vector may be obtained according to the image value of the reference block corresponding to the encoding block; the actual value of the encoding block is subtracted from the predicted value to obtain a prediction residual, and the motion vector represents the displacement of the encoding block relative to the reference block. At a transform unit 208, the prediction residual and vector information in the spatial domain are transformed to the frequency domain, and transform coefficients may be encoded. The transform method may be a discrete Fourier transform, a discrete cosine transform, or the like; the vector information may be an actual motion vector representing the displacement, or a motion vector difference, which is the difference between the actual motion vector and a predicted motion vector.
At a quantization unit 210, the transformed data is mapped to another value; for example, a smaller value may be obtained by dividing the transformed data by a quantization step. The quantization parameter is the index corresponding to the quantization step, and the corresponding quantization step can be found according to the quantization parameter. With a small quantization parameter, most details of the image frame are retained, and the corresponding bit rate is high. With a large quantization parameter, the corresponding bit rate is low, but the image has larger distortion and lower quality. The principle of quantization is expressed by the formula FQ = round(y/Qstep), where y is the value corresponding to the video frame before quantization, Qstep is the quantization step, and FQ is the quantized value obtained by quantizing y. The round(x) function rounds the value half to even. The correspondence between the quantization parameter and the quantization step may be set as required. For example, in some video encoding standards, for luminance encoding, the quantization step has 52 values, which are integers from 0 to 51; for chrominance encoding, the quantization step takes integer values from 0 to 39; the quantization step increases as the quantization parameter increases, and each time the quantization parameter increases by 6, the quantization step doubles.
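The quantization rule above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the base step value in `qstep_from_qp` is an assumption used only to show the "doubles every 6" relationship.

```python
def quantize(y, qstep):
    """Quantize a transform value: FQ = round(y / Qstep), using
    round-half-to-even as described above (Python's round() does this)."""
    return round(y / qstep)

def qstep_from_qp(qp, base_step=0.625):
    """Hypothetical QP-to-step mapping: the quantization step doubles
    each time QP increases by 6; base_step is an assumed constant."""
    return base_step * (2 ** (qp / 6))
```

For example, `quantize(10, 4)` yields 2 because 2.5 rounds half to even, while `qstep_from_qp(12)` is four times `qstep_from_qp(0)`.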
An entropy encoding unit 220 is configured to perform entropy encoding. Entropy encoding is a data encoding method that encodes according to the entropy principle without losing any information, and can express a given amount of information with fewer characters. The entropy encoding method may be, for example, Shannon coding or Huffman coding.
A first inverse quantization unit 212, a first inverse transform unit 214, a first reconstruction unit 216, and a first reference information adaptation unit 218 are units corresponding to the reconstruction path. A reference frame is obtained by reconstructing frames using the units of the reconstruction path, which keeps reference frames consistent between encoding and decoding. The step performed by the first inverse quantization unit 212 is the inverse process of quantization; the step performed by the first inverse transform unit 214 is the inverse process of the transform performed by the transform unit 208; and the first reconstruction unit 216 is configured to add the residual data obtained by the inverse transform to the predicted data to obtain a reconstructed reference frame. The first reference information adaptation unit 218 is configured to perform, at the resolution of the frame to be encoded, adaptive processing on at least one of the reconstructed current reference frame, the position information corresponding to each encoding block of the frame to be encoded, the position information corresponding to each reference block of the current reference frame, and the motion vector, so that the first prediction unit 206 performs prediction according to the adapted reference information.
FIG. 3 is a decoding framework diagram corresponding to a video decoding method provided in an embodiment. In the video decoding method provided in this embodiment of this application, an encoded data obtaining unit 300 may obtain the encoded data corresponding to each video frame to be decoded of a video sequence to be decoded; after entropy decoding is performed by an entropy decoding unit 302, entropy-decoded data is obtained; a second inverse quantization unit 304 performs inverse quantization on the entropy-decoded data to obtain inverse-quantized data; a second inverse transform unit 306 performs an inverse transform on the inverse-quantized data to obtain inverse-transformed data, which may be consistent with the data obtained after the inverse transform by the first inverse transform unit 214 in FIG. 2. A resolution information obtaining unit 308 is configured to obtain the resolution information corresponding to the video frame to be decoded. A second reference information adaptation unit 312 is configured to obtain the current reference frame reconstructed by a second reconstruction unit, and to perform, according to the resolution information of the video frame to be decoded, adaptive processing on at least one of the current reference frame, the position information corresponding to each block to be decoded of the video frame to be decoded, the position information corresponding to each reference block of the current reference frame, and the motion vector, so that prediction is performed according to the adapted information. A second prediction unit 314 obtains the reference block corresponding to the block to be decoded according to the adapted reference information, and obtains, according to the image value of the reference block, a predicted value consistent with the predicted value in FIG. 2. A second reconstruction unit 310 performs reconstruction according to the predicted value and the inverse-transformed data, that is, the prediction residual, to obtain a reconstructed video frame. A second processing unit 316 processes the reconstructed video frame according to the resolution information corresponding to the video frame to be decoded to obtain the corresponding decoded video frame. A playback and storage unit 318 may play or store the decoded video frame, or play and store it.
It can be understood that the foregoing encoding framework diagram and decoding framework diagram are merely examples and do not constitute a limitation on the encoding method to which the solution of this application is applied. The specific encoding framework diagram and decoding framework diagram may include more or fewer units than shown in the figures, combine certain units, or have a different arrangement of component units. For example, loop filtering may also be performed on the reconstructed video frame to reduce blocking artifacts and improve video quality.
In the embodiments of this application, the end performing encoding is referred to as the encoding end, and the end performing decoding is referred to as the decoding end. The encoding end and the decoding end may be the same end or different ends, and the foregoing computer device, such as a terminal or a server, may be either the encoding end or the decoding end.
The frame to be encoded may be divided into multiple encoding blocks, and the size of an encoding block may be set or calculated as required. For example, the encoding blocks may all be 8*8 pixels. Alternatively, the rate-distortion costs corresponding to various encoding block division methods may be calculated, and the division method with a small rate-distortion cost may be selected for dividing the encoding blocks. FIG. 4 is a schematic diagram of the division of a 64*64 pixel image block, where one square represents one encoding block. As can be seen from FIG. 4, the size of an encoding block may include 32*32 pixels, 16*16 pixels, 8*8 pixels, and 4*4 pixels. Certainly, the size of an encoding block may also be other sizes, for example, 32*16 pixels or 64*64 pixels. It can be understood that during decoding, since encoding blocks correspond one-to-one to blocks to be decoded, the pixel size of a block to be decoded may also include 32*32 pixels, 16*16 pixels, 8*8 pixels, 4*4 pixels, and so on.
As shown in FIG. 5, in an embodiment, a video encoding method is provided. This embodiment is mainly described using an example in which the method is applied to the terminal 110 or the server 120 in FIG. 1. The method may specifically include the following steps:
Step S502: Obtain an input video sequence.
Specifically, the input video sequence may include multiple input video frames. A video frame is a unit that constitutes a video. The input video sequence may be a video sequence collected by the computer device in real time, for example, a video sequence obtained in real time through the camera of a terminal, or a video sequence pre-stored by the computer device. The encoded frame prediction type corresponding to each input video frame in the input video sequence may be an I frame, a B frame, a P frame, or the like, and the encoded frame prediction type corresponding to an input video frame may be determined according to the encoding algorithm. An I frame is an intra-predicted frame, a P frame is a forward-predicted frame, and a B frame is a bidirectionally predicted frame; each encoding block of a P frame or a B frame may be encoded using intra prediction or inter prediction.
Step S504: Obtain, from candidate video sequence encoding modes, a target video sequence encoding mode corresponding to the input video sequence, where the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode.
Specifically, the constant resolution encoding mode means that the frames to be encoded corresponding to the input video sequence are encoded at the same resolution, for example, full resolution; full-resolution encoding means encoding while keeping the resolution of the input video frame unchanged. It can be understood that since the resolutions of input video frames in the same video sequence are generally the same, the full-resolution encoding mode here is one type of constant resolution encoding mode. Certainly, each full-resolution input video frame of the input video sequence may also be sampled at the same sampling ratio to obtain video frames with the same resolution. The mixed resolution encoding mode means that the resolution of the frames to be encoded corresponding to the input video sequence is adaptively adjusted; that is, the frames to be encoded corresponding to the input video sequence may have different resolutions. A frame to be encoded is a video frame directly used for encoding. The method by which the computer device obtains, from the candidate video sequence encoding modes, the target video sequence encoding mode corresponding to the input video sequence may be set as required. For example, assuming that multiple input video sequences need to be encoded, constant resolution encoding may be performed on one or more of the input video sequences, and mixed resolution encoding may be performed on the other input video sequences.
In an embodiment, obtaining the target video sequence encoding mode corresponding to the input video sequence includes: obtaining current environment information, where the current environment information includes at least one of current encoding environment information and current decoding environment information; and determining the target video sequence encoding mode corresponding to the input video sequence according to the current environment information.
Specifically, the environment information may include one or more of the processing capability of the device performing the video encoding method, the processing capability of the device performing the video decoding method, and current application scenario information. The processing capability may be represented by processing speed. For example, for a device with strong processing capability, since the processing speed is fast, the corresponding target video sequence encoding mode may be the full-resolution encoding mode. When the current application scenario corresponding to the current application scenario information is a real-time application scenario, the target video sequence encoding mode is the mixed resolution encoding mode. When the current application scenario corresponding to the current application scenario information is a non-real-time application scenario, the video sequence encoding mode is the constant resolution encoding mode. The correspondence between current environment information and video sequence encoding modes may be set; after the current environment information is obtained, the target video sequence encoding mode corresponding to the input video sequence is obtained according to this correspondence. For example, the correspondence between the video sequence encoding mode and the average of the processing speed of the device performing the video encoding method and the processing speed of the device performing the video decoding method may be set. After the processing speed of the device performing the video encoding method and the processing speed of the device performing the video decoding method are obtained, the average of these two processing speeds is calculated, and the target video sequence encoding mode is obtained according to the average. Whether the current application scenario is a real-time application scenario may be set as required. For example, a video call application scenario and an online game application scenario are real-time application scenarios, while the application scenarios corresponding to video encoding on a video website and encoding of offline video may be non-real-time application scenarios.
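The mode decision described above can be sketched as follows. The mode names and the speed threshold are illustrative assumptions; the embodiment only specifies the decision direction (fast devices and non-real-time scenarios lean toward constant resolution, real-time scenarios toward mixed resolution).

```python
CONSTANT_RESOLUTION = "constant_resolution"  # assumed label
MIXED_RESOLUTION = "mixed_resolution"        # assumed label

def mode_from_scenario(is_realtime):
    """Real-time scenarios (e.g. video call, online game) use mixed
    resolution; non-real-time scenarios use constant resolution."""
    return MIXED_RESOLUTION if is_realtime else CONSTANT_RESOLUTION

def mode_from_speeds(encoder_speed, decoder_speed, threshold=100.0):
    """Average the encoder and decoder processing speeds; a fast
    pair of devices can afford constant (full) resolution.
    The threshold value is an assumption."""
    average = (encoder_speed + decoder_speed) / 2
    return CONSTANT_RESOLUTION if average >= threshold else MIXED_RESOLUTION
```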
Step S506: Encode each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
Specifically, when the target video sequence encoding mode is the constant resolution encoding mode, the computer device performs constant resolution encoding on each input video frame of the input video sequence. When the target video sequence encoding mode is the mixed resolution encoding mode, the computer device performs mixed resolution encoding on the input video sequence; that is, the frames to be encoded corresponding to the input video sequence may have different resolutions, and encoding needs to be performed according to the resolution information corresponding to the input video frame.
In the foregoing video encoding method, during video encoding, an input video sequence is obtained, and a target video sequence encoding mode corresponding to the input video sequence is obtained from candidate video sequence encoding modes, where the candidate video sequence encoding modes include a constant resolution encoding mode and a mixed resolution encoding mode; each input video frame of the input video sequence is encoded according to the target video sequence encoding mode to obtain encoded data. Therefore, the target video sequence encoding mode for the input video sequence can be flexibly selected, the input video sequence can be encoded according to the target video sequence encoding mode, and the encoding mode of the input video sequence can be adaptively adjusted, which can improve video encoding quality under limited-bandwidth conditions.
In an embodiment, step S506, that is, encoding each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data, includes: adding target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data.
Specifically, the target video sequence encoding mode information is used to describe the encoding mode adopted for the input video sequence. The computer device may add a flag Sequence_Mix_Resolution_Flag describing the target video sequence encoding mode to the encoded data, and the specific value of the flag may be set as required. The position where the video sequence encoding mode information is added to the encoded data may be the sequence-level header information. For example, when Sequence_Mix_Resolution_Flag is 1, the corresponding target video sequence encoding mode may be the mixed resolution encoding mode; when Sequence_Mix_Resolution_Flag is 0, the corresponding target video sequence encoding mode may be the constant resolution encoding mode.
In an embodiment, the video encoding framework is shown in FIG. 6. The video encoding framework includes a constant resolution encoding framework and a mixed resolution encoding framework; the mixed resolution encoding framework may correspond to the encoding framework in FIG. 2. After the input video sequence is obtained, a decision on the video sequence encoding mode is made at the video sequence encoding mode obtaining module. When the target video sequence encoding mode is the mixed resolution encoding mode, the mixed resolution encoding framework is used for encoding. When the target video sequence encoding mode is the constant resolution encoding mode, constant resolution encoding is performed using the constant resolution encoding framework of FIG. 6. The constant resolution encoding framework may be an HEVC encoding framework, an H.265 encoding framework, or the like.
In an embodiment, as shown in FIG. 7A, step S506, that is, encoding each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data, includes:
Step S702: When the target video sequence encoding mode is the mixed resolution encoding mode, obtain the processing mode corresponding to the input video frame.
Specifically, the input video frame is a video frame in the input video sequence. The processing mode corresponding to the input video frame is selected by the computer device from candidate processing modes, and the candidate processing modes may include a full-resolution processing mode and a downsampling processing mode. The method by which the computer device obtains the processing mode corresponding to the input video frame may be set according to actual needs. For example, a processing parameter corresponding to the input video frame may be obtained, and the corresponding processing mode may be obtained according to the processing parameter. The processing parameter is a parameter used to determine the processing mode, and the specific processing parameter adopted may be set as required. For example, the processing parameter may include at least one of current encoding information and image features corresponding to the input video frame.
In an embodiment, when the processing mode corresponding to the input video frame includes the downsampling processing mode, the downsampling ratio and the downsampling method may also be obtained. The downsampling ratio is the ratio obtained by dividing the resolution after sampling by the resolution before sampling. The downsampling method may be direct averaging, a filter, bicubic interpolation, bilinear interpolation, or the like. The downsampling ratio may be preset, or may be adjusted flexibly. For example, the downsampling ratios may all be set to 1/2; or the downsampling ratio of the first input video frame of the input video sequence may be 1/2 and the downsampling ratio of the second input video frame may be 1/4. The downsampling ratio may be obtained according to the encoding position of the input video frame in the video group, where the later the encoding position, the smaller the downsampling ratio. The downsampling direction may be one of vertical downsampling, horizontal downsampling, and a combination of vertical and horizontal downsampling. For example, if the resolution of the video frame before sampling is 800*800 pixels, when the downsampling ratio is 1/2 and horizontal downsampling is performed, the resolution of the sampled video frame is 400*800 pixels; when the downsampling ratio is 1/2 and vertical downsampling is performed, the resolution of the sampled video frame is 800*400 pixels.
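The directional downsampling arithmetic above can be sketched in one small helper; the function name and direction labels are illustrative, not from the patent.

```python
def downsampled_resolution(width, height, ratio, direction):
    """Resolution after downsampling by `ratio` in the given
    direction: 'horizontal', 'vertical', or 'both'."""
    w = int(width * ratio) if direction in ("horizontal", "both") else width
    h = int(height * ratio) if direction in ("vertical", "both") else height
    return (w, h)
```

This reproduces the 800*800 examples: horizontal 1/2 downsampling gives 400*800, vertical gives 800*400.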
In an embodiment, the downsampling ratio may be obtained according to the processor capability of the device performing the video encoding method, such as a terminal or a server. A device with strong processor capability corresponds to a large downsampling ratio, and a device with weak processor capability corresponds to a small downsampling ratio. The correspondence between processor capability and downsampling ratio may be set; when encoding is needed, the processor capability is obtained, and the corresponding downsampling ratio is obtained according to the processor capability. For example, the downsampling ratio corresponding to a 16-bit processor may be set to 1/8, and the downsampling ratio corresponding to a 32-bit processor may be set to 1/4.
In an embodiment, the downsampling ratio may be obtained according to the frequency or number of times the input video frame is used as a reference frame, and the correspondence between the downsampling ratio and this frequency or number of times may be set. If the input video frame is used as a reference frame frequently or many times, the downsampling ratio is large; if it is used as a reference frame infrequently or few times, the downsampling ratio is small. For example, an I frame is used as a reference frame frequently, so the corresponding downsampling ratio is large and may be 1/2; a P frame is used as a reference frame infrequently, so the corresponding downsampling ratio is small and may be, for example, 1/4. By obtaining the downsampling ratio according to the frequency or number of times the input video frame is used as a reference frame, when the input video frame is used as a reference frame frequently or many times, the image quality is better; therefore, prediction accuracy can be improved, the prediction residual can be reduced, and the quality of the encoded image can be improved.
In an embodiment, the downsampling method may be obtained according to the processor capability of the device performing the video encoding method, such as a terminal or a server. The downsampling method corresponding to a device with strong processor capability has high complexity, and the downsampling method corresponding to a device with weak processor capability has low complexity. The correspondence between processor capability and downsampling method may be set; when encoding is needed, the processor capability is obtained, and the corresponding downsampling method is obtained according to the processor capability. For example, bicubic interpolation has higher complexity than bilinear interpolation, so the downsampling method corresponding to a 16-bit processor may be set to bilinear interpolation, and the downsampling method corresponding to a 32-bit processor may be set to bicubic interpolation.
In this embodiment of this application, when the input video frame is processed using the downsampling processing mode, downsampling may also be performed according to different downsampling methods or downsampling ratios, making the processing of the input video frame more flexible.
In an embodiment, the computer device may obtain the processing mode corresponding to the input video frame according to at least one of the current encoding information and the image feature information corresponding to the input video frame. The current encoding information refers to video compression parameter information obtained when the video is encoded, such as one or more of frame prediction type, motion vector, quantization parameter, video source, bit rate, frame rate, and resolution. The image feature information refers to information related to image content, including one or more of image motion information and image texture information, such as edges. The current encoding information and the image feature information reflect the scene, detail complexity, or motion intensity corresponding to the video frame. For example, a motion scene can be determined through one or more of the motion vector, the quantization parameter, and the bit rate: a large quantization parameter generally indicates intense motion, and a large motion vector indicates that the image scene is a large motion scene. A determination may also be made according to the ratio of the bit rates of an encoded I frame and an encoded P frame, or of an encoded I frame and an encoded B frame: if the ratio exceeds a first preset threshold, the image is determined to be a still image; if the ratio is less than a second preset threshold, the image may be determined to have intense motion. Alternatively, a target object may be tracked directly according to the image content, and whether the scene is a large motion scene is determined according to the motion speed of the target object. At a given bit rate, the amount of information that can be expressed is fixed; for a scene with intense motion, the amount of temporal-domain information is large, and accordingly the bit rate available for expressing spatial-domain information is small, so using a low resolution can achieve a better image quality effect, and the downsampling mode is more likely to be selected for encoding. A scene switch can be determined through the frame prediction type, and the preferred processing mode can also be determined according to the influence of the frame prediction type on other frames. For example, an I frame is generally the first frame or marks a scene switch, and the quality of the I frame affects the quality of subsequent P frames or B frames, so an intra-predicted frame is more inclined to select the full-resolution processing mode than an inter-predicted frame, to ensure image quality. Because a P frame can be used as the reference frame of a B frame and its image quality affects the image quality of subsequent B frames, encoding with a P frame is more inclined to select the full-resolution processing mode than encoding with a B frame. The texture complexity of the video frame to be encoded is determined through image feature information such as image texture information: if the texture is complex and contains many details, the image carries much spatial information, and downsampling may lose much detail information and affect video quality, so a video frame to be encoded with complex texture is more inclined to select full-resolution processing than one with simple texture.
In an embodiment, the computer device may obtain the processing mode corresponding to the input video frame according to the magnitude relationship between the current quantization parameter corresponding to the input video frame and a quantization parameter threshold. If the current quantization parameter is greater than the quantization parameter threshold, the computer device determines that the processing mode is the downsampling mode; otherwise, it determines that the processing mode is the full-resolution processing mode. The quantization parameter threshold may be obtained according to the proportion of intra-coded blocks of the forward-encoded video frames encoded before the input video frame. The correspondence between the intra-prediction block proportion and the quantization parameter threshold may be preset, so that after the intra-prediction block proportion of the current frame is determined, the computer device may determine, according to the correspondence, the quantization parameter threshold corresponding to the intra-prediction block proportion of the current frame. For fixed quantization parameter encoding, the current quantization parameter may be the corresponding fixed quantization parameter value. For fixed bit rate encoding, the computer device may calculate the current quantization parameter corresponding to the input video frame according to the bit rate control model. Alternatively, the computer device may use the quantization parameter corresponding to the reference frame as the current quantization parameter corresponding to the input video frame. In this embodiment of this application, a larger current quantization parameter generally indicates more intense motion, and the downsampling processing mode is more likely to be selected for scenes with intense motion.
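The QP-threshold decision just described is a simple comparison; a minimal sketch (mode labels are illustrative):

```python
def decide_processing_mode(current_qp, qp_threshold):
    """Downsample when the current quantization parameter exceeds the
    threshold (implying intense motion); otherwise keep full resolution."""
    return "downsample" if current_qp > qp_threshold else "full_resolution"
```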
In an embodiment, the relationship between the intra-prediction block proportion and the quantization parameter threshold is a positive correlation. For example, the correspondence between the intra-prediction block proportion Intra_0 and the quantization parameter threshold QP_TH may be predetermined as:
Step S704: Process the input video frame according to the processing mode to obtain the frame to be encoded, where the resolution of the frame to be encoded corresponding to the processing mode is the resolution of the input video frame or smaller than the resolution of the input video frame.
Specifically, the frame to be encoded is obtained by processing the input video frame according to the processing mode. When the processing mode includes the full-resolution processing mode, the computer device may use the input video frame as the frame to be encoded. When the processing mode includes the downsampling processing mode, the computer device may perform downsampling processing on the input video frame to obtain the frame to be encoded. For example, when the resolution of the input video frame is 800*800 pixels and the processing mode is 1/2 downsampling in both the horizontal and vertical directions, the resolution of the frame to be encoded obtained by downsampling is 400*400 pixels.
Step S706: Encode the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame.
Specifically, encoding may include at least one of prediction, transform, quantization, and entropy encoding. When the frame to be encoded is an I frame, the computer device performs intra prediction on the frame to be encoded at the resolution of the frame to be encoded. When the frame to be encoded is a P frame or a B frame, the current reference frame corresponding to the frame to be encoded may be obtained, prediction is performed according to the current reference frame to obtain a prediction residual, and transform, quantization, and entropy encoding are performed on the prediction residual to obtain the encoded data corresponding to the input video frame. In the process of obtaining the encoded data, at least one of the current reference frame, the position information corresponding to each encoding block of the frame to be encoded, the position information corresponding to each reference block of the current reference frame, and the motion vector is processed according to the resolution of the frame to be encoded. For example, when calculating the prediction residual, the current reference frame may be processed according to the resolution information of the frame to be encoded to obtain the target reference frame; the target reference block corresponding to each encoding block in the frame to be encoded is obtained from the target reference frame; prediction is performed according to the target reference block to obtain the predicted value corresponding to the encoding block; and the prediction residual is then obtained according to the difference between the actual value of the encoding block and the predicted value. When calculating the target motion vector, if the resolution of the current reference frame is different from the resolution of the frame to be encoded, the position information of the encoding block or the position information of the decoding block may be transformed according to the resolution information of the current reference frame and the frame to be encoded, so that the position information corresponding to the frame to be encoded and the position information of the current reference frame are at the same quantization scale; the target motion vector is then obtained according to the transformed position information, so as to reduce the value of the target motion vector and reduce the data amount of the encoded data. Alternatively, if the resolution information corresponding to the target motion vector is different from the resolution information of the frame to be encoded, when the first motion vector corresponding to the encoding block of the frame to be encoded is calculated at the resolution of the frame to be encoded, the first motion vector is transformed according to the resolution information of the frame to be encoded and the target motion vector unit resolution information to obtain the target motion vector at the target resolution. For example, assume that the resolution of the frame to be encoded is 400*800 pixels and the resolution of the current reference frame is 800*1600 pixels. Then the current reference frame may be downsampled by 1/2 according to the resolution of the frame to be encoded, so that the resolution of the target reference frame is 400*800 pixels, and video encoding is then performed according to the target reference frame.
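The motion vector transformation between resolutions described above can be sketched as a per-axis scaling. This is an illustrative sketch, not the patent's implementation; the function and parameter names are assumptions, and the ratio of the two resolutions plays the role of the vector transformation parameter.

```python
from fractions import Fraction

def scale_motion_vector(mv, frame_resolution, target_unit_resolution):
    """Scale a motion vector computed at the frame-to-encode
    resolution into target motion vector unit resolution.
    mv and the resolutions are (x, y) / (width, height) pairs."""
    sx = Fraction(target_unit_resolution[0], frame_resolution[0])
    sy = Fraction(target_unit_resolution[1], frame_resolution[1])
    return (int(mv[0] * sx), int(mv[1] * sy))
```

With the 400*800 frame and an 800*1600 target unit resolution from the example, a vector (6, 10) becomes (12, 20); the inverse scaling recovers it.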
In this embodiment of this application, after the processing mode corresponding to the input video frame is obtained, the input video frame may be processed according to the processing mode to obtain the frame to be encoded, where the resolution of the frame to be encoded corresponding to the processing mode is the resolution of the input video frame or smaller than that resolution, and the frame to be encoded is encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame. In this way, the processing mode of the video frame can be flexibly selected to process the input video frame, and the resolution of the input video frame can be adaptively adjusted to adjust the data amount of the data to be encoded; moreover, since encoding is performed at the resolution of the frame to be encoded, accurate encoded data can be obtained.
In an embodiment, step S706, that is, encoding the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame, includes: adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame.
Specifically, the processing mode information is used to describe the processing mode adopted for the input video frame. A flag Frame_Resolution_Flag describing the processing mode may be added to the encoded data; that is, a syntax element describing the processing mode information is added to the encoded data. The value of the flag corresponding to each processing mode may be set as required. For example, when the processing mode is the full-resolution processing mode, the corresponding Frame_Resolution_Flag may be 0; when the processing mode is the downsampling processing mode, the corresponding Frame_Resolution_Flag may be 1.
In an embodiment, the processing mode information may be added to the frame-level header information corresponding to the encoded data, for example, to a preset position in the frame-level header information. The frame-level header information is the header information of the encoded data corresponding to the input video frame, the sequence-level header information refers to the header information of the encoded data corresponding to the video sequence, and the group-level header information refers to the header information of the encoded data corresponding to a group of pictures (GOP). A video frame sequence may include multiple video groups, and a video group may include multiple video frames. The boxes indicated by dashed lines in FIG. 7B represent the frame-level header information of the encoded data corresponding to each input video frame, and "frame" respectively represents the encoded data corresponding to the first, second, and nth video frames. In FIG. 7B, the processing mode corresponding to the first input video frame and the second input video frame is the full-resolution processing mode, and the processing mode corresponding to the third input video frame is the downsampling processing mode.
In an embodiment, downsampling processing mode information about the downsampling of the input video frame may also be added to the encoded data corresponding to the input video frame, so that when obtaining the encoded data, the decoding end can obtain, according to the downsampling processing mode information, the corresponding method and ratio for upsampling the reconstructed video frame. The downsampling processing mode information includes at least one of downsampling method information and downsampling ratio information. The position where the downsampling method information is added to the encoded data may be one of the corresponding group-level header information, sequence-level header information, and frame-level header information, and may be determined according to the scope of action corresponding to the downsampling method. The position where the downsampling ratio information is added to the encoded data may be any one of the corresponding group-level header information, sequence-level header information, and frame-level header information, and may be determined according to the scope of action corresponding to the downsampling ratio, where the scope of action refers to the applicable scope. For example, if the scope of action of the downsampling ratio is a video group, the downsampling ratio information corresponding to that video group may be added to the header information corresponding to that video group. If the scope of action of the downsampling ratio is the video sequence, the downsampling ratio information is added to the sequence-level header information corresponding to the video sequence, indicating that each video frame of the video sequence is downsampled at the downsampling ratio corresponding to the downsampling ratio information.
In an embodiment, obtaining the processing mode corresponding to the input video frame includes: obtaining a processing parameter corresponding to the input video frame, and determining the processing mode corresponding to the input video frame according to the processing parameter. Adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame includes: when the processing parameter cannot be reproduced in the decoding process, adding the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame.
Specifically, the processing parameter may include at least one of image encoding information and image feature information corresponding to the input video frame. That the processing parameter cannot be reproduced in the decoding process means that the processing parameter cannot be obtained or is not generated during decoding. For example, if the processing parameter is information corresponding to the image content of the input video frame, and image information is lost during encoding, the decoded video frame at the decoding end differs from the input video frame; therefore, information corresponding to the image content of the input video frame is not obtained during decoding, that is, information corresponding to the image content cannot be reproduced in the decoding process. The rate-distortion cost needs to be calculated during encoding but is not calculated during decoding; therefore, when the processing parameter includes the rate-distortion cost, the processing parameter cannot be reproduced in the decoding process. PSNR (Peak Signal to Noise Ratio) information between the reconstructed video frame and the input video frame obtained during encoding cannot be obtained during decoding, so PSNR information cannot be reproduced in the decoding process.
In an embodiment, processing parameters such as the number of intra-coded blocks and the number of inter-coded blocks corresponding to the input video frame can be obtained at the decoding end, that is, can be reproduced. When the processing parameter can be reproduced at the decoding end, the computer device may add the processing mode information corresponding to the processing mode to the encoded data corresponding to the input video frame, or may omit it. When the processing mode information is added to the encoded data corresponding to the input video frame, the decoding end can read the processing mode information from the encoded data without needing to derive the processing mode from the processing parameter. When the processing mode information is not added to the encoded data corresponding to the input video frame, the decoding device (that is, the decoding end) determines, according to the processing parameter, a processing mode consistent with that of the encoding end, which can reduce the data amount of the encoded data.
In an embodiment, as shown in FIG. 8, step S706, that is, encoding the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame, includes:
Step S802: Obtain the current reference frame corresponding to the frame to be encoded.
Specifically, the current reference frame is the video frame that needs to be referenced when encoding the frame to be encoded, and is a video frame reconstructed from data encoded before the frame to be encoded. There may be one or more current reference frames corresponding to the frame to be encoded. For example, when the frame to be encoded is a P frame, there may be one corresponding reference frame; when the frame to be encoded is a B frame, there may be two corresponding reference frames. The reference frame corresponding to the frame to be encoded may be obtained according to a reference relationship, and the reference relationship is determined according to each video encoding and decoding standard. For example, for the second video frame in a group of pictures (GOP), which is a B frame, the corresponding reference frames may be the I frame of the video group and the video frame obtained by encoding and then decoding and reconstructing the 4th frame of the video group.
In an embodiment, obtaining the current reference frame corresponding to the frame to be encoded includes: obtaining a first reference rule, where the first reference rule includes a resolution magnitude relationship between the frame to be encoded and the current reference frame; and obtaining the current reference frame corresponding to the frame to be encoded according to the first reference rule.
Specifically, the first reference rule determines the resolution magnitude restriction relationship between the frame to be encoded and the current reference frame, and the resolution magnitude relationship includes at least one of the resolutions of the frame to be encoded and the current reference frame being the same or being different. When the first reference rule includes that the resolutions of the frame to be encoded and the current reference frame are the same, the first reference rule may further include a processing-mode reference rule for the resolutions of the frame to be encoded and the current reference frame. For example, the processing-mode reference rule may include one or both of the following: a frame to be encoded in the full-resolution processing mode may reference a reference frame in the full-resolution processing mode, and a frame to be encoded in the downsampling processing mode may reference a reference frame in the downsampling processing mode. When the first reference rule includes that the resolutions of the frame to be encoded and the current reference frame are different, the first reference rule may further include one or both of the following: the resolution of the frame to be encoded is greater than the resolution of the current reference frame, and the resolution of the frame to be encoded is smaller than the resolution of the current reference frame. Therefore, in an embodiment, the first reference rule may specifically include one or more of the following: an original-resolution frame to be encoded may reference a downsampled-resolution reference frame, a downsampled-resolution frame to be encoded may reference an original-resolution reference frame, an original-resolution frame to be encoded may reference an original-resolution reference frame, and a downsampled-resolution frame to be encoded may reference a downsampled-resolution reference frame. An original-resolution frame to be encoded means that the resolution of the frame to be encoded is the same as the resolution of its corresponding input video frame, and an original-resolution reference frame means that the resolution of the reference frame is the same as the resolution of its corresponding input video frame. A downsampled-resolution frame to be encoded means that the frame to be encoded is obtained by downsampling the corresponding input video frame, and a downsampled-resolution reference frame means that the reference frame is obtained by downsampling the corresponding input video frame. After the first reference rule is obtained, the current reference frame corresponding to the frame to be encoded is obtained according to the first reference rule, so that the obtained current reference frame satisfies the first reference rule.
In an embodiment, step S706, that is, encoding the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame, includes: adding rule information corresponding to the first reference rule to the encoded data corresponding to the input video frame.
Specifically, the rule information is used to describe the reference rule adopted, and the computer device may add a flag Resolution_Referencer_Rules describing the reference rule to the encoded data. The reference rule represented by each specific value of the flag may be set as required. The position where the rule information is added to the encoded data may be one or more of the group-level header information, the sequence-level header information, and the frame-level header information, and may be determined according to the scope of action of the first reference rule. For example, when the first reference rule is that an original-resolution frame to be encoded may reference a downsampled-resolution reference frame, the corresponding Resolution_Referencer_Rules may be 1; when the first reference rule is that a downsampled-resolution frame to be encoded may reference a downsampled-resolution reference frame, the corresponding Resolution_Referencer_Rules may be 2. If the video sequence adopts the same first reference rule, the position where the rule information is added to the encoded data may be the sequence-level header information. If the first reference rule is a reference rule adopted by one of the video groups, the position where the rule information is added to the encoded data is the group-level header information corresponding to the video group that adopts the first reference rule.
Step S804: Encode the frame to be encoded according to the current reference frame at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame.
Specifically, the computer device may obtain the current reference frame corresponding to the frame to be encoded, perform prediction according to the current reference frame to obtain a prediction residual, and perform transform, quantization, and entropy encoding on the prediction residual to obtain the encoded data corresponding to the input video frame. In the process of obtaining the encoded data, the computer device processes at least one of the current reference frame, the position information corresponding to each encoding block of the frame to be encoded, the position information corresponding to each reference block of the current reference frame, and the motion vector according to the resolution of the frame to be encoded. After obtaining the current reference frame, the computer device may obtain, from the current reference frame, the reference block corresponding to the encoding block of the frame to be encoded, and encode the encoding block according to the reference block. The computer device may also process the current reference frame according to the resolution of the frame to be encoded to obtain the corresponding target reference frame, obtain, from the target reference frame, the target reference block corresponding to the encoding block of the frame to be encoded, and encode the encoding block according to the target reference block to obtain the encoded data corresponding to the input video frame.
In an embodiment, encoding the frame to be encoded at the resolution of the frame to be encoded to obtain the encoded data corresponding to the input video frame includes: obtaining, at the resolution of the frame to be encoded, the encoding mode corresponding to encoding the frame to be encoded; and adding the encoding mode information corresponding to the encoding mode to the encoded data corresponding to the input video frame.
Specifically, the encoding mode is a processing mode related to encoding. For example, it may include one or more of the upsampling mode adopted for the video frame obtained after decoding and reconstructing the reference frame during encoding, the rule corresponding to the reference rule, the sampling mode for sampling the reference frame, and the resolution corresponding to the motion vector. By adding the encoding mode information corresponding to the encoding mode to the encoded data corresponding to the input video frame, the encoded data corresponding to the video frame to be decoded can be decoded according to the encoding mode information during decoding.
In an embodiment, the encoding mode information corresponding to the encoding mode may also be omitted from the encoded data. Instead, the encoding mode is preset in the encoding and decoding standard, and the decoding mode corresponding to the encoding mode is set at the decoding end. Alternatively, the encoding end and the decoding end may calculate a matching encoding mode and decoding mode according to the same or corresponding algorithms. For example, it is preset in the encoding and decoding standard that the method for upsampling the current reference frame during encoding is the same as the method for upsampling the current reference frame during decoding.
在一个实施例中,如图9A所示,步骤S804即根据当前参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据包括:
步骤S902,根据待编码帧的分辨率信息对当前参考帧进行采样处理,得到对应的目标参考帧。
具体地,目标参考帧,是对当前参考帧进行采样处理后得到的视频帧。采样处理是通过待编码帧的分辨率信息对当前参考帧进行采样,使得到的目标参考帧的分辨率信息匹配的过程。在进行采样处理时,计算机设备可以先确定采样方式,采样方式包括直接分像素插值方式和采样后分像素插值方式中的一种。直接分像素插值方式直接对当前参考帧进行分像素插值处理,采样后分像素插值方式对当前参考帧进行采样处理后再分像素插值处理。
分像素插值是通过当前参考帧中整像素的参考数据插值得到分像素级别的参考数据的过程。比如,如图9B、9C所示,为一个实施例中对当前参考帧进行插值的示意图。参照图9B,A1、A2、A3、B1、B2、B3 等像素点为当前参考帧中的2*2整像素点,根据这些整像素点的参考数据计算得到分像素点的参考数据,比如,可根据A1、A2、A3三个整像素点的参考数据取平均值计算得到分像素点a23的参考数据,根据A2、B2、C2三个整像素点的参考数据取平均值计算得到分像素点a21的参考数据,再根据分像素点a23、a21的参考数据计算得到分像素点a22的参考数据,实现对当前参考帧进行1/2像素精度插值。参照图9C,A1、A2、A3、B1、B2、B3等像素点为当前参考帧中的4*4整像素点,根据这些整像素点的参考数据计算得到15个分像素点参考数据,实现对当前参考帧进行1/4像素精度插值。比如,根据A2、B2整像素点的参考数据计算得到分像素点a8的参考数据,根据A2、A3整像素点的参考数据计算得到分像素点a2的参考数据,同理计算得到a1至a15共15个分像素点的参考数据,实现对整像素点A2的1/4像素精度插值。
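上述取平均插值的思路可以用如下示意性代码说明(仅为按相邻整像素取平均的假设性最小实现,函数名为自拟,并非标准中规定的插值滤波器):

```python
def half_pel_interpolate(frame):
    """对整像素帧做 1/2 像素精度插值(相邻整像素取平均的简化示意)。

    输入为 H*W 的整像素矩阵,输出为 (2H-1)*(2W-1) 的矩阵,
    偶数行列位置保留整像素,其余位置为插值得到的分像素。
    """
    h, w = len(frame), len(frame[0])
    out = [[0.0] * (2 * w - 1) for _ in range(2 * h - 1)]
    for i in range(h):
        for j in range(w):
            out[2 * i][2 * j] = frame[i][j]  # 整像素位置直接复制
    for i in range(2 * h - 1):
        for j in range(2 * w - 1):
            if i % 2 == 0 and j % 2 == 1:    # 水平方向分像素:左右整像素取平均
                out[i][j] = (out[i][j - 1] + out[i][j + 1]) / 2
            elif i % 2 == 1 and j % 2 == 0:  # 垂直方向分像素:上下整像素取平均
                out[i][j] = (out[i - 1][j] + out[i + 1][j]) / 2
    for i in range(1, 2 * h - 1, 2):         # 对角分像素:再对左右分像素取平均
        for j in range(1, 2 * w - 1, 2):
            out[i][j] = (out[i][j - 1] + out[i][j + 1]) / 2
    return out
```

例如对 2*2 的整像素块插值后即得到 3*3 的 1/2 像素精度参考数据。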
在对待编码帧进行编码的过程中,需要在当前参考帧中采用运动搜索技术找到与待编码帧中编码块相应的参考块,根据编码块相对于参考块的运动信息计算得到运动矢量,对运动矢量进行编码以告知解码端参考块对应的参考数据在当前参考帧中的位置,因而,通过对当前参考帧进行分像素插值处理得到目标参考帧,待编码帧就可以根据分辨率更高的目标参考帧进行运动估计,从而提高运动估计的准确度,提升编码质量。
在一个实施例中,编码端和解码端可在各自的编解码规则中设置根据当前参考帧进行处理得到目标参考帧时所采用的采样方式,采用的采样方式应当是一致的,在编、解码时就根据设置确定对当前参考帧进行处理所对应的采样方式。
在一个实施例中,在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据包括:将对当前参考帧进行采样处理对应的采样方式信息添加至当前参考帧对应的编码数据中。当前参考帧进行 采样处理对应的采样方式信息在编码数据中的添加位置可以是对应的序列级头信息、组级头信息以及帧级头信息中的任一个。采样方式信息在编码数据中的添加位置可以根据采样方式对应的作用范围确定。可将采样方式信息添加至输入视频帧对应的编码数据的帧级头信息中,表示输入视频帧在被编码时对应的当前参考帧采用采样方式信息对应的采样方式进行分像素插值处理。例如,当编码数据的帧级头信息中用于确定采样方式的标识位Pixel_Sourse_Interpolation为0时,表示输入视频帧对应的当前参考帧采用直接进行分像素插值处理,在Pixel_Sourse_Interpolation为1时,表示输入视频帧对应的当前参考帧采用采样处理后再分像素插值处理。解码端就可按照编码数据中的标识位所表示的分像素插值方式对当前参考帧进行分像素插值处理得到目标参考帧,从而可依据目标参考帧对编码数据进行解码得到重建视频帧。
在一个实施例中,计算机设备可根据待编码帧的分辨率与当前参考帧的分辨率之间的比例关系确定对当前参考帧进行采样的比例。比如,输入视频帧的分辨率均为2M*2N,通过对当前输入视频帧按照全分辨率处理方式进行处理,即直接将当前输入视频帧作为待编码帧,则待编码帧的分辨率为2M*2N,而对可作为参考帧的输入视频帧按照下采样处理方式进行处理,得到下采样后的当前待编码参考帧的分辨率为M*2N,则重建后得到的相应的当前参考帧的分辨率也为M*2N,那么就确定对当前参考帧以宽2、高1的采样比例进行上采样处理,得到与待编码帧分辨率相同的帧。若通过对当前输入视频帧按照下采样处理方式进行处理,下采样后得到的待编码帧的分辨率为M*N,而对可作为参考帧的输入视频帧按照全分辨率处理方式进行处理,那么重建后得到的当前参考帧的分辨率为2M*2N,则确定对当前参考帧以宽、高均为1/2的采样比例进行下采样处理,得到与待编码帧分辨率相同的帧。
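上述按分辨率比例确定采样比例的逻辑可示意如下(假设性草稿,函数与参数名为自拟):

```python
def sampling_ratio(frame_res, ref_res):
    """根据待编码帧与当前参考帧的宽高计算对当前参考帧的采样比例。

    返回 (宽方向比例, 高方向比例):大于 1 表示上采样,小于 1 表示下采样。
    """
    fw, fh = frame_res
    rw, rh = ref_res
    return fw / rw, fh / rh
```

例如待编码帧为2M*2N、当前参考帧为M*2N时返回(2.0, 1.0),即按宽2、高1进行上采样;待编码帧为M*N、当前参考帧为2M*2N时返回(0.5, 0.5),即按宽、高均为1/2进行下采样。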
在一个实施例中,由于输入视频帧的分辨率一般是相同的,计算机设备可根据输入视频帧进行下采样得到待编码帧所对应的下采样比例,以及对可作为参考帧的输入视频帧进行下采样得到待编码参考帧所对应的下采样比例,确定对当前参考帧进行采样的比例。比如,通过对输入视频帧以1/2的采样比例进行下采样处理得到待编码帧,通过对可作为参考帧的输入视频帧以1/4的采样比例进行下采样处理得到待编码参考帧,那么根据待编码参考帧的编码数据重建后得到的当前参考帧对应的下采样比例也是1/4,那么,根据两者下采样比例之间的倍数关系,可确定对当前参考帧以2的采样比例进行上采样处理得到与待编码帧分辨率相同的帧。
在一个实施例中,对当前参考帧进行采样的采样方法与对输入视频帧进行下采样得到待编码帧的采样算法匹配,即如果需要对当前参考帧进行下采样,则下采样算法与对待编码视频帧进行下采样得到待编码帧的下采样算法相同。如果需要对当前参考帧进行上采样,则上采样算法与对输入视频帧进行下采样得到当前待编码帧的下采样算法匹配的相反的采样算法。
本实施例中,对当前参考帧进行采样的采样算法与对待编码视频帧进行下采样得到当前编码视频帧的采样算法匹配,可进一步提高当前参考帧与当前编码视频帧的图像匹配度,进一步提高帧间预测的准确度,减小预测残差,提高编码图像的质量。
步骤S904,根据目标参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据。
具体地,得到目标参考帧后,从目标参考帧中搜索得到与编码块相似的图像块作为目标参考块,计算编码块与目标参考块的像素差值得到预测残差。根据编码块与对应的目标参考块的位移得到第一运动矢量。根据第一运动矢量以及预测残差得到编码数据。
在一个实施例中,计算机设备可以根据目标运动矢量单位分辨率信息对第一运动矢量进行变换,得到在目标分辨率下的目标运动矢量,根据目标运动矢量以及预测残差生成编码数据。其中,根据目标运动矢量单位分辨率信息对第一运动矢量进行变换,得到目标运动矢量的方法在后面描述。
在一个实施例中,计算机设备也可以计算目标运动矢量和对应的预测运动矢量之间的矢量差值,对矢量差值进行编码,得到编码数据,进一步减少编码数据量。计算矢量差值的步骤可以包括:获取当前编码块对应的初始预测运动矢量;根据初始预测运动矢量对应的当前运动矢量单位分辨率信息和目标运动矢量单位分辨率信息,得到第二矢量变换系数;根据初始预测运动矢量和第二矢量变换系数得到当前编码块对应的目标预测运动矢量;根据目标运动矢量和目标预测运动矢量得到运动矢量差。其中,目标预测运动矢量是在目标分辨率下的运动矢量,计算矢量差值的方法在后面描述。
在一个实施例中,步骤S902根据待编码帧的分辨率信息对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待编码帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧。
其中,运动估计像素精度是待编码帧中的编码块对应的运动矢量的单位长度。计算机设备在对待编码帧中的编码块进行编码时,可按照获取的运动估计像素精度将编码块对应的运动矢量的单位长度进行细化,这样得到的运动向量更为精细和准确,因而,需要按照获取的运动估计像素精度对当前参考帧进行采样处理得到目标参考帧,再依据目标参考帧计算待编码帧中各编码块对应的第一运动向量,基于该第一运动向量进行编码得到待编码帧对应的编码数据。
具体地,计算机设备可获取当前参考帧的分辨率信息,根据待编码 帧采用的分像素插值方式,以及待编码帧的分辨率信息、当前参考帧的分辨率信息以及待编码帧对应的运动估计像素精度确定对当前参考帧进行何种采样处理方法、采样处理对应的采样比例以及像素插值精度。运动估计像素精度的大小可以根据需要设置,例如一般为1/2像素精度、1/4像素精度或1/8像素精度。
在一个实施例中,计算机设备可根据待编码帧的图像特征信息为该待编码帧配置相应的运动估计像素精度。图像特征信息比如可以是该待编码帧的大小、纹理信息、运动速度等。可综合多种图像特征信息确定待编码帧对应的运动估计像素精度。待编码帧所携带的图像数据越复杂,图像信息越丰富,相应的运动估计像素精度越高。比如,在对P帧进行帧间预测时,可采用较高的运动估计像素精度计算P帧中各编码块对应的运动矢量,而在对B帧进行帧间预测时,可采用较低的运动估计像素精度计算B帧中各编码块对应的运动矢量。
在一个实施例中,根据待编码帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待编码帧的分辨率信息以及运动估计像素精度计算得到像素插值精度;根据像素插值精度直接对当前参考帧进行分像素插值处理,得到对应的目标参考帧。
具体地,像素插值精度是对当前参考帧进行分像素插值对应的像素精度。当分像素插值方式为直接分像素插值方式,表示可对当前参考帧直接进行分像素插值处理得到目标参考帧。因此可以根据待编码帧的分辨率信息以及运动估计像素精度计算得到像素插值精度。可以计算当前参考帧的分辨率信息与待编码帧的分辨率信息的比例,根据该比例以及运动估计像素精度得到像素插值精度。
在一个实施例中,当当前参考帧的分辨率大于待编码帧的分辨率时,当前参考帧中部分分像素点的数据可直接复用,可作为与运动估计像素 精度相应的分像素点对应的数据。比如,待编码帧的分辨率为M*N,当前参考帧的分辨率为2M*2N,若运动估计像素精度为1/2,像素插值精度为1,那么当前参考帧可直接作为目标参考帧;若运动估计像素精度为1/4,那么计算得到像素插值精度为1/2,可对当前参考帧以1/2像素插值精度进行分像素插值处理得到目标参考帧。
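直接分像素插值方式下像素插值精度的计算可示意如下(按当前参考帧与待编码帧的分辨率比例放大运动估计像素精度,属于对文中例子的假设性最小实现):

```python
def pixel_interp_precision(frame_width, ref_width, me_precision):
    """计算直接分像素插值时所需的像素插值精度。

    结果大于等于 1 时表示当前参考帧中已有相应精度的像素数据可直接复用,
    即当前参考帧可直接作为目标参考帧。
    """
    return me_precision * (ref_width / frame_width)
```

沿用文中例子:待编码帧宽为M、参考帧宽为2M时,运动估计像素精度为1/2则结果为1,当前参考帧可直接作为目标参考帧;精度为1/4则结果为1/2,需按1/2像素插值精度进行分像素插值处理。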
在一个实施例中,当待编码帧的分辨率信息所表示的分辨率与当前参考帧的分辨率相同时,则根据运动估计像素精度直接对当前参考帧进行分像素插值处理,得到对应的目标参考帧。
具体地,通过全分辨率处理方式对输入视频帧进行处理得到待编码帧,且当前参考帧的分辨率也是原分辨率时,则待编码帧的分辨率和当前参考帧的分辨率相同。或者,通过下采样方式对输入视频帧进行处理得到待编码帧,且当前参考帧也是采用相同采样比例的下采样方式编码得到的编码数据重建得到的,则待编码帧的分辨率和当前参考帧的分辨率相同。那么,就可以基于运动估计像素精度直接对当前参考帧进行分像素插值处理得到目标参考帧,并且,分像素插值处理对应的像素插值精度和运动估计像素精度相同。
在一个实施例中,根据待编码帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待编码帧的分辨率信息对当前参考帧进行采样处理,得到中间参考帧;根据运动估计像素精度对中间参考帧进行分像素插值处理,得到目标参考帧。
具体地,当待编码帧对应的分像素插值方式为采样后分像素插值方式,表示要对当前参考帧先进行采样处理,得到与待编码帧分辨率相同的中间参考帧,再对中间参考帧进行分像素插值处理得到对应的目标参考帧。
当待编码帧的分辨率信息所表示的分辨率小于当前参考帧的分辨率时,则根据待编码帧的分辨率信息对当前参考帧进行下采样处理,得 到中间参考帧,然后基于待编码帧对应的运动估计像素精度对中间参考帧进行分像素插值处理,得到目标参考帧。举例说明如下:通过对分辨率为2M*2N的输入视频帧按照下采样处理方式进行下采样处理得到分辨率为M*N的待编码帧,而当前参考帧的分辨率为2M*2N(全分辨率处理方式),则对当前参考帧按照1/2的采样比例进行下采样处理得到分辨率为M*N的中间参考帧,若获取的待编码帧对应的运动估计像素精度为1/2,再对中间参考帧按照与运动估计像素精度相同的像素插值精度,即1/2分像素插值精度进行分像素插值处理,得到目标参考帧;若获取的待编码帧对应的运动估计像素精度为1/4,则对中间参考帧按照1/4分像素插值精度进行分像素插值处理,得到目标参考帧。
当待编码帧的分辨率信息所表示的分辨率大于当前参考帧的分辨率时,则计算机设备根据待编码帧的分辨率信息对当前参考帧进行上采样处理,得到中间参考帧,然后基于待编码帧对应的运动估计像素精度对中间参考帧进行分像素插值处理,得到目标参考帧。比如,待编码帧的分辨率为2M*2N,当前参考帧的分辨率为1/2M*1/2N,则需要按照采样比例为4对当前参考帧进行上采样处理得到与待编码帧分辨率相同的中间参考帧,若运动估计像素精度为1/2,则继续对得到的中间参考帧按照1/2像素插值精度进行分像素插值处理,得到目标参考帧;若运动估计像素精度为1/4,则继续对得到的中间参考帧按照1/4像素插值精度进行分像素插值处理,得到目标参考帧。
如图10A所示,步骤S804在待编码帧的分辨率下,根据当前参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据包括:
步骤S1002,根据待编码帧的分辨率信息和第一分辨率信息确定第一矢量变换参数,第一分辨率信息包括当前参考帧的分辨率信息或者输入视频帧对应的目标运动矢量单位分辨率信息。
具体地,第一矢量变换参数用于对得到的运动矢量的位置信息或者运动矢量进行变换。分辨率信息是与分辨率相关的信息,例如可以是分辨率本身或者下采样比例等。第一矢量变换参数可以是待编码帧的分辨率信息和第一分辨率信息之间的比例。例如,假设当前参考帧的下采样比例为1/3,待编码帧的下采样比例为1/6,则第一矢量变换参数可以为1/3除以1/6等于2。
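该比例的计算可示意如下(以下采样比例作为分辨率信息时的假设性草稿):

```python
def first_vector_transform_param(frame_downsample, ref_downsample):
    """第一矢量变换参数:当前参考帧的下采样比例除以待编码帧的下采样比例。"""
    return ref_downsample / frame_downsample
```

沿用文中例子:当前参考帧下采样比例为1/3、待编码帧下采样比例为1/6时,第一矢量变换参数为2。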
步骤S1004,根据第一矢量变换参数得到待编码帧中各个编码块对应的目标运动矢量。
具体地,得到第一矢量变换参数后,根据第一矢量变换参数对得到的运动矢量或者运动矢量对应的位置信息进行变换,得到目标运动矢量。当利用第一矢量变换参数对运动矢量进行变换时,使得目标运动矢量是在目标运动矢量单位分辨率信息所表示的目标分辨率下的运动矢量,目标运动矢量单位分辨率信息是与目标运动矢量的单位所对应的目标分辨率对应的信息,例如可以是目标分辨率本身或者下采样比例。当利用第一矢量变换参数对运动矢量对应的位置信息进行变换时,使得待编码帧对应的位置信息与当前参考帧的位置信息处于同一量化尺度下,根据变换后的位置信息得到第二运动矢量,将第二运动矢量变换为目标分辨率下的目标运动矢量。
在一个实施例中,步骤S1002根据待编码帧的分辨率信息和第一分辨率信息确定第一矢量变换参数包括:根据待编码帧的分辨率信息和当前参考帧的分辨率信息确定第一矢量变换参数。步骤S1004根据第一矢量变换参数得到待编码帧中各个编码块对应的运动矢量包括:获取当前编码块对应的第一位置信息,获取当前编码块对应的目标参考块对应的第二位置信息;根据第一矢量变换参数、第一位置信息和第二位置信息计算得到当前编码块对应的目标运动矢量。
具体地,当前编码块是输入视频帧中当前需要进行预测编码的编码块。目标参考块是参考帧中用于对当前编码块进行预测编码的图像块。当前编码块对应的第一位置信息可以用像素的坐标表示。当前编码块对应的第一位置信息可以包括当前编码块的全部像素对应的坐标,当前编码块对应的第一位置信息也可以包括当前编码块的一个或多个像素的坐标。目标参考块对应的第二位置信息可以包括目标参考块的全部像素对应的坐标,目标参考块对应的第二位置信息也可以包括目标参考块的一个或多个像素的坐标。例如,可以以当前编码块的第一个像素点的坐标作为当前编码块的坐标值,以目标参考块的第一个像素点的坐标作为目标参考块的坐标值。
在一个实施例中,可以利用第一矢量变换参数对第一位置信息进行变换,得到对应的第一变换位置信息,根据第一变换位置信息与第二位置信息的差值得到目标运动矢量。或者可以利用第一矢量变换参数对第二位置信息进行变换,得到对应的第二变换位置信息,根据第一位置信息与第二变换位置信息的差值得到目标运动矢量。
在一个实施例中,第一矢量变换参数是待编码帧的分辨率与当前参考帧的分辨率信息中,大分辨率信息除以小分辨率信息得到的比例,其中,大分辨率信息对应的分辨率比小分辨率对应的分辨率大。第一矢量变换参数用于对待编码帧与当前参考帧中小分辨率信息的帧的位置信息进行变换。例如,待编码帧的分辨率为1200*1200像素,当前参考帧的分辨率为600*600像素,则大分辨率为1200*1200像素,小分辨率为600*600像素。第一矢量变换参数可以为2。假设第一位置信息为(6,8),第二位置信息为(3,3)。则目标运动矢量为(6,8)-(3*2,3*2)=(0,2)。本申请实施例中,通过对小分辨率信息的帧对应的位置信息进行变换,可以降低目标运动矢量的值,减少编码数据的数据量。
在一个实施例中,第一矢量变换参数是待编码帧的分辨率与当前参考帧的分辨率信息中,小分辨率信息除以大分辨率信息得到的比例。第一矢量变换参数用于对待编码帧与当前参考帧中,大分辨率信息的帧的 位置信息进行变换。例如,待编码帧的分辨率为1200*1200像素,当前参考帧的分辨率为600*600像素,第一矢量变换参数可以为1/2。假设第一位置信息为(6,8),第二位置信息为(3,3)。则目标运动矢量为(6*1/2,8*1/2)-(3,3)=(0,1)。
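上述两种位置信息变换方式可按文中数值例子示意如下(函数名为自拟的假设性草稿):

```python
def mv_by_scaling_small_res(first_pos, second_pos, param):
    """对小分辨率信息的帧(此处为当前参考帧)的位置信息放大后求目标运动矢量。"""
    return (first_pos[0] - second_pos[0] * param,
            first_pos[1] - second_pos[1] * param)


def mv_by_scaling_large_res(first_pos, second_pos, param):
    """对大分辨率信息的帧(此处为待编码帧)的位置信息缩小后求目标运动矢量。"""
    return (first_pos[0] * param - second_pos[0],
            first_pos[1] * param - second_pos[1])
```

两种方式分别对应文中(6,8)-(3*2,3*2)=(0,2)与(6*1/2,8*1/2)-(3,3)=(0,1)两个例子。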
本申请实施例中,通过第一矢量变换参数对位置信息进行变换,使得待编码帧对应的位置信息与当前参考帧的位置信息处于同一量化尺度下,可以降低目标运动矢量的值,减少编码数据的数据量。例如,如图10B所示,当前参考帧的分辨率为待编码帧的分辨率的2倍,当前编码块由像素(1,1)、(1,2)、(2,1)以及(2,2)组成,对应的目标参考块由像素(4,2)、(4,3)、(5,2)以及(5,3)组成,如果不进行变换,则目标运动矢量为(-3,-1),而如果在计算目标运动矢量时,将待编码帧中对应的位置信息乘以2,再计算目标运动矢量,则目标运动矢量为(-2,0),比(-3,-1)小。
在一个实施例中,步骤S1002即根据待编码帧的分辨率信息和第一分辨率信息确定第一矢量变换参数包括:获取目标运动矢量单位分辨率信息;根据待编码帧的分辨率信息和目标运动矢量单位分辨率信息确定第一矢量变换参数。步骤S1004即根据第一矢量变换参数得到待编码帧中各个编码块对应的目标运动矢量包括:根据当前编码块与对应的目标参考块的位移得到第一运动矢量;根据第一矢量变换参数以及第一运动矢量得到当前编码块对应的目标运动矢量。
具体地,目标运动矢量单位分辨率信息是指与目标运动矢量的单位对应的目标分辨率对应的信息,例如可以是目标分辨率或者对应的下采样比例。目标运动矢量是以该分辨率下的矢量单位为标准计算的。由于输入视频序列对应的各个待编码帧可能有一些分辨率与输入视频帧的原始分辨率相同,而另一些待编码帧的分辨率比输入视频帧的原始分辨率小,即视频序列中待编码帧的分辨率有多种,因此需要确定目标运动 矢量的单位对应的分辨率。目标运动矢量的单位对应的分辨率可以是在编码前已经设定或者根据编码过程的参数得到,具体可以根据需要进行设置。
第一运动矢量是根据当前编码块与对应的目标参考块的位移得到的。目标参考块可以是从当前参考帧中获取的,也可以从对当前参考帧进行处理后得到的目标参考帧中获取的。当得到第一运动矢量后,可以将第一矢量变换参数以及第一运动矢量相乘,将得到的乘积作为目标运动矢量。比如,假设目标运动矢量单位对应的分辨率是原始分辨率,而待编码帧对应的下采样比例为1/2。由于目标运动矢量单位是原始分辨率,而第一运动矢量是在待编码帧的分辨率下计算得到的,因此需要对第一运动矢量进行变换,第一矢量变换参数等于2,当得到的第一运动矢量为(2,2),则目标运动矢量为(4,4)。得到目标运动矢量后,可以根据目标运动矢量进行编码,例如可以对目标运动矢量以及当前编码块对应的预测残差进行编码,得到编码数据。
在一个实施例中,当目标参考块是从当前参考帧中获取的,可以理解,对于同一编码块,第一运动矢量可等于第二运动矢量。
在一个实施例中,目标运动矢量的单位对应的分辨率可以是输入视频帧对应的分辨率,即原始分辨率,或者目标运动矢量的单位对应的分辨率可以是待编码帧对应的分辨率。第一矢量变换参数可以是目标运动矢量单位对应的分辨率信息与待编码帧的分辨率信息的比例。例如,假设目标运动矢量单位对应的分辨率是原始分辨率,目标运动矢量单位对应的采样比例为1,待编码帧的分辨率的采样比例为1/2,则第一矢量变换参数可以为1除以1/2等于2。或者,假设目标运动矢量单位对应的分辨率是原始分辨率,为900*900像素,待编码帧的分辨率为450*600像素,第一矢量变换参数可以包括两个,水平方向的第一矢量变换参数以及垂直方向的第一矢量变换参数。则水平方向的第一矢量变换参数为900/450=2,垂直方向的第一矢量变换参数为900/600=1.5。
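水平、垂直方向分别计算第一矢量变换参数的情形可示意如下(假设性草稿):

```python
def axis_transform_params(target_unit_res, frame_res):
    """按宽、高两个方向分别计算第一矢量变换参数。"""
    return (target_unit_res[0] / frame_res[0],
            target_unit_res[1] / frame_res[1])
```

沿用文中例子:目标运动矢量单位对应900*900像素、待编码帧为450*600像素时,得到水平方向参数2、垂直方向参数1.5。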
在一个实施例中,可以根据进行编码的设备的计算能力得到目标运动矢量单位分辨率信息,例如,当进行编码的设备只能对整数进行运算或者当数值为小数时运算耗时长,则目标运动矢量单位对应的分辨率可以为输入视频帧对应的原始分辨率,当进行编码的设备能够快速进行小数的运算,目标运动矢量单位对应的分辨率可以为待编码帧对应的分辨率。
在一个实施例中,当待编码帧的分辨率信息和目标运动矢量单位分辨率信息一致时,第一矢量变换参数为1,第一运动矢量与目标运动矢量相同,因此,可以跳过步骤S1002,将第一运动矢量作为目标运动矢量。当待编码帧的分辨率信息和目标运动矢量单位分辨率信息不一致时,则执行步骤S1002。
本申请实施例中,当目标运动矢量的单位对应的分辨率为输入视频帧对应的分辨率,即原始分辨率时,对于分辨率统一的视频序列,各个输入视频帧对应的目标分辨率是一致的,可以保持目标运动矢量的统一性。当目标运动矢量的单位对应的分辨率为待编码视频帧对应的分辨率时,由于待编码帧的分辨率信息和目标运动矢量单位分辨率信息一致,因此不需要对第一运动矢量进行变换,可以减少计算时间。
在一个实施例中,计算机设备可以将表示目标运动矢量单位分辨率信息的标识信息添加至编码数据中,使解码端可以获取得到目标运动矢量对应的目标分辨率。如果不携带标识信息,则编码端与解码端可以约定目标运动矢量对应的目标分辨率。该标识信息用于表示目标运动矢量所对应的分辨率信息。标识信息在编码数据中的添加位置可以是组级头信息、序列级头信息、帧级头信息以及块级头信息中的一个或多个,其中块级头信息是指编码块对应的编码数据的头信息。标识信息在编码数据中的添加位置可以根据目标运动矢量单位分辨率信息的作用范围确定。例如,若视频序列中矢量单位对应的分辨率一致,则添加位置可以是序列级头信息。具体的标志位的值所代表的分辨率信息可以根据需要设置。例如,当目标运动矢量单位分辨率信息对应的分辨率为原始分辨率时,该标识信息对应的标志位MV_Scale_Adaptive为0,当目标运动矢量单位分辨率信息对应的分辨率为待编码帧对应的分辨率时,对应的标志位MV_Scale_Adaptive为1。
在一个实施例中,如图11所示,步骤S804即根据当前参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据包括:
步骤S1102,获取当前编码块对应的初始预测运动矢量。
具体地,为了降低用于编码数据的比特数,计算机设备可以对当前编码块的运动矢量进行预测,得到预测值,计算目标运动矢量与预测值的差值,得到运动矢量差值,对运动矢量差值进行编码。初始预测运动矢量用于对当前编码块的运动矢量进行预测。初始预测运动矢量的数量可以为一个或多个,具体可以根据需要进行设置。初始预测运动矢量的获取规则可以根据需要进行设置。由于当前编码块与其相邻编码块往往具有空间相关性,因此可以将当前编码块对应的一个或多个相邻已编码块对应的目标运动矢量值作为初始预测运动矢量。例如,可以将当前编码块右上角以及左上角的相邻已编码块对应的第一运动矢量值作为初始预测运动矢量。或者,可以将当前编码块对应的目标参考块所对应的目标参考块的运动矢量值作为初始预测运动矢量。
步骤S1104,根据初始预测运动矢量对应的当前运动矢量单位分辨率信息和目标运动矢量单位分辨率信息,得到第二矢量变换系数。
具体地,当前运动矢量单位分辨率信息是指与初始预测运动矢量的单位对应的当前分辨率的信息,例如可以是当前分辨率或者下采样比例。初始预测运动矢量的单位对应的分辨率是指该初始预测运动矢量的单位是以当前分辨率下的矢量单位为标准计算的,即是当前分辨率下的运动矢量。当初始预测运动矢量对应的当前运动矢量单位分辨率信息和目标运动矢量单位分辨率信息不同时,则根据初始预测运动矢量对应的当前运动矢量单位分辨率信息和目标运动矢量单位分辨率信息得到第二矢量变换系数。第二矢量变换系数用于将初始预测运动矢量变换为目标分辨率下的运动矢量。第二矢量变换系数可以是目标运动矢量单位对应的分辨率信息与当前运动矢量单位分辨率信息的比例。例如,假设目标运动矢量单位对应的分辨率是200*200像素,当前运动矢量单位分辨率信息是100*100像素,则第二矢量变换系数为2。
步骤S1106,根据初始预测运动矢量和第二矢量变换系数得到当前编码块对应的目标预测运动矢量。
具体地,计算机设备得到第二矢量变换系数后,根据初始预测运动矢量与第二矢量变换系数进行运算,得到目标预测运动矢量,目标预测运动矢量是目标分辨率下的预测运动矢量。例如,当初始预测运动矢量为一个时,可以将初始预测运动矢量与第二矢量变换系数的乘积作为目标预测运动矢量。当初始预测运动矢量为多个时,可以对初始预测运动矢量进行取最小值、取平均值或取中位数值等计算,得到计算结果,根据计算结果与第二矢量变换系数得到目标预测运动矢量。计算结果可以是初始预测运动矢量中的最小值、平均值以及中位数值中的一种或多种。可以理解,根据初始预测运动矢量和第二矢量变换系数得到目标预测运动矢量的算法可以自定义,在解码端可以利用一致的自定义的算法计算得到相同的目标预测运动矢量。
步骤S1108,根据目标运动矢量和目标预测运动矢量得到运动矢量差。
具体地,将目标运动矢量与目标预测运动矢量的差值作为运动矢量差,以根据运动矢量差进行编码,得到编码数据,减少编码数据的数据量。
本申请实施例中,通过对初始预测运动矢量进行变换,得到在目标分辨率下的目标预测运动矢量,使目标预测运动矢量以及目标运动矢量的单位是在匹配的量化尺度下的,因此得到的运动矢量差值小,减少了编码数据的数据量。
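步骤S1102至S1108的运动矢量差计算可整体示意如下(以下采样比例表示分辨率信息,属于假设性最小实现):

```python
def motion_vector_difference(target_mv, init_pred_mv,
                             cur_unit_downsample, target_unit_downsample):
    """先把初始预测运动矢量换算到目标分辨率,再与目标运动矢量作差。"""
    # 第二矢量变换系数:目标运动矢量单位分辨率信息与当前运动矢量单位分辨率信息的比例
    coef = target_unit_downsample / cur_unit_downsample
    target_pred = (init_pred_mv[0] * coef, init_pred_mv[1] * coef)
    return (target_mv[0] - target_pred[0], target_mv[1] - target_pred[1])
```

例如目标运动矢量单位为原始分辨率(下采样比例1)、初始预测运动矢量在1/4下采样分辨率下时,系数为4,对应后文例子中4MV1-4MV2的形式。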
在一个实施例中,步骤S702即获取输入视频帧对应的处理方式包括:计算目标预测类型编码块在输入视频帧对应的前向编码视频帧中的比例;根据该比例确定输入视频帧对应的处理方式。
具体地,预测类型编码块是指与帧预测类型对应的编码块。目标预测类型的比例可以是帧内编码块对应的比例以及帧间编码块对应的比例中的一种或两种。目标预测类型编码块在输入视频帧对应的前向编码视频帧中的比例可以是该目标预测类型编码块与其他预测类型编码块的比例,也可以是该目标预测类型编码块与总编码块数量的比例。具体可以根据需要进行设置。例如计算机设备可以获取前向编码视频帧中帧内编码块的第一数量,以及前向编码视频帧中帧间编码块的第二数量。根据第一数量和第二数量计算得到帧内编码块与帧间编码块的比例,或者统计前向编码视频帧的全部编码块的第三数量,根据第一数量和第三数量计算得到帧内编码块与第三数量的比例。还可根据第二数量和第三数量计算得到帧间编码块与第三数量的比例。
前向编码视频帧是指对输入视频帧进行编码之前已经编码的视频帧,获取的前向编码视频帧的具体数量可自定义,例如,前向编码视频帧可以是输入视频帧的前一个已编码的编码视频帧,前向编码视频帧也可以是输入视频帧的前3个已编码的编码视频帧。在计算得到目标预测类型编码块对应的在前向编码视频帧中的比例后,根据计算得到的比例确定输入视频帧对应的处理方式。若获取到的前向编码视频帧的数量为多个时,可以计算得到不同类型编码块对应的在每一个前向编码视频帧中的比例,根据各个比例进行加权计算得到总比例,再根据总比例和预设阈值确定输入视频帧对应的目标处理方式。其中,前向编码视频帧对应的权重可以与前向编码视频帧与输入视频帧的编码距离成负相关关系。
在一个实施例中,可以计算前向编码视频帧中帧内编码块在前向编码视频帧中的比例,当该比例大于目标阈值时,确定处理方式为下采样处理方式。
对于该帧内编码块对应的比例,可以是当该比例大于目标阈值时,确定输入视频帧对应的目标处理方式为下采样处理方式,否则确定输入视频帧对应的目标处理方式为全分辨率处理方式。
本申请实施例中,如果帧内编码块的比例大,则说明视频会相对比较复杂或者视频帧之间的相关度比较低,因此得到的预测残差比较大,因此更倾向于采用下采样处理方式进行编码,以减少编码数据量。
其中目标阈值可根据输入视频帧对应的参考帧的处理方式进行确定。当输入视频帧对应的参考帧的处理方式为下采样方式时,获取第一预设阈值T1,将第一预设阈值T1作为目标阈值。同样地,当输入视频帧对应的参考帧的处理方式为全分辨率处理方式时,获取第二预设阈值T2,将第二预设阈值T2作为目标阈值。进一步地,在根据输入视频帧对应的参考帧的分辨率信息获取到目标阈值后,根据目标阈值和前向编码视频帧中帧内编码块在前向编码视频帧中的比例确定输入视频帧的处理方式。其中,当前向编码视频帧中帧内编码块在前向编码视频帧中的比例大于目标阈值时,确定输入视频帧对应的处理方式为下采样处理方式。
在一个实施例中,第二预设阈值大于第一预设阈值,这样,当当前参考帧对应的处理方式为全分辨率处理方式时,输入视频帧更倾向于采用全分辨率处理方式,当当前参考帧为下采样处理方式时,输入视频帧更倾向于采用下采样处理方式。
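上述根据帧内编码块比例与目标阈值决策处理方式的逻辑可示意如下(T1、T2的具体取值为假设值,仅用于说明T2大于T1时的倾向性):

```python
def decide_processing_mode(intra_ratio, ref_mode, t1=0.4, t2=0.6):
    """根据前向编码视频帧中帧内编码块的比例决定输入视频帧的处理方式。

    参考帧为下采样处理方式时用较小的阈值 T1,为全分辨率处理方式时用较大的
    阈值 T2,使输入视频帧倾向于沿用参考帧的处理方式;
    返回 "downsample"(下采样处理方式)或 "full"(全分辨率处理方式)。
    """
    threshold = t1 if ref_mode == "downsample" else t2
    return "downsample" if intra_ratio > threshold else "full"
```

例如同样的帧内编码块比例0.5,参考帧为下采样方式时决策为下采样,参考帧为全分辨率方式时决策为全分辨率。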
以下假设视频序列A包括三个输入视频帧:a、b以及c,对视频编码方法进行说明。
1、获取视频序列A对应的目标视频序列编码模式,由于当前环境是视频通话环境,目标视频序列编码模式为混合分辨率编码模式。
2、利用混合分辨率编码框架中的处理决策单元对第一个输入视频帧a进行决策,得到处理方式为下采样方式,下采样比例为1/2;对a进行下采样处理,得到下采样后的视频帧a1;对a1进行帧内编码,得到a1对应的编码数据d1,并将a1对应的编码数据d1进行重建,得到对应的重建视频帧a2。
3、利用混合分辨率编码框架中的处理决策单元对第二个输入视频帧b进行决策,得到处理方式为下采样方式,采样比例为1/4。对b进行下采样,得到下采样后的视频帧b1,对b1进行编码,得到b对应的编码数据d2,并在编码数据中携带下采样比例对应的采样比例信息以及处理方式对应的处理方式信息。
其中编码过程包括:由于b为帧间预测帧,因此需要将a2作为当前参考帧,由于b1与a2的分辨率不同,故需要对a2进行采样处理。确定a2的采样方式为直接分像素插值,运动估计精度为1/4,故像素插值精度为1/4*2=1/2,根据像素插值精度对a2进行1/2分像素插值,得到目标参考帧a3。计算b1中的当前编码块与目标参考帧中的目标参考块的第一运动矢量MV1,预测残差为p1。并获取得到目标分辨率为原始分辨率,因此,目标运动矢量为4MV1。计算得到初始预测矢量为MV2,初始预测矢量是在1/4下采样比例对应的分辨率下计算得到,因此,目标预测矢量为4MV2,故当前编码块对应的运动矢量差MVD1等于4MV1-4MV2。对MVD1以及p1进行变换、量化以及熵编码,得到编码数据d2。
4、利用混合分辨率编码框架中的处理决策单元对第三个输入视频 帧c进行决策,得到处理方式为下采样方式,采样比例为1/8。对c进行下采样,得到下采样后的视频帧c1,对c1进行编码,得到c对应的编码数据d3。
其中编码过程包括:由于c为帧间预测帧,对应的当前参考帧为对b的编码数据重建得到的重建视频帧b2,由于c1与b2的分辨率不同,故需要对b2进行采样处理。确定b2的采样方式为直接分像素插值,运动估计精度为1/4,故像素插值精度为1/4*2=1/2,根据像素插值精度对b2进行1/2分像素插值,得到目标参考帧b3。计算c1中的当前编码块与目标参考帧中的目标参考块的第一运动矢量MV3,预测残差为p2。并获取得到目标分辨率为原始分辨率,因此,目标运动矢量为8MV3。获取初始预测矢量为MV4,初始预测矢量是在1/4下采样比例对应的分辨率下计算得到,因此,目标预测矢量为4MV4,故当前编码块对应的运动矢量差MVD2等于8MV3-4MV4。对MVD2以及p2进行变换、量化以及熵编码,得到编码数据d3。
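上述第3步、第4步中运动矢量的换算过程可合并示意如下(目标运动矢量单位为原始分辨率,矢量的具体数值为假设值):

```python
def encode_mv(first_mv, frame_downsample, pred_mv, pred_downsample):
    """把帧分辨率下的第一运动矢量与初始预测矢量都换算到原始分辨率后求运动矢量差。"""
    target_mv = (first_mv[0] / frame_downsample, first_mv[1] / frame_downsample)
    target_pred = (pred_mv[0] / pred_downsample, pred_mv[1] / pred_downsample)
    mvd = (target_mv[0] - target_pred[0], target_mv[1] - target_pred[1])
    return target_mv, mvd
```

例如帧c的下采样比例为1/8、初始预测矢量在1/4下采样分辨率下时,即得到8MV3-4MV4形式的运动矢量差。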
5、将d1、d2以及d3组成编码数据包,作为视频序列对应的编码数据,发送到接收终端,其中,视频序列对应的编码数据中携带了描述目标视频序列编码模式为混合分辨率编码模式的标志位。
如图12所示,在一个实施例中,提出了一种视频解码方法,本实施例主要以该方法应用于上述图1中的终端110或服务器120来举例说明。具体可以包括以下步骤:
步骤S1202,获取待解码视频序列对应的已编码数据。
具体地,待解码视频序列是需要进行解码的视频序列。一个待解码视频序列可以包括多个待解码视频帧。待解码视频序列可以是实时获取的视频序列,也可以是预先存储的待解码视频序列。可以理解,在编码端,编码得到的是输入视频序列对应的编码数据,当编码数据传输到解码端时,解码端接收的编码数据为待解码视频序列对应的已编码数据。
步骤S1204,获取待解码视频序列对应的目标视频序列解码模式,该目标视频序列解码模式包括恒定分辨率解码模式或者混合分辨率解码模式。
具体地,计算机设备可以从编码数据中解析得到目标视频序列编码模式信息,根据目标视频序列编码模式信息得到目标视频序列解码模式。例如,当目标视频序列编码模式信息对应的目标视频序列编码模式为恒定分辨率编码模式时,对应的目标视频序列解码模式为恒定分辨率解码模式。在恒定分辨率解码模式中,视频序列的各个待解码视频帧的分辨率是一致的。当目标视频序列编码模式信息对应的目标视频序列编码模式为混合分辨率编码模式时,对应的目标视频序列解码模式为混合分辨率解码模式,即待解码视频序列对应的待解码视频帧存在分辨率不同的情况。
在一个实施例中,视频解码框架如图13所示。视频解码框架包括恒定分辨率解码框架以及混合分辨率解码框架。混合分辨率解码框架可以与图3中的解码框架对应。当得到待解码视频序列后,在视频序列解码模式获取模块处对视频序列解码模式进行决策,当目标视频序列解码模式为混合分辨率解码模式,则采用混合分辨率解码框架进行解码,当目标视频序列解码模式为恒定分辨率解码模式时,利用图13的恒定分辨率解码框架进行恒定分辨率解码。其中恒定分辨率解码框架可以是HEVC解码框架或者H.265解码框架等。
在一个实施例中,可从编码数据的头信息中确定待解码视频帧对应的解码框架。具体地,解码端可以从编码数据对应的序列级头信息中,获取当前编码数据对应的输入视频帧序列中每个输入视频帧在被编码时所采用的编码框架,从而确定与之匹配的待解码视频帧的解码框架。比如,当编码数据的序列级头信息中用于确定所采用编码框架的标识位Sequence_Mix_Flag为0时,表示输入视频帧序列中各个输入视频帧在 被编码时均采用恒定分辨率的编码框架,则解码端可采用恒定分辨率的解码框架对编码数据进行解码得到待解码视频帧对应的重建视频帧;在Sequence_Mix_Flag为1时,表示输入视频帧序列中各个输入视频帧在被编码时均采用混合分辨率的编码框架,解码端就可采用混合适应分辨率的解码框架对编码数据进行解码得到重建视频帧序列。
在一个实施例中,获取待解码视频序列对应的目标视频序列解码模式可以包括:获取当前环境信息,当前环境信息包括当前编码环境信息、当前解码环境信息中的至少一种信息;根据当前环境信息从候选视频序列解码模式中得到待解码视频序列对应的目标视频序列解码模式。
具体地,计算机设备也可以根据编码端计算视频序列编码模式的方法得到对应的目标视频序列解码模式,因此本申请实施例中根据当前环境信息确定目标视频序列解码模式与根据当前环境信息确定目标视频序列编码模式是一致的,在此不再赘述。
在一个实施例中,当前环境信息包括当前应用场景信息,当当前应用场景信息对应的当前应用场景为实时应用场景时,目标视频序列解码模式为混合分辨率解码模式。
步骤S1206,根据目标视频序列解码模式对待解码视频序列对应的已编码数据进行解码,得到对应的解码视频帧序列。
具体地,当目标视频序列解码模式为恒定分辨率解码模式时,对待解码视频序列的各个待解码视频帧进行恒定分辨率解码。当目标视频序列解码模式为混合分辨率解码模式时,根据待解码视频序列中待解码视频帧的分辨率信息进行解码,即待解码视频序列对应的待解码视频帧存在分辨率不同的情况,需要根据待解码视频帧的分辨率信息进行解码。
上述视频解码方法,在进行视频解码时,获取待解码视频序列对应的已编码数据,获取待解码视频序列对应的目标视频序列解码模式,该目标视频序列解码模式包括恒定分辨率解码模式或者混合分辨率解码 模式,根据目标视频序列解码模式对待解码视频序列对应的已编码数据进行解码,得到对应的解码视频帧序列。因此进行解码时,可以灵活地根据待解码视频序列对应的目标视频序列解码模式进行解码,能够得到准确的解码视频帧。
在一个实施例中,如图14所示,步骤S1206即根据目标视频序列解码模式对待解码视频序列对应的已编码数据进行解码,得到对应的解码视频帧序列包括:
步骤S1402,当目标视频序列解码模式为混合分辨率解码模式时,获取待解码视频帧对应的分辨率信息。
具体地,待解码视频帧是待解码视频序列中的视频帧。分辨率信息是与分辨率相关的信息,可以是分辨率本身,也可以是下采样比例。待解码视频帧对应的分辨率信息可以是编码数据中携带的,也可以是解码设备经过计算得到的。
在一个实施例中,编码数据中可以携带待解码视频帧对应的分辨率信息,例如可以携带待解码视频帧对应的分辨率或者下采样比例。
在一个实施例中,编码数据中可以携带处理方式信息。计算机设备从编码数据中获取处理方式信息,根据处理方式信息得到待解码视频帧对应的分辨率信息。例如,编码数据中可以携带处理方式信息对应的处理方式为下采样处理方式,编码标准以及解码标准中确定了下采样比例均为1/2或者编码数据中携带对应的下采样比例,则获取得到的分辨率信息为下采样比例为1/2。
步骤S1404,根据待解码视频帧对应的分辨率信息对编码数据进行解码,得到待解码视频帧对应的重建视频帧。
具体地,重建视频帧是解码重建得到的视频帧。可以理解,该重建视频帧对应的分辨率信息与编码过程中的待编码帧的分辨率信息是对应的。如果在编码的过程中图像信息不存在损失,则重建视频帧与待编 码帧是相同的。如果在编码的过程中图像信息存在损失,则重建视频帧与待编码帧的差异与损失值对应。对编码数据进行解码是根据待解码视频帧对应的分辨率信息进行的。解码可以包括预测、反变换、反量化以及熵解码中的至少一个,具体根据编码的过程确定。在解码时,计算机设备根据待解码视频帧的分辨率信息对当前参考帧、待解码视频帧的各个待解码块对应的位置信息、当前参考帧的各个参考块对应的位置信息以及运动矢量中的至少一个进行处理,其中的处理方法与编码端进行编码时的处理方法是匹配的。例如可以获取待解码视频帧对应的当前参考帧,根据待解码视频帧对应的分辨率信息对当前参考帧进行处理,得到目标参考帧,根据编码数据携带的运动矢量信息获取目标参考块,根据目标参考块得到待解码块对应的预测值,并根据编码数据中的预测残差与预测值得到重建视频帧。
在一个实施例中,当编码端对位置信息进行了变换时,则在解码过程中获取得到相应的位置信息时,需要对该位置信息进行相应的变换,以保持编码端与解码端得到的目标参考块的一致性。
在一个实施例中,当编码数据中携带的运动矢量信息是目标运动矢量时,可以根据目标运动矢量单位分辨率信息与待解码视频帧对应的分辨率信息将目标运动矢量进行变换,得到在待解码视频帧对应的分辨率信息下的第一运动矢量,根据第一运动矢量得到待解码块对应的目标参考块。
在一个实施例中,当编码数据中携带的运动矢量信息是运动矢量差值时,获取当前待解码块对应的初始预测运动矢量,对各待解码块对应的运动矢量差值和初始预测运动矢量在相同分辨率下进行处理,得到相应待解码块所对应的、且在待解码视频帧的分辨率下的第一运动矢量,根据第一运动矢量得到待解码块对应的目标参考块。
具体地,计算机设备将运动矢量差值和初始预测运动矢量都变换到 相同分辨率下对应的运动矢量。例如可以将初始预测运动矢量变换为目标分辨率下的目标预测运动矢量,根据目标预测运动矢量以及运动矢量差值得到目标运动矢量,再将目标运动矢量变换到待解码视频帧的分辨率下的第一运动矢量。也可以将初始预测运动矢量变换为待解码视频帧的分辨率下的预测运动矢量,将运动矢量差值变换到待解码视频帧的分辨率下的运动矢量差值,根据待解码视频帧的分辨率下的运动矢量差值以及待解码视频帧的分辨率下的预测运动矢量得到第一运动矢量。
步骤S1406,根据待解码视频帧对应的分辨率信息对重建视频帧进行处理,得到对应的解码视频帧。
具体地,对重建视频帧进行处理可以是采样处理,例如为上采样处理。对重建视频帧进行处理的方法与编码中对输入视频帧的处理方法可以是相对应的。例如,当输入视频帧的处理方式是下采样处理方式时,且分辨率信息是1/2下采样比例,则对重建视频帧进行上采样处理,上采样比例可以是2。
在一个实施例中,当解码端从编码数据的头信息中确定编码数据是通过下采样处理方式进行编码得到的,则解码端还可从头信息中获取所采用的下采样比例信息或下采样方法信息,并采用与下采样比例信息或下采样方法信息匹配的上采样比例、上采样方法对得到的重建视频帧进行上采样处理,得到解码视频帧。比如,下采样比例信息对应的采样比例为1/2,则解码端需要按照采样比例为2以及下采样方法信息匹配的上采样方法对重建视频帧进行上采样处理,得到解码视频帧。解码端可以从序列级头信息、组级头信息以及帧级头信息中的任一个获取到当前编码数据对应的下采样比例信息或下采样方法信息。
上述视频解码方法,获取待解码视频帧对应的编码数据,获取待解码视频帧对应的分辨率信息,根据待解码视频帧对应的分辨率信息对编码数据进行解码,得到待解码视频帧对应的重建视频帧,根据待解码视 频帧对应的分辨率信息对重建视频帧进行处理,得到对应的解码视频帧。因此进行解码时,可以灵活地根据待解码视频帧对应的分辨率信息进行解码,得到解码视频帧,且是在待解码视频帧的分辨率下进行解码的,能够得到准确的解码视频帧。
在一个实施例中,将待解码视频序列的待解码视频帧对应的重建视频帧都处理成相同的分辨率,例如将重建视频帧处理成与输入视频帧的原始分辨率相同的解码视频帧。
在一个实施例中,如图15所示,步骤S1404即根据待解码视频帧对应的分辨率信息对编码数据进行解码,得到待解码视频帧对应的重建视频帧包括:
步骤S1502,获取待解码视频帧对应的当前参考帧。
具体地,待解码视频帧对应的参考帧的个数可为一个或多个。例如当待解码视频帧为P帧,则对应的参考帧可以为1个。当待解码视频帧为B帧,则对应的参考帧可以为2个。待解码视频帧对应的参考帧可以是根据参考关系得到的,参考关系根据各个视频编解码标准可以不同。例如,对于一个视频图像组(Group Of Picture,GOP)中的第二个视频帧,为B帧,对应的参考帧可以是该视频组的I帧以及视频组的第4帧。或者待解码视频帧对应的当前参考帧可以是其前向的已解码帧中的前一个或者两个。可以理解,这里的当前参考帧与编码过程的当前参考帧是一致的。
在一个实施例中,获取待解码视频帧对应的当前参考帧包括:获取第二参考规则,第二参考规则包括待解码视频帧与当前参考帧的分辨率大小关系;根据第二参考规则获取待解码视频帧对应的当前参考帧。
具体地,第二参考规则确定了待解码视频帧与当前参考帧的分辨率大小的限制关系,可以理解,为了保证编码过程中获取得到的当前参考帧与解码过程中获取得到的参考帧的一致性,第一参考规则与第二参考 规则是一致的。第一参考规则、第二参考规则可以是在编解码标准中预先设置的。或者,在进行编码时,可以根据编码的应用场景、实时性要求等选择第一参考规则,并在编码数据中携带参考规则信息,解码器根据编码数据中的参考规则信息得到第二参考规则。分辨率大小关系包括待解码视频帧的分辨率与参考帧的分辨率相同以及不同的至少一种。当第二参考规则包括待解码视频帧与参考帧的分辨率相同时,第二参考规则还可以包括待解码视频帧与当前参考帧的分辨率的处理方式参考规则。例如处理方式参考规则可以包括:全分辨率处理方式的待解码视频帧可以参考全分辨率处理方式的当前参考帧,以及下采样处理方式的待解码视频帧可以参考下采样处理方式的当前参考帧的一种或两种。当第二参考规则包括待解码视频帧与参考帧的分辨率不相同时,第二参考规则还可以包括待解码视频帧的分辨率大于当前参考帧的分辨率,以及待解码视频帧的分辨率小于当前参考帧的分辨率的一种或两种。因此,第二参考规则可以包括原始分辨率待解码视频帧可以参考下采样分辨率参考帧、下采样分辨率待解码视频帧可以参考原始分辨率参考帧、原始分辨率待解码视频帧可以参考原始分辨率参考帧以及下采样分辨率待解码视频帧可以参考下采样分辨率的参考帧中的一种或多种。其中原始分辨率待解码视频帧是指该待解码视频帧的分辨率与对应的输入视频帧的分辨率相同,原始分辨率参考帧是指该参考帧的分辨率与其对应的输入视频帧的分辨率相同。下采样分辨率待解码视频帧是指该待解码视频帧对应的分辨率信息为下采样。下采样分辨率参考帧是指该参考帧对应的分辨率信息为下采样。得到第二参考规则后,根据第二参考规则获取待解码视频帧对应的当前参考帧,使得到的当前参考帧满足第二参考规则。
步骤S1504,根据待解码视频帧对应的分辨率信息以及当前参考帧对编码数据进行解码,得到待解码视频帧对应的重建视频帧。
具体地,计算机设备可以从当前参考帧中获取与待解码视频帧的待解码块对应的参考块,根据参考块对待解码块进行解码。也可以根据待解码视频帧的分辨率信息对当前参考帧进行处理,得到对应的目标参考帧,从目标参考帧中获取与待解码视频帧的待解码块对应的目标参考块,根据目标参考块对待解码块进行解码,得到待解码视频帧对应的重建视频帧。
在一个实施例中,步骤S1504即根据待解码视频帧对应的分辨率信息以及当前参考帧对编码数据进行解码,得到待解码视频帧对应的重建视频帧包括:根据待解码视频帧对应的分辨率信息对当前参考帧进行采样处理,得到对应的目标参考帧;根据目标参考帧对待解码视频帧进行解码,得到待解码视频帧对应的重建视频帧。
具体地,根据编码数据携带的运动矢量信息从目标参考帧中获取目标参考块,根据目标参考块得到待解码块对应的预测值,并根据编码数据中的预测残差与预测值得到重建视频帧。
在一个实施例中,根据待解码视频帧对应的分辨率信息对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待解码视频帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧。
在一个实施例中,根据待解码视频帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待解码视频帧的分辨率信息以及运动估计像素精度计算得到像素插值精度;根据像素插值精度直接对当前参考帧进行分像素插值处理,得到对应的目标参考帧。
在一个实施例中,根据待解码视频帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待解码视频帧的分辨率信息对当前参考帧进行采样处理,得到中间参考帧;根据进行运动估计的像素精度对中间参考帧进行分像素插值处理, 得到目标参考帧。
具体地,待解码视频帧与待编码视频帧的分辨率是一致的,得到的目标参考帧也是一致的,因此,根据待解码视频帧对应的分辨率信息对当前参考帧进行采样处理,得到对应的目标参考帧的方法与编码端中根据待编码帧的分辨率信息对当前参考帧进行采样处理,得到对应的目标参考帧是一致的,本申请实施例在此不再赘述。
在一个实施例中,解码端还可从编码数据的头信息中获取待解码视频帧对应的采样方式信息。具体可以从序列级头信息、组级头信息以及帧级头信息中的任一个获取到待解码视频帧对应的分像素插值方式信息。例如,当编码数据的帧级头信息中用于确定采样方式的标识位Pixel_Sourse_Interpolation为0时,表示输入视频帧对应的当前参考帧采用直接进行分像素插值处理,在Pixel_Sourse_Interpolation为1时,表示输入视频帧对应的当前参考帧采用采样处理后再分像素插值处理。解码端就可按照与编码数据中标识位所表示的分像素插值方式相同的方式对当前参考帧进行分像素插值处理得到目标参考帧,从而可依据目标参考帧对编码数据进行解码得到重建视频帧。
在一个实施例中,如图16所示,步骤S1504即根据待解码视频帧对应的分辨率信息以及当前参考帧对编码数据进行解码,得到待解码视频帧对应的重建视频帧包括:
步骤S1602,根据待解码视频帧对应的分辨率信息以及第一分辨率信息确定第三矢量变换参数,第一分辨率信息包括目标运动矢量单位分辨率信息或者当前参考帧的分辨率信息。
具体地,第三矢量变换参数用于对得到的运动矢量的位置信息或者运动矢量进行变换。第三矢量变换参数可以是第一分辨率信息与待解码视频帧的分辨率信息之间的比例,第三矢量变换参数与第一矢量变换参数是对应的。当利用第三矢量变换参数对目标运动矢量进行变换时,计算机设备可以将目标运动矢量变换为待解码视频帧对应的分辨率下所对应的运动矢量,此时第三矢量变换参数可以是第一矢量变换参数的倒数。当利用第三矢量变换参数对运动矢量对应的位置信息进行变换时,如果编码端中第一矢量变换参数用于对第一位置信息进行变换,则由于待解码块与编码块的位置信息相同,因此第三矢量变换参数与第一矢量变换参数相同。如果编码端中第一矢量变换参数用于对第二位置信息进行变换,由于根据目标运动矢量以及第一位置信息计算得到的位置值,是编码端中根据第一矢量变换参数对第二位置信息进行变换后的位置值,因此第三矢量变换参数为第一矢量变换参数的倒数。
步骤S1604,根据编码数据获取待解码视频帧中各个待解码块对应的目标运动矢量。
具体地,当编码数据中携带目标运动矢量时,计算机设备从编码数据中读取目标运动矢量。当编码数据中携带的是运动矢量差时,则计算机设备可以计算得到目标预测运动矢量,根据运动矢量差以及目标预测运动矢量得到目标运动矢量。
步骤S1606,根据第三矢量变换参数以及目标运动矢量得到待解码视频帧中各个待解码块对应的目标参考块。
具体地,得到第三矢量变换参数后,计算机设备根据第三矢量变换参数对得到的运动矢量或者运动矢量对应的位置信息进行变换,得到目标参考块对应的位置信息,从而得到目标参考块。
步骤S1608,根据目标参考块对编码数据进行解码,得到待解码视频帧对应的重建视频帧。
具体地,得到目标参考块后,计算机设备根据目标参考块的像素值以及编码数据中携带的待解码块的预测残差,得到重建视频帧各个图像块的像素值,得到重建视频帧。
在一个实施例中,步骤S1602即根据待解码视频帧对应的分辨率信 息以及第一分辨率信息确定第三矢量变换参数包括:根据待解码视频帧对应的分辨率信息和当前参考帧的分辨率信息确定第三矢量变换参数;步骤S1606即根据第三矢量变换参数以及目标运动矢量得到待解码视频帧中各个待解码块对应的目标参考块包括:获取当前待解码块对应的第一位置信息;根据第一位置信息、第三矢量变换参数以及目标运动矢量得到当前待解码块对应的目标参考块。
具体地,计算机设备可以根据第一位置信息、第三矢量变换参数以及目标运动矢量得到目标参考块对应的第二位置信息,根据第二位置信息得到目标参考块。由于编码与解码的对应性,如果编码端中第一矢量变换参数用于对第一位置信息进行变换,则由于待解码块与编码块的位置信息相同,因此第三矢量变换参数与第一矢量变换参数相同。如果编码端中第一矢量变换参数用于对第二位置信息进行变换,由于根据目标运动矢量以及第一位置信息计算得到的位置值,是编码端中根据第一矢量变换参数对第二位置信息进行变换后的位置值,因此第三矢量变换参数为第一矢量变换参数的倒数。
例如,待解码视频帧的分辨率为1200*1200像素,当前参考帧的分辨率为600*600像素。第一矢量变换参数用于对第二位置信息进行变换,第一矢量变换参数为2,则第三矢量变换参数为1/2。假设第一位置信息为(6,8),目标运动矢量为(0,2),则中间位置信息为(6,8)-(0,2)=(6,6),目标参考块对应的第二位置信息为(6*1/2,6*1/2)=(3,3)。
例如,待解码视频帧的分辨率为1200*1200像素,当前参考帧的分辨率为600*600像素,第一矢量变换参数用于对第一位置信息进行变换,第一矢量变换参数为1/2,则第三矢量变换参数为1/2。假设第一位置信息为(6,8),目标运动矢量为(0,1),则目标参考块对应的第二位置信息为(6*1/2,8*1/2)-(0,1)=(3,3)。
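上述两个解码端的数值例子可用如下示意性代码复核(函数名为自拟的假设性草稿):

```python
def ref_block_position(first_pos, target_mv, third_param, transformed="second"):
    """按第三矢量变换参数求目标参考块的第二位置信息。

    transformed="second":编码端变换的是第二位置信息,解码端先求中间位置再变换;
    transformed="first" :编码端变换的是第一位置信息,解码端先变换第一位置信息
                          再减去目标运动矢量。
    """
    if transformed == "second":
        mid = (first_pos[0] - target_mv[0], first_pos[1] - target_mv[1])
        return (mid[0] * third_param, mid[1] * third_param)
    scaled = (first_pos[0] * third_param, first_pos[1] * third_param)
    return (scaled[0] - target_mv[0], scaled[1] - target_mv[1])
```

两个分支分别对应文中(6,8)-(0,2)再乘1/2得(3,3),以及(6*1/2,8*1/2)-(0,1)=(3,3)两个例子。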
在一个实施例中,步骤S1602即根据待解码视频帧对应的分辨率信 息以及第一分辨率信息确定第三矢量变换参数包括:根据待解码视频帧对应的分辨率信息和目标运动矢量单位分辨率信息确定第三矢量变换参数;步骤S1606即根据第三矢量变换参数以及目标运动矢量得到待解码视频帧中各个待解码块对应的目标参考块包括:根据目标运动矢量以及第三矢量变换参数得到第一运动矢量;根据第一运动矢量获取当前待解码块对应的目标参考块。
具体地,第三矢量变换参数是根据待解码视频帧对应的分辨率信息和目标运动矢量单位分辨率信息确定的,用于将目标运动矢量变换到待解码帧对应的分辨率下的第一运动矢量。当得到第三矢量变换参数后,计算机设备可以将第三矢量变换参数以及目标运动矢量相乘,将得到的乘积作为第一运动矢量。可以理解,根据第三矢量变换参数以及目标运动矢量得到第一运动矢量这一过程,与根据第一矢量变换参数以及第一运动矢量得到当前编码块对应的目标运动矢量是逆过程。例如,如果在编码端中,该待解码块对应的编码块的第一矢量变换参数等于2,得到的第一运动矢量为(2,2),根据第一矢量变换参数与第一运动矢量(2,2)的乘积得到目标运动矢量为(4,4)。那么解码过程中,第三矢量变换参数为1/2,得到的目标运动矢量为(4,4),根据第三矢量变换参数1/2与目标运动矢量(4,4)的乘积得到第一运动矢量为(2,2)。
在一个实施例中,当编码数据中携带的是运动矢量差时,则根据编码数据获取待解码视频帧中各个待解码块对应的目标运动矢量包括:根据编码数据获取待解码视频帧中的当前待解码块对应的运动矢量差;获取当前待解码块对应的初始预测运动矢量;根据初始预测运动矢量对应的当前运动矢量单位分辨率信息和目标运动矢量单位分辨率信息,得到第二矢量变换系数;根据初始预测运动矢量和第二矢量变换系数得到当前解码块对应的目标预测运动矢量;根据目标预测运动矢量以及运动矢量差得到目标运动矢量。
具体地,由于解码与编码过程中待解码块与待编码块是对应的,初始预测运动矢量获取规则相同,因此当前待解码块对应的初始运动预测矢量与当前待编码块对应的初始预测运动矢量是一致的,得到目标预测运动矢量的方法可以参照编码过程中的方法,具体不再赘述。目标运动矢量是目标预测运动矢量以及运动矢量差的和。
在一个实施例中,还可以计算目标预测类型解码块在待解码视频帧对应的前向解码视频帧中的比例;根据该比例确定待解码视频帧对应的处理方式;根据处理方式得到待解码视频帧对应的分辨率信息。
具体地,目标预测类型解码块与目标预测类型编码块是对应的。前向解码视频帧是在待解码视频帧之前解码的视频帧,前向解码视频帧与前向编码视频帧也是对应的,因此编码端得到的目标预测类型编码块的比例与解码端得到的目标预测类型解码块的比例的计算方法以及结果也是一致的,得到目标预测类型解码块的比例的方法可以参照目标预测类型编码块的比例的方法,在此不再赘述。得到处理方式后,当处理方式为全分辨率处理方式时,对应的分辨率信息为原始分辨率。当处理方式为下采样处理方式时,获取预设的下采样比例或者从编码数据中的头信息中获取下采样比例。
在一个实施例中,可以计算前向解码视频帧中帧内解码块在前向解码视频帧中的比例,当该比例大于目标阈值时,确定处理方式为下采样处理方式。
对于帧内解码块对应的比例,可以是当该比例大于目标阈值时,确定待解码视频帧对应的目标处理方式为下采样处理方式,否则确定待解码视频帧对应的目标处理方式为全分辨率处理方式。
其中目标阈值可根据待解码视频帧对应的参考帧的处理方式进行确定。当待解码视频帧对应的参考帧的处理方式为下采样方式时,获取第一预设阈值T1,将第一预设阈值T1作为目标阈值。同样地,当待解码视频帧对应的参考帧的处理方式为全分辨率处理方式时,获取第二预设阈值T2,将第二预设阈值T2作为目标阈值。进一步地,在根据待解码视频帧对应的参考帧的分辨率信息获取到目标阈值后,根据目标阈值和前向解码视频帧中帧内解码块在前向解码视频帧中的比例确定待解码视频帧的处理方式。其中,当前向解码视频帧中帧内解码块在前向解码视频帧中的比例大于目标阈值时,确定待解码视频帧对应的处理方式为下采样处理方式。
以下以对视频序列A对应的编码数据进行解码为例,对视频解码方法进行说明。其中,假设输入视频帧a、b、c在解码端对应的待解码视频帧的名称分别为e、f以及g。
1、接收终端(即解码端)获取视频序列A对应的编码数据,从编码数据对应的序列头信息中获取得到目标视频序列编码模式为混合分辨率编码模式,因此,利用混合分辨率解码框架对编码数据进行解码。
2、混合分辨率解码框架的分辨率信息获取单元获取第一个待解码视频帧e对应的分辨率信息。可以理解,e对应的编码数据为对a1进行编码得到的数据。对e对应的编码数据进行帧内解码,得到重建视频帧e1,由于e对应的分辨率信息为下采样比例1/2,因此,可以对重建视频帧e1进行采样比例为2的上采样处理,得到解码视频帧e2。
3、混合分辨率解码框架的分辨率信息获取单元获取第二个待解码视频帧f对应的分辨率信息,可以理解,f对应的编码数据为对b1进行编码得到的数据。对f进行帧间解码,得到重建视频帧f1,由于f对应的分辨率信息为下采样比例1/4,因此,可以对重建视频帧f1进行采样比例为4的上采样处理,得到解码视频帧f2。
解码过程如下:由于f为帧间预测帧,因此需要将重建视频帧e1作为当前参考帧,可以理解,e1与a2是相同的,对e1进行与a2相同的采样处理,得到e3,这里的e3与a3是相同的,为目标参考帧。从编 码数据中获取得到当前待解码块对应的运动矢量差为MVD1,由于MVD1是目标分辨率即原始分辨率下的,因此需要将MVD1转换为f对应的分辨率下,因此可以得到MVD3为MVD1/4。获取初始预测矢量为MV2,由于初始预测矢量是在1/4下采样比例对应的分辨率下计算得到,与f对应的分辨率相同,故可以得到第一运动矢量为MV1,其等于MVD1/4+MV2。根据MV1获取得到目标参考块。根据目标参考块得到待解码块对应的预测值,将预测残差p1加上预测值重建得到重建视频帧f1对应的重建块。
4、混合分辨率解码框架的分辨率信息获取单元获取第三个待解码视频帧g对应的编码数据,可以理解,g对应的编码数据为对c1进行编码得到的数据。对g进行帧间解码,得到重建视频帧g1,由于g对应的分辨率信息为下采样比例1/8,因此,可以对重建视频帧g1进行采样比例为8的上采样处理,得到解码视频帧g2。
解码过程如下:由于g为帧间预测帧,因此需要将重建视频帧f1作为当前参考帧,可以理解,f1与b2是相同的,对f1进行与b2相同的采样处理,得到f3,这里的f3与b3是相同的,为目标参考帧。从编码数据中获取得到当前待解码块对应的运动矢量差为MVD2,由于MVD2是目标分辨率即原始分辨率下的,因此需要将MVD2转换为g对应的分辨率下,得到的运动矢量差为MVD2/8。获取初始预测矢量为MV4,由于初始预测矢量是在1/4下采样比例对应的分辨率下计算得到,需要变换为g对应的分辨率下,g对应的下采样比例为1/8,故可以得到第一运动矢量为MV3,其等于MVD2/8+MV4/2。根据MV3获取得到目标参考块。根据目标参考块得到待解码块对应的预测值,将预测残差p2加上预测值重建得到重建视频帧g1对应的重建块。
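上述第3步、第4步中第一运动矢量的重建可合并示意如下(以下采样比例表示分辨率信息,属于假设性草稿):

```python
def decode_first_mv(mvd, frame_downsample, pred_mv, pred_downsample):
    """把原始分辨率下的运动矢量差与初始预测矢量换算到待解码帧分辨率,重建第一运动矢量。"""
    scale_pred = frame_downsample / pred_downsample  # 初始预测矢量的换算系数
    return (mvd[0] * frame_downsample + pred_mv[0] * scale_pred,
            mvd[1] * frame_downsample + pred_mv[1] * scale_pred)
```

例如帧f对应MV1=MVD1/4+MV2(预测矢量已在1/4分辨率下,系数为1),帧g对应MV3=MVD2/8+MV4/2(预测矢量需从1/4分辨率换算到1/8分辨率,系数为1/2)。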
5、接收终端播放e2、f2以及g2。
如图17所示,在一个实施例中,提供了一种视频编码装置,该视频编码装置可以集成于上述的计算机设备如终端110或服务器120中, 具体可以包括输入视频序列获取模块1702、编码模式获取模块1704以及编码模块1706。
输入视频序列获取模块1702,用于获取输入视频序列;
编码模式获取模块1704,用于从候选视频序列编码模式中,获取输入视频序列对应的目标视频序列编码模式,其中,候选视频序列编码模式包括恒定分辨率编码模式和混合分辨率编码模式;
编码模块1706,用于根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据。
在一个实施例中,编码模块1706用于:当目标视频序列编码模式为恒定分辨率编码模式时,对输入视频序列的各个输入视频帧进行恒定分辨率编码。
在一个实施例中,编码模块1706包括:
处理方式获取单元,用于当目标视频序列编码模式为混合分辨率编码模式时,获取输入视频帧对应的处理方式;
处理单元,用于根据处理方式对输入视频帧进行处理,得到待编码帧,处理方式对应的待编码帧的分辨率为输入视频帧的分辨率或者比输入视频帧的分辨率小;
编码单元,用于在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据。
在一个实施例中,编码模式获取模块1704用于:获取当前环境信息,当前环境信息包括当前编码环境信息、当前解码环境信息中的至少一种信息;根据当前环境信息从候选视频序列编码模式中得到输入视频序列对应的目标视频序列编码模式。
在一个实施例中,当前环境信息包括当前应用场景信息;当当前应用场景信息对应的当前应用场景为实时应用场景时,目标视频序列编码模式为混合分辨率编码模式。
在一个实施例中,编码模块1706用于:将目标视频序列编码模式对应的目标视频序列编码模式信息添加至编码数据中。
如图18所示,在一个实施例中,提供了一种视频解码装置,该视频解码装置可以集成于上述的计算机设备如服务器120或终端110中,具体可以包括编码数据获取模块1802、解码模式获取模块1804以及解码模块1806。
编码数据获取模块1802,用于获取待解码视频序列对应的已编码数据;
解码模式获取模块1804,用于获取待解码视频序列对应的目标视频序列解码模式,该目标视频序列解码模式包括恒定分辨率解码模式或者混合分辨率解码模式;
解码模块1806,用于根据目标视频序列解码模式对待解码视频序列对应的已编码数据进行解码,得到对应的解码视频帧序列。
在一个实施例中,解码模块1806用于:当目标视频序列解码模式为恒定分辨率解码模式时,对待解码视频序列的各个待解码视频帧进行恒定分辨率解码。
在一个实施例中,解码模块1806包括:
分辨率信息获取单元,用于当目标视频序列解码模式为混合分辨率解码模式时,获取待解码视频帧对应的分辨率信息;
解码单元,用于根据待解码视频帧对应的分辨率信息对编码数据进行解码,得到待解码视频帧对应的重建视频帧;
处理单元,用于根据待解码视频帧对应的分辨率信息对重建视频帧进行处理,得到对应的解码视频帧。
在一个实施例中,解码模式获取模块1804用于:获取当前环境信息,当前环境信息包括当前编码环境信息、当前解码环境信息中的至少一种信息;根据当前环境信息从候选视频序列解码模式中得到待解码视 频序列对应的目标视频序列解码模式。
在一个实施例中,解码模式获取模块1804用于:从待解码视频序列对应的已编码数据中解析得到目标视频序列解码模式。
图19示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是图1中的终端110。如图19所示,该计算机设备包括通过系统总线连接的处理器1901、存储器1902、网络接口1903、输入装置1904和显示屏1905。其中,存储器1902包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机程序,该计算机程序被处理器1901执行时,可使得处理器1901实现本申请任一实施例的视频编码方法以及视频解码方法中的至少一种方法。该内存储器中也可储存有计算机程序,该计算机程序被处理器1901执行时,可使得处理器1901执行本申请任一实施例的视频编码方法以及视频解码方法中的至少一种方法。计算机设备的显示屏1905可以是液晶显示屏或者电子墨水显示屏,计算机设备的输入装置1904可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
图20示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是图1中的服务器120。如图20所示,该计算机设备包括通过系统总线连接的处理器2001、存储器2002以及网络接口2003。其中,存储器2002包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机程序,该计算机程序被处理器2001执行时,可使得处理器2001实现本申请任一实施例的视频编码方法以及视频解码方法中的至少一种方法。该内存储器中也可储存有计算机程序,该计算机程序被处理器2001执行时,可使得处理器2001执行本申请任一实施例的视频编码方法以及视频解码方法中的至少一种方法。
本领域技术人员可以理解,图19以及图20示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,本申请提供的视频编码装置可以实现为一种计算机程序的形式,该计算机程序可在如图19以及20所示的计算机设备上运行。计算机设备的存储器中可存储组成该视频编码装置的各个程序模块,比如,图17所示的输入视频序列获取模块1702、编码模式获取模块1704以及编码模块1706。各个程序模块构成的计算机程序使得处理器执行本说明书中描述的本申请各个实施例的视频编码方法中的步骤。
例如,图19以及20所示的计算机设备可以通过如图17所示的输入视频序列获取模块1702获取输入视频序列。通过编码模式获取模块1704从候选视频序列编码模式中,获取输入视频序列对应的目标视频序列编码模式,其中,候选视频序列编码模式包括恒定分辨率编码模式和混合分辨率编码模式。通过编码模块1706根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据。
在一个实施例中,本申请提供的视频解码装置可以实现为一种计算机程序的形式,该计算机程序可在如图19以及20所示的计算机设备上运行。计算机设备的存储器中可存储组成该视频解码装置的各个程序模块,比如,图18所示的编码数据获取模块1802、解码模式获取模块1804以及解码模块1806。各个程序模块构成的计算机程序使得处理器执行本说明书中描述的本申请各个实施例的视频解码方法中的步骤。
例如,图19以及20所示的计算机设备可以通过如图18所示的编码数据获取模块1802获取待解码视频序列对应的已编码数据。通过解码模式获取模块1804获取待解码视频序列对应的目标视频序列解码模式,该目标视频序列解码模式包括恒定分辨率解码模式或者混合分辨率 解码模式。通过解码模块1806根据目标视频序列解码模式对待解码视频序列对应的已编码数据进行解码,得到对应的解码视频帧序列。
在一个实施例中,提出了一种计算机设备,该计算机设备可如图19或图20所示。该计算机设备包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,处理器执行计算机程序时实现以下步骤:获取输入视频序列;从候选视频序列编码模式中,获取输入视频序列对应的目标视频序列编码模式,其中,候选视频序列编码模式包括恒定分辨率编码模式和混合分辨率编码模式;根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据。
在一个实施例中,处理器所执行的根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据包括:当目标视频序列编码模式为恒定分辨率编码模式时,对输入视频序列的各个输入视频帧进行恒定分辨率编码。
在一个实施例中,处理器所执行的根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据包括:当目标视频序列编码模式为混合分辨率编码模式时,获取输入视频帧对应的处理方式;根据处理方式对输入视频帧进行处理,得到待编码帧,处理方式对应的待编码帧的分辨率为输入视频帧的分辨率或者比输入视频帧的分辨率小;在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据。
在一个实施例中,处理器所执行的从候选视频序列编码模式中,获取输入视频序列对应的目标视频序列编码模式包括:获取当前环境信息,当前环境信息包括当前编码环境信息、当前解码环境信息中的至少一种信息;根据当前环境信息从候选视频序列编码模式中得到输入视频序列对应的目标视频序列编码模式。
在一个实施例中,当前环境信息包括当前应用场景信息;当当前应用场景信息对应的当前应用场景为实时应用场景时,目标视频序列编码模式为混合分辨率编码模式。
在一个实施例中,处理器所执行的根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据包括:将目标视频序列编码模式对应的目标视频序列编码模式信息添加至编码数据中。
在其中一个实施例中,处理器执行的在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据包括:将处理方式对应的处理方式信息添加至输入视频帧对应的编码数据中。
在其中一个实施例中,处理器执行的获取输入视频帧对应的处理方式包括:获取输入视频帧对应的处理参数,根据处理参数确定输入视频帧对应的处理方式;将处理方式对应的处理方式信息添加至输入视频帧对应的编码数据中包括:当处理参数在解码过程中不能重现时,将处理方式对应的处理方式信息添加至输入视频帧对应的编码数据中。
在其中一个实施例中,处理器执行的在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据包括:获取待编码帧对应的当前参考帧;在待编码帧的分辨率下,根据当前参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据。
在其中一个实施例中,处理器执行的根据当前参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据包括:根据待编码帧的分辨率信息和第一分辨率信息确定第一矢量变换参数,第一分辨率信息包括当前参考帧的分辨率信息或者输入视频帧对应的目标运动矢量单位分辨率信息;根据第一矢量变换参数得到待编码帧中各个编码块对应的目标运动矢量。
在其中一个实施例中,处理器执行的根据当前参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据包括:根据待编码帧的分辨率 信息对当前参考帧进行采样处理,得到对应的目标参考帧;根据目标参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据。
在其中一个实施例中,处理器执行的在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据包括:在待编码帧的分辨率下,获取待编码帧进行编码时对应的编码方式;将编码方式对应的编码方式信息添加至输入视频帧对应的编码数据中。
在其中一个实施例中,处理器所执行的根据待编码帧的分辨率信息和第一分辨率信息确定第一矢量变换参数包括:根据待编码帧的分辨率信息和当前参考帧的分辨率信息确定第一矢量变换参数;根据第一矢量变换参数得到待编码帧中各个编码块对应的目标运动矢量包括:获取当前编码块对应的第一位置信息,获取当前编码块对应的目标参考块对应的第二位置信息;根据第一矢量变换参数、第一位置信息和第二位置信息计算得到当前编码块对应的目标运动矢量。
在其中一个实施例中,处理器所执行的根据待编码帧的分辨率信息和第一分辨率信息确定第一矢量变换参数包括:获取目标运动矢量单位分辨率信息;根据待编码帧的分辨率信息和目标运动矢量单位分辨率信息确定第一矢量变换参数;根据第一矢量变换参数得到待编码帧中各个编码块对应的目标运动矢量包括:根据当前编码块与对应的目标参考块的位移得到第一运动矢量;根据第一矢量变换参数以及第一运动矢量得到当前编码块对应的目标运动矢量。
在其中一个实施例中,处理器所执行的根据当前参考帧对待编码帧进行编码,得到输入视频帧对应的编码数据包括:获取当前编码块对应的初始预测运动矢量;根据初始预测运动矢量对应的当前运动矢量单位分辨率信息和目标运动矢量单位分辨率信息,得到第二矢量变换系数;根据初始预测运动矢量和第二矢量变换系数得到当前编码块对应的目标预测运动矢量;根据目标运动矢量和目标预测运动矢量得到运动矢量 差。
在其中一个实施例中,处理器所执行的根据待编码帧的分辨率信息对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待编码帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧。
在其中一个实施例中,处理器所执行的根据待编码帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待编码帧的分辨率信息以及运动估计像素精度计算得到像素插值精度;根据像素插值精度直接对当前参考帧进行分像素插值处理,得到对应的目标参考帧。
在其中一个实施例中,处理器所执行的根据待编码帧的分辨率信息以及运动估计像素精度对当前参考帧进行采样处理,得到对应的目标参考帧包括:根据待编码帧的分辨率信息对当前参考帧进行采样处理,得到中间参考帧;根据运动估计像素精度对中间参考帧进行分像素插值处理,得到目标参考帧。
在其中一个实施例中,处理器所执行的在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据包括:将对当前参考帧进行采样处理对应的采样方式信息添加至当前参考帧对应的编码数据中。
在其中一个实施例中,处理器所执行的获取待编码帧对应的当前参考帧包括:获取第一参考规则,第一参考规则包括待编码帧与当前参考帧的分辨率大小关系;根据第一参考规则获取待编码帧对应的当前参考帧。
在其中一个实施例中,处理器所执行的在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据包括:将第一参考规则对应的规则信息添加至输入视频帧对应的编码数据中。
在其中一个实施例中,处理器所执行的获取输入视频帧对应的处理方式包括:计算目标预测类型编码块在输入视频帧对应的前向编码视频帧中的比例;根据该比例确定输入视频帧对应的处理方式。
在其中一个实施例中,处理方式包括下采样,处理器所执行的根据处理方式对输入视频帧进行处理,得到待编码帧包括:对输入视频帧进行下采样处理,得到待编码帧。
在其中一个实施例中,处理器所执行的在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据包括:将下采样处理对应的下采样处理方式信息添加至输入视频帧对应的编码数据中。
在一个实施例中,提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时,使得处理器执行以下步骤:获取输入视频序列;从候选视频序列编码模式中,获取输入视频序列对应的目标视频序列编码模式,其中,候选视频序列编码模式包括恒定分辨率编码模式和混合分辨率编码模式;根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据。
在一个实施例中,处理器所执行的根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据包括:当目标视频序列编码模式为恒定分辨率编码模式时,对输入视频序列的各个输入视频帧进行恒定分辨率编码。
在一个实施例中,处理器所执行的根据目标视频序列编码模式对输入视频序列的各个输入视频帧进行编码,得到编码数据包括:当目标视频序列编码模式为混合分辨率编码模式时,获取输入视频帧对应的处理方式;根据处理方式对输入视频帧进行处理,得到待编码帧,处理方式对应的待编码帧的分辨率为输入视频帧的分辨率或者比输入视频帧的 分辨率小;在待编码帧的分辨率下,对待编码帧进行编码得到输入视频帧对应的编码数据。
在一个实施例中,处理器所执行的从候选视频序列编码模式中,获取输入视频序列对应的目标视频序列编码模式包括:获取当前环境信息,当前环境信息包括当前编码环境信息、当前解码环境信息中的至少一种信息;根据当前环境信息从候选视频序列编码模式中得到输入视频序列对应的目标视频序列编码模式。
在一个实施例中,当前环境信息包括当前应用场景信息;当当前应用场景信息对应的当前应用场景为实时应用场景时,目标视频序列编码模式为混合分辨率编码模式。
In an embodiment, the encoding, by the processor, of each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data includes: adding target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data.
In one of the embodiments, the encoding of the to-be-encoded frame at the resolution of the to-be-encoded frame, performed by the processor, to obtain encoded data corresponding to the input video frame includes: adding processing manner information corresponding to the processing manner to the encoded data corresponding to the input video frame.
In one of the embodiments, the obtaining, by the processor, of the processing manner corresponding to the input video frame includes: obtaining a processing parameter corresponding to the input video frame, and determining the processing manner corresponding to the input video frame according to the processing parameter; and the adding of the processing manner information corresponding to the processing manner to the encoded data corresponding to the input video frame includes: adding the processing manner information corresponding to the processing manner to the encoded data corresponding to the input video frame when the processing parameter cannot be reproduced during decoding.
In one of the embodiments, the encoding of the to-be-encoded frame at the resolution of the to-be-encoded frame, performed by the processor, to obtain encoded data corresponding to the input video frame includes: obtaining a current reference frame corresponding to the to-be-encoded frame; and encoding the to-be-encoded frame at the resolution of the to-be-encoded frame according to the current reference frame to obtain the encoded data corresponding to the input video frame.
In one of the embodiments, the encoding, by the processor, of the to-be-encoded frame according to the current reference frame to obtain encoded data corresponding to the input video frame includes: determining a first vector transformation parameter according to resolution information of the to-be-encoded frame and first resolution information, the first resolution information including resolution information of the current reference frame or target motion vector unit resolution information corresponding to the input video frame; and obtaining, according to the first vector transformation parameter, a target motion vector corresponding to each coded block in the to-be-encoded frame.
In one of the embodiments, the encoding, by the processor, of the to-be-encoded frame according to the current reference frame to obtain encoded data corresponding to the input video frame includes: sampling the current reference frame according to the resolution information of the to-be-encoded frame to obtain a corresponding target reference frame; and encoding the to-be-encoded frame according to the target reference frame to obtain the encoded data corresponding to the input video frame.
In one of the embodiments, the encoding of the to-be-encoded frame at the resolution of the to-be-encoded frame, performed by the processor, to obtain encoded data corresponding to the input video frame includes: obtaining, at the resolution of the to-be-encoded frame, an encoding manner used when encoding the to-be-encoded frame; and adding encoding manner information corresponding to the encoding manner to the encoded data corresponding to the input video frame.
In one of the embodiments, the determining, by the processor, of the first vector transformation parameter according to the resolution information of the to-be-encoded frame and the first resolution information includes: determining the first vector transformation parameter according to the resolution information of the to-be-encoded frame and the resolution information of the current reference frame. The obtaining, according to the first vector transformation parameter, of the target motion vector corresponding to each coded block in the to-be-encoded frame includes: obtaining first position information corresponding to the current coded block, and obtaining second position information corresponding to the target reference block corresponding to the current coded block; and calculating the target motion vector corresponding to the current coded block according to the first vector transformation parameter, the first position information, and the second position information.
In one of the embodiments, the determining, by the processor, of the first vector transformation parameter according to the resolution information of the to-be-encoded frame and the first resolution information includes: obtaining target motion vector unit resolution information; and determining the first vector transformation parameter according to the resolution information of the to-be-encoded frame and the target motion vector unit resolution information. The obtaining, according to the first vector transformation parameter, of the target motion vector corresponding to each coded block in the to-be-encoded frame includes: obtaining a first motion vector according to a displacement between the current coded block and the corresponding target reference block; and obtaining the target motion vector corresponding to the current coded block according to the first vector transformation parameter and the first motion vector.
In one of the embodiments, the encoding, by the processor, of the to-be-encoded frame according to the current reference frame to obtain encoded data corresponding to the input video frame includes: obtaining an initial predicted motion vector corresponding to the current coded block; obtaining a second vector transformation coefficient according to current motion vector unit resolution information corresponding to the initial predicted motion vector and the target motion vector unit resolution information; obtaining a target predicted motion vector corresponding to the current coded block according to the initial predicted motion vector and the second vector transformation coefficient; and obtaining a motion vector difference according to the target motion vector and the target predicted motion vector.
In one of the embodiments, the sampling of the current reference frame according to the resolution information of the to-be-encoded frame, performed by the processor, to obtain the corresponding target reference frame includes: sampling the current reference frame according to the resolution information of the to-be-encoded frame and a motion estimation pixel precision to obtain the corresponding target reference frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information of the to-be-encoded frame and the motion estimation pixel precision, performed by the processor, to obtain the corresponding target reference frame includes: calculating a pixel interpolation precision according to the resolution information of the to-be-encoded frame and the motion estimation pixel precision; and directly performing sub-pixel interpolation on the current reference frame according to the pixel interpolation precision to obtain the corresponding target reference frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information of the to-be-encoded frame and the motion estimation pixel precision, performed by the processor, to obtain the corresponding target reference frame includes: sampling the current reference frame according to the resolution information of the to-be-encoded frame to obtain an intermediate reference frame; and performing sub-pixel interpolation on the intermediate reference frame according to the motion estimation pixel precision to obtain the target reference frame.
In one of the embodiments, the encoding of the to-be-encoded frame at the resolution of the to-be-encoded frame, performed by the processor, to obtain encoded data corresponding to the input video frame includes: adding sampling manner information corresponding to the sampling of the current reference frame to the encoded data corresponding to the current reference frame.
In one of the embodiments, the obtaining, by the processor, of the current reference frame corresponding to the to-be-encoded frame includes: obtaining a first reference rule, the first reference rule including a resolution magnitude relationship between the to-be-encoded frame and the current reference frame; and obtaining the current reference frame corresponding to the to-be-encoded frame according to the first reference rule.
In one of the embodiments, the encoding of the to-be-encoded frame at the resolution of the to-be-encoded frame, performed by the processor, to obtain encoded data corresponding to the input video frame includes: adding rule information corresponding to the first reference rule to the encoded data corresponding to the input video frame.
In one of the embodiments, the obtaining, by the processor, of the processing manner corresponding to the input video frame includes: calculating a ratio of coded blocks of a target prediction type in a forward encoded video frame corresponding to the input video frame; and determining the processing manner corresponding to the input video frame according to the ratio.
In one of the embodiments, the processing manner includes downsampling, and the processing of the input video frame according to the processing manner, performed by the processor, to obtain the to-be-encoded frame includes: downsampling the input video frame to obtain the to-be-encoded frame.
In one of the embodiments, the encoding of the to-be-encoded frame at the resolution of the to-be-encoded frame, performed by the processor, to obtain encoded data corresponding to the input video frame includes: adding downsampling manner information corresponding to the downsampling to the encoded data corresponding to the input video frame.
In an embodiment, a computer device is provided, which may be as shown in FIG. 19 or FIG. 20. The computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program: obtaining encoded data corresponding to a to-be-decoded video sequence; obtaining a target video sequence decoding mode corresponding to the to-be-decoded video sequence, the target video sequence decoding mode including a constant-resolution decoding mode or a mixed-resolution decoding mode; and decoding the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
In one of the embodiments, the decoding, by the processor, of the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence includes: performing constant-resolution decoding on each to-be-decoded video frame of the to-be-decoded video sequence when the target video sequence decoding mode is the constant-resolution decoding mode.
In one of the embodiments, the decoding, by the processor, of the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence includes: when the target video sequence decoding mode is the mixed-resolution decoding mode, obtaining resolution information corresponding to a to-be-decoded video frame; decoding the encoded data according to the resolution information corresponding to the to-be-decoded video frame to obtain a reconstructed video frame corresponding to the to-be-decoded video frame; and processing the reconstructed video frame according to the resolution information corresponding to the to-be-decoded video frame to obtain a corresponding decoded video frame.
In one of the embodiments, the obtaining, by the processor, of the target video sequence decoding mode corresponding to the to-be-decoded video sequence includes: obtaining current environment information, the current environment information including at least one of current encoding environment information and current decoding environment information; and obtaining, from candidate video sequence decoding modes according to the current environment information, the target video sequence decoding mode corresponding to the to-be-decoded video sequence.
In one of the embodiments, the obtaining, by the processor, of the target video sequence decoding mode corresponding to the to-be-decoded video sequence includes: parsing the target video sequence decoding mode from the encoded data corresponding to the to-be-decoded video sequence.
In one of the embodiments, the obtaining, by the processor, of the resolution information corresponding to the to-be-decoded video frame includes: reading processing manner information from the encoded data, and obtaining the resolution information corresponding to the to-be-decoded video frame according to the processing manner information.
In one of the embodiments, the obtaining, by the processor, of the resolution information corresponding to the to-be-decoded video frame includes: calculating a ratio of decoded blocks of a target prediction type in a forward decoded video frame corresponding to the to-be-decoded video frame; determining a processing manner corresponding to the to-be-decoded video frame according to the ratio; and obtaining the resolution information corresponding to the to-be-decoded video frame according to the processing manner.
In one of the embodiments, the decoding, by the processor, of the encoded data according to the resolution information corresponding to the to-be-decoded video frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame includes: obtaining a current reference frame corresponding to the to-be-decoded video frame; and decoding the encoded data according to the resolution information corresponding to the to-be-decoded video frame and the current reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame.
In one of the embodiments, the decoding, by the processor, of the encoded data according to the resolution information corresponding to the to-be-decoded video frame and the current reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame includes: determining a third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and first resolution information, the first resolution information including target motion vector unit resolution information or resolution information of the current reference frame; obtaining, according to the encoded data, a target motion vector corresponding to each to-be-decoded block in the to-be-decoded video frame; obtaining, according to the third vector transformation parameter and the target motion vector, a target reference block corresponding to each to-be-decoded block in the to-be-decoded video frame; and decoding the encoded data according to the target reference block to obtain the reconstructed video frame corresponding to the to-be-decoded video frame.
In one of the embodiments, the determining, by the processor, of the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the resolution information of the current reference frame. The obtaining, according to the third vector transformation parameter and the target motion vector, of the target reference block corresponding to each to-be-decoded block in the to-be-decoded video frame includes: obtaining first position information corresponding to the current to-be-decoded block; and obtaining the target reference block corresponding to the current to-be-decoded block according to the first position information, the third vector transformation parameter, and the target motion vector.
In one of the embodiments, the determining, by the processor, of the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the target motion vector unit resolution information. The obtaining, according to the third vector transformation parameter and the target motion vector, of the target reference block corresponding to each to-be-decoded block in the to-be-decoded video frame includes: obtaining a first motion vector according to the target motion vector and the third vector transformation parameter; and obtaining the target reference block corresponding to the current to-be-decoded block according to the first motion vector.
In one of the embodiments, the obtaining, by the processor, according to the encoded data, of the target motion vector corresponding to each to-be-decoded block in the to-be-decoded video frame includes: obtaining, according to the encoded data, a motion vector difference corresponding to the current to-be-decoded block in the to-be-decoded video frame; obtaining an initial predicted motion vector corresponding to the current to-be-decoded block; obtaining a second vector transformation coefficient according to current motion vector unit resolution information corresponding to the initial predicted motion vector and the target motion vector unit resolution information; obtaining a target predicted motion vector corresponding to the current to-be-decoded block according to the initial predicted motion vector and the second vector transformation coefficient; and obtaining the target motion vector according to the target predicted motion vector and the motion vector difference.
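The reconstruction of the target motion vector described above can be sketched as follows; it mirrors the encoder-side differencing, with the same illustrative assumption that the second vector transformation coefficient is a unit-resolution ratio.

```python
# Hypothetical decoder sketch: rescale the initial predicted motion
# vector with the second vector transformation coefficient, then add the
# transmitted motion vector difference to recover the target MV.

def decode_target_motion_vector(mvd, initial_pred_mv, current_unit,
                                target_unit):
    coeff = target_unit / current_unit  # second vector transformation coefficient
    target_pred = (initial_pred_mv[0] * coeff, initial_pred_mv[1] * coeff)
    return (target_pred[0] + mvd[0], target_pred[1] + mvd[1])

# Illustrative values: predictor (3, -2) in quarter-pel units, residual
# (2, 2) in sixteenth-pel units.
mv = decode_target_motion_vector((2.0, 2.0), (3, -2), 4, 16)
```

As long as encoder and decoder derive the same coefficient from the same unit resolution information, the decoder recovers the encoder's target motion vector exactly.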
In one of the embodiments, the decoding, by the processor, of the encoded data according to the resolution information corresponding to the to-be-decoded video frame and the current reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame includes: sampling the current reference frame according to the resolution information corresponding to the to-be-decoded video frame to obtain a corresponding target reference frame; and decoding the to-be-decoded video frame according to the target reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information corresponding to the to-be-decoded video frame, performed by the processor, to obtain the corresponding target reference frame includes: processing the current reference frame according to the resolution information of the to-be-decoded video frame and a motion estimation pixel precision to obtain the corresponding target reference frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information of the to-be-decoded video frame and the motion estimation pixel precision, performed by the processor, to obtain the corresponding target reference frame includes: calculating a pixel interpolation precision according to the resolution information of the to-be-decoded video frame and the motion estimation pixel precision; and directly performing sub-pixel interpolation on the current reference frame according to the pixel interpolation precision to obtain the corresponding target reference frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information of the to-be-decoded video frame and the motion estimation pixel precision, performed by the processor, to obtain the corresponding target reference frame includes: sampling the current reference frame according to the resolution information of the to-be-decoded video frame to obtain an intermediate reference frame; and performing sub-pixel interpolation on the intermediate reference frame according to the pixel precision used for motion estimation to obtain the target reference frame.
In one of the embodiments, the obtaining, by the processor, of the current reference frame corresponding to the to-be-decoded video frame includes: obtaining a second reference rule, the second reference rule including a resolution magnitude relationship between the to-be-decoded video frame and the reference frame; and obtaining the current reference frame corresponding to the to-be-decoded video frame according to the second reference rule.
In an embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the following steps:
obtaining encoded data corresponding to a to-be-decoded video sequence; obtaining a target video sequence decoding mode corresponding to the to-be-decoded video sequence, the target video sequence decoding mode including a constant-resolution decoding mode or a mixed-resolution decoding mode; and decoding the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
In one of the embodiments, the decoding, by the processor, of the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence includes: performing constant-resolution decoding on each to-be-decoded video frame of the to-be-decoded video sequence when the target video sequence decoding mode is the constant-resolution decoding mode.
In one of the embodiments, the decoding, by the processor, of the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence includes: when the target video sequence decoding mode is the mixed-resolution decoding mode, obtaining resolution information corresponding to a to-be-decoded video frame; decoding the encoded data according to the resolution information corresponding to the to-be-decoded video frame to obtain a reconstructed video frame corresponding to the to-be-decoded video frame; and processing the reconstructed video frame according to the resolution information corresponding to the to-be-decoded video frame to obtain a corresponding decoded video frame.
In one of the embodiments, the obtaining, by the processor, of the target video sequence decoding mode corresponding to the to-be-decoded video sequence includes: obtaining current environment information, the current environment information including at least one of current encoding environment information and current decoding environment information; and obtaining, from candidate video sequence decoding modes according to the current environment information, the target video sequence decoding mode corresponding to the to-be-decoded video sequence.
In one of the embodiments, the obtaining, by the processor, of the target video sequence decoding mode corresponding to the to-be-decoded video sequence includes: parsing the target video sequence decoding mode from the encoded data corresponding to the to-be-decoded video sequence.
In one of the embodiments, the obtaining, by the processor, of the resolution information corresponding to the to-be-decoded video frame includes: reading processing manner information from the encoded data, and obtaining the resolution information corresponding to the to-be-decoded video frame according to the processing manner information.
In one of the embodiments, the obtaining, by the processor, of the resolution information corresponding to the to-be-decoded video frame includes: calculating a ratio of decoded blocks of a target prediction type in a forward decoded video frame corresponding to the to-be-decoded video frame; determining a processing manner corresponding to the to-be-decoded video frame according to the ratio; and obtaining the resolution information corresponding to the to-be-decoded video frame according to the processing manner.
In one of the embodiments, the decoding, by the processor, of the encoded data according to the resolution information corresponding to the to-be-decoded video frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame includes: obtaining a current reference frame corresponding to the to-be-decoded video frame; and decoding the encoded data according to the resolution information corresponding to the to-be-decoded video frame and the current reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame.
In one of the embodiments, the decoding, by the processor, of the encoded data according to the resolution information corresponding to the to-be-decoded video frame and the current reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame includes: determining a third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and first resolution information, the first resolution information including target motion vector unit resolution information or resolution information of the current reference frame; obtaining, according to the encoded data, a target motion vector corresponding to each to-be-decoded block in the to-be-decoded video frame; obtaining, according to the third vector transformation parameter and the target motion vector, a target reference block corresponding to each to-be-decoded block in the to-be-decoded video frame; and decoding the encoded data according to the target reference block to obtain the reconstructed video frame corresponding to the to-be-decoded video frame.
In one of the embodiments, the determining, by the processor, of the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the resolution information of the current reference frame. The obtaining, according to the third vector transformation parameter and the target motion vector, of the target reference block corresponding to each to-be-decoded block in the to-be-decoded video frame includes: obtaining first position information corresponding to the current to-be-decoded block; and obtaining the target reference block corresponding to the current to-be-decoded block according to the first position information, the third vector transformation parameter, and the target motion vector.
In one of the embodiments, the determining, by the processor, of the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the first resolution information includes: determining the third vector transformation parameter according to the resolution information corresponding to the to-be-decoded video frame and the target motion vector unit resolution information. The obtaining, according to the third vector transformation parameter and the target motion vector, of the target reference block corresponding to each to-be-decoded block in the to-be-decoded video frame includes: obtaining a first motion vector according to the target motion vector and the third vector transformation parameter; and obtaining the target reference block corresponding to the current to-be-decoded block according to the first motion vector.
In one of the embodiments, the obtaining, by the processor, according to the encoded data, of the target motion vector corresponding to each to-be-decoded block in the to-be-decoded video frame includes: obtaining, according to the encoded data, a motion vector difference corresponding to the current to-be-decoded block in the to-be-decoded video frame; obtaining an initial predicted motion vector corresponding to the current to-be-decoded block; obtaining a second vector transformation coefficient according to current motion vector unit resolution information corresponding to the initial predicted motion vector and the target motion vector unit resolution information; obtaining a target predicted motion vector corresponding to the current to-be-decoded block according to the initial predicted motion vector and the second vector transformation coefficient; and obtaining the target motion vector according to the target predicted motion vector and the motion vector difference.
In one of the embodiments, the decoding, by the processor, of the encoded data according to the resolution information corresponding to the to-be-decoded video frame and the current reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame includes: sampling the current reference frame according to the resolution information corresponding to the to-be-decoded video frame to obtain a corresponding target reference frame; and decoding the to-be-decoded video frame according to the target reference frame to obtain the reconstructed video frame corresponding to the to-be-decoded video frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information corresponding to the to-be-decoded video frame, performed by the processor, to obtain the corresponding target reference frame includes: processing the current reference frame according to the resolution information of the to-be-decoded video frame and a motion estimation pixel precision to obtain the corresponding target reference frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information of the to-be-decoded video frame and the motion estimation pixel precision, performed by the processor, to obtain the corresponding target reference frame includes: calculating a pixel interpolation precision according to the resolution information of the to-be-decoded video frame and the motion estimation pixel precision; and directly performing sub-pixel interpolation on the current reference frame according to the pixel interpolation precision to obtain the corresponding target reference frame.
In one of the embodiments, the sampling of the current reference frame according to the resolution information of the to-be-decoded video frame and the motion estimation pixel precision, performed by the processor, to obtain the corresponding target reference frame includes: sampling the current reference frame according to the resolution information of the to-be-decoded video frame to obtain an intermediate reference frame; and performing sub-pixel interpolation on the intermediate reference frame according to the pixel precision used for motion estimation to obtain the target reference frame.
In one of the embodiments, the obtaining, by the processor, of the current reference frame corresponding to the to-be-decoded video frame includes: obtaining a second reference rule, the second reference rule including a resolution magnitude relationship between the to-be-decoded video frame and the reference frame; and obtaining the current reference frame corresponding to the to-be-decoded video frame according to the second reference rule.
It should be understood that although the steps in the flowcharts of the embodiments of this application are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly specified herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in the embodiments may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily completed at the same moment but may be performed at different moments, and they are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
A person of ordinary skill in the art can understand that all or some of the procedures of the methods of the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when the program is executed, the procedures of the foregoing method embodiments may be included. Any reference to a memory, storage, database, or other medium used in the embodiments provided in this application may include a non-volatile and/or volatile memory. The non-volatile memory may include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may include a random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), a Rambus direct RAM (RDRAM), a direct Rambus dynamic RAM (DRDRAM), and a Rambus dynamic RAM (RDRAM).
The technical features of the foregoing embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the foregoing embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination shall be considered to fall within the scope recorded in this specification.
The foregoing embodiments describe several implementations of this application, and their descriptions are specific and detailed, but they shall not be construed as limiting the scope of this application. It should be noted that a person of ordinary skill in the art may further make several variations and improvements without departing from the concept of this application, and these all fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the appended claims.
Claims (17)
- A video encoding method, performed by a computer device, the method comprising: obtaining an input video sequence; obtaining, from candidate video sequence encoding modes, a target video sequence encoding mode corresponding to the input video sequence, the candidate video sequence encoding modes comprising a constant-resolution encoding mode and a mixed-resolution encoding mode; and encoding each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
- The method according to claim 1, wherein the encoding of each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data comprises: performing constant-resolution encoding on each input video frame of the input video sequence when the target video sequence encoding mode is the constant-resolution encoding mode.
- The method according to claim 1, wherein the encoding of each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data comprises: when the target video sequence encoding mode is the mixed-resolution encoding mode, obtaining a processing manner corresponding to an input video frame; processing the input video frame according to the processing manner to obtain a to-be-encoded frame, a resolution of the to-be-encoded frame corresponding to the processing manner being the resolution of the input video frame or smaller than the resolution of the input video frame; and encoding the to-be-encoded frame at the resolution of the to-be-encoded frame to obtain encoded data corresponding to the input video frame.
- The method according to claim 1, wherein the obtaining, from the candidate video sequence encoding modes, of the target video sequence encoding mode corresponding to the input video sequence comprises: obtaining current environment information, the current environment information comprising at least one of current encoding environment information and current decoding environment information; and obtaining, from the candidate video sequence encoding modes according to the current environment information, the target video sequence encoding mode corresponding to the input video sequence.
- The method according to claim 4, wherein the current environment information comprises current application scene information; and when the current application scene corresponding to the current application scene information is a real-time application scene, the target video sequence encoding mode is the mixed-resolution encoding mode.
- The method according to claim 1, wherein the encoding of each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data comprises: adding target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data.
- The method according to claim 6, wherein the adding of the target video sequence encoding mode information corresponding to the target video sequence encoding mode to the encoded data comprises: adding, to sequence-level header information of the encoded data, a flag bit describing the target video sequence encoding mode, the corresponding target video sequence encoding mode being the mixed-resolution encoding mode when the flag bit is 1, and the corresponding target video sequence encoding mode being the constant-resolution encoding mode when the flag bit is 0.
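The flag-bit convention of claim 7 can be sketched as follows; the 1/0 semantics come from the claim itself, while the function and mode names are illustrative.

```python
# Sketch of the claim-7 convention: a one-bit flag in the sequence-level
# header information of the encoded data selects the target video
# sequence encoding mode. Names here are illustrative, not from the claim.

def parse_sequence_mode_flag(flag_bit):
    # 1 -> mixed-resolution encoding mode, 0 -> constant-resolution mode.
    return "mixed-resolution" if flag_bit == 1 else "constant-resolution"
```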
- A video decoding method, performed by a computer device, the method comprising: obtaining encoded data corresponding to a to-be-decoded video sequence; obtaining a target video sequence decoding mode corresponding to the to-be-decoded video sequence, the target video sequence decoding mode comprising a constant-resolution decoding mode or a mixed-resolution decoding mode; and decoding the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
- The method according to claim 8, wherein the decoding of the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence comprises: performing constant-resolution decoding on each to-be-decoded video frame of the to-be-decoded video sequence when the target video sequence decoding mode is the constant-resolution decoding mode.
- The method according to claim 8, wherein the decoding of the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain the corresponding decoded video frame sequence comprises: when the target video sequence decoding mode is the mixed-resolution decoding mode, obtaining resolution information corresponding to a to-be-decoded video frame; decoding the encoded data according to the resolution information corresponding to the to-be-decoded video frame to obtain a reconstructed video frame corresponding to the to-be-decoded video frame; and processing the reconstructed video frame according to the resolution information corresponding to the to-be-decoded video frame to obtain a corresponding decoded video frame.
- The method according to claim 8, wherein the obtaining of the target video sequence decoding mode corresponding to the to-be-decoded video sequence comprises: obtaining current environment information, the current environment information comprising at least one of current encoding environment information and current decoding environment information; and obtaining, from candidate video sequence decoding modes according to the current environment information, the target video sequence decoding mode corresponding to the to-be-decoded video sequence.
- The method according to claim 11, wherein the current environment information comprises current application scene information; and when the current application scene corresponding to the current application scene information is a real-time application scene, the target video sequence decoding mode is the mixed-resolution decoding mode.
- The method according to claim 8, wherein the obtaining of the target video sequence decoding mode corresponding to the to-be-decoded video sequence comprises: parsing the target video sequence decoding mode from the encoded data corresponding to the to-be-decoded video sequence, sequence-level header information of the encoded data comprising target video sequence decoding mode information corresponding to the target video sequence decoding mode.
- A video encoding apparatus, the apparatus comprising: an input video sequence obtaining module, configured to obtain an input video sequence; an encoding mode obtaining module, configured to obtain, from candidate video sequence encoding modes, a target video sequence encoding mode corresponding to the input video sequence, the candidate video sequence encoding modes comprising a constant-resolution encoding mode and a mixed-resolution encoding mode; and an encoding module, configured to encode each input video frame of the input video sequence according to the target video sequence encoding mode to obtain encoded data.
- A video decoding apparatus, the apparatus comprising: an encoded data obtaining module, configured to obtain encoded data corresponding to a to-be-decoded video sequence; a decoding mode obtaining module, configured to obtain a target video sequence decoding mode corresponding to the to-be-decoded video sequence, the target video sequence decoding mode comprising a constant-resolution decoding mode or a mixed-resolution decoding mode; and a decoding module, configured to decode the encoded data corresponding to the to-be-decoded video sequence according to the target video sequence decoding mode to obtain a corresponding decoded video frame sequence.
- A computer device, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the method according to any one of claims 1 to 7 or 8 to 13.
- A computer-readable storage medium, storing a computer program that, when executed by a processor, causes the processor to perform the method according to any one of claims 1 to 7 or 8 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/988,511 US11330254B2 (en) | 2018-06-20 | 2020-08-07 | Video encoding method and apparatus, video decoding method and apparatus, computer device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810637511.2 | 2018-06-20 | ||
CN201810637511.2A CN108848377B (zh) | 2018-06-20 | Video encoding and decoding method and apparatus, computer device, and storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/988,511 Continuation US11330254B2 (en) | 2018-06-20 | 2020-08-07 | Video encoding method and apparatus, video decoding method and apparatus, computer device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019242408A1 true WO2019242408A1 (zh) | 2019-12-26 |
Family
ID=64202941
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/084927 WO2019242408A1 (zh) | 2018-06-20 | 2019-04-29 | 视频编码方法、视频解码方法、装置、计算机设备和存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11330254B2 (zh) |
CN (1) | CN108848377B (zh) |
WO (1) | WO2019242408A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108848377B (zh) * | 2018-06-20 | 2022-03-01 | Tencent Technology (Shenzhen) Co., Ltd. | Video encoding and decoding method and apparatus, computer device, and storage medium |
CN112118446B (zh) * | 2019-06-20 | 2022-04-26 | Hangzhou Hikvision Digital Technology Co., Ltd. | Image compression method and apparatus |
WO2021000245A1 (en) * | 2019-07-02 | 2021-01-07 | Alibaba Group Holding Limited | Constant rate factor control for adaptive resolution video coding |
US20220224925A1 (en) * | 2019-07-09 | 2022-07-14 | Alibaba Group Holding Limited | Resolution-adaptive video coding |
CN110536168B (zh) * | 2019-09-11 | 2021-09-17 | Beijing Dajia Internet Information Technology Co., Ltd. | Video uploading method and apparatus, electronic device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090041944A (ko) * | 2007-10-25 | 2009-04-29 | C&S Technology Co., Ltd. | Motion estimation method and apparatus using mode information of neighboring blocks |
CN105898308A (zh) * | 2015-12-18 | 2016-08-24 | LeCloud Computing Co., Ltd. | Variable-resolution encoding mode prediction method and apparatus |
CN107155107A (zh) * | 2017-03-21 | 2017-09-12 | Tencent Technology (Shenzhen) Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus |
CN108848377A (zh) * | 2018-06-20 | 2018-11-20 | Tencent Technology (Shenzhen) Co., Ltd. | Video encoding and decoding method and apparatus, computer device, and storage medium |
CN109495740A (zh) * | 2018-11-07 | 2019-03-19 | Jianhu Yunfei Data Technology Co., Ltd. | Method for encoding an image based on environment information |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8391368B2 (en) * | 2005-04-08 | 2013-03-05 | Sri International | Macro-block based mixed resolution video compression system |
WO2010092740A1 (ja) * | 2009-02-10 | 2010-08-19 | Panasonic Corporation | Image processing device, image processing method, program, and integrated circuit |
US20110013692A1 (en) * | 2009-03-29 | 2011-01-20 | Cohen Robert A | Adaptive Video Transcoding |
JP5225201B2 (ja) * | 2009-06-01 | 2013-07-03 | Kyocera Document Solutions Inc. | Image processing apparatus |
EP2630799A4 (en) * | 2010-10-20 | 2014-07-02 | Nokia Corp | METHOD AND DEVICE FOR VIDEO CODING AND DECODING |
CN102883157B (zh) * | 2011-07-12 | 2015-09-09 | Zhejiang University | Video encoding method and video encoder |
US20160323600A1 (en) * | 2015-04-30 | 2016-11-03 | Zhan Ma | Methods and Apparatus for Use of Adaptive Prediction Resolution in Video Coding |
US10785279B2 (en) * | 2016-12-29 | 2020-09-22 | Facebook, Inc. | Video encoding using starve mode |
-
2018
- 2018-06-20 CN CN201810637511.2A patent/CN108848377B/zh active Active
-
2019
- 2019-04-29 WO PCT/CN2019/084927 patent/WO2019242408A1/zh active Application Filing
-
2020
- 2020-08-07 US US16/988,511 patent/US11330254B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN108848377B (zh) | 2022-03-01 |
US20200374511A1 (en) | 2020-11-26 |
CN108848377A (zh) | 2018-11-20 |
US11330254B2 (en) | 2022-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019242490A1 (zh) | Video encoding and decoding method and apparatus, computer device, and storage medium | |
CN108848381B (zh) | Video encoding method, decoding method and apparatus, computer device, and storage medium | |
WO2019242491A1 (zh) | Video encoding and decoding method and apparatus, computer device, and storage medium | |
WO2019242528A1 (zh) | Video encoding and decoding method and apparatus, storage medium, and computer device | |
CN108769682B (zh) | Video encoding and decoding method and apparatus, computer device, and storage medium | |
WO2019242506A1 (zh) | Video encoding method, decoding method and apparatus, computer device, and storage medium | |
WO2019242486A1 (zh) | Video encoding method, decoding method and apparatus, computer device, and storage medium | |
WO2019242424A1 (zh) | Video encoding and decoding method and apparatus, computer device, and storage medium | |
CN108924553B (zh) | Video encoding and decoding method and apparatus, computer device, and storage medium | |
CN108833923B (zh) | Video encoding and decoding method and apparatus, storage medium, and computer device | |
WO2019242408A1 (zh) | Video encoding method, video decoding method and apparatus, computer device, and storage medium | |
US11025950B2 (en) | Motion field-based reference frame rendering for motion compensated prediction in video coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19821698 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19821698 Country of ref document: EP Kind code of ref document: A1 |