CN115086654A - Video encoding and decoding method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number
CN115086654A
CN115086654A (application CN202110273043.7A)
Authority
CN
China
Prior art keywords
intra
mode
prediction modes
candidate
prediction mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110273043.7A
Other languages
Chinese (zh)
Inventor
王力强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110273043.7A
Publication of CN115086654A
Legal status: Pending

Classifications

    • H04N19/107 — Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/13 — Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/176 — Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or macroblock
    • H04N19/587 — Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    (All under H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of this application provide a video encoding and decoding method, a video encoding and decoding apparatus, a computer-readable medium, and an electronic device. The video decoding method includes: decoding a coding block of a video image frame to obtain a SAWP index value; acquiring the intra prediction modes adopted by neighboring blocks of the coding block, and acquiring an intra prediction mode statistical result obtained by counting the intra prediction modes adopted by coding blocks in the video image frame; determining a most probable intra prediction mode (MPM) list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result; and decoding other coding blocks of the video image frame based on the MPM list. The technical scheme of the embodiments can improve video encoding and decoding efficiency.

Description

Video encoding and decoding method and device, computer readable medium and electronic equipment
Technical Field
The present application relates to the field of computer and communication technologies, and in particular to a video encoding and decoding method and apparatus, a computer-readable medium, and an electronic device.
Background
In the related art, only the intra prediction modes of spatially neighboring blocks and preset intra prediction modes are used to construct the MPM (Most Probable Mode) list for SAWP (Spatial Angular Weighted Prediction). In screen content coding, however, the correlation between the intra prediction mode of the current coding block and those of its spatial neighbors is weak, which lengthens the average coding length of the SAWP intra prediction modes and hurts encoding and decoding efficiency.
Disclosure of Invention
Embodiments of the present application provide a video encoding and decoding method and apparatus, a computer-readable medium, and an electronic device, so that video encoding and decoding efficiency can be improved at least to some extent.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of the embodiments of the present application, a video decoding method is provided, including: decoding a coding block of a video image frame to obtain a SAWP index value; acquiring the intra prediction modes adopted by neighboring blocks of the coding block, and acquiring an intra prediction mode statistical result obtained by counting the intra prediction modes adopted by coding blocks in the video image frame; determining a most probable intra prediction mode (MPM) list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result; and decoding other coding blocks of the video image frame based on the MPM list.
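The list-construction step above can be sketched in Python as follows. This is purely illustrative: the SAWP preset table, the mode numbers, and the function names are hypothetical and are not taken from any standard or reference software.

```python
def build_mpm_list(neighbor_modes, sawp_idx, mode_counts, size=2):
    """Sketch: merge neighbor modes, a SAWP-index-derived preset, and the
    most frequent historical modes into an ordered MPM list of `size` entries."""
    # Hypothetical preset table keyed by the SAWP index value.
    sawp_presets = {0: [12, 24], 1: [3, 33]}
    candidates = list(neighbor_modes) + sawp_presets.get(sawp_idx, [])
    # Append modes from the statistical result, most frequent first.
    candidates += sorted(mode_counts, key=mode_counts.get, reverse=True)
    mpm, seen = [], set()
    for m in candidates:              # keep first occurrence, preserve order
        if m not in seen:
            seen.add(m)
            mpm.append(m)
        if len(mpm) == size:
            break
    return mpm
```

The decoder then reads a short index into this list instead of a full mode number for most blocks, which is where the coding-length saving comes from.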
According to an aspect of the embodiments of the present application, a video encoding method is provided, including: determining a SAWP index value corresponding to a coding block of a video image frame; acquiring the intra prediction modes adopted by neighboring blocks of the coding block, and acquiring an intra prediction mode statistical result obtained by counting the intra prediction modes adopted by coding blocks in the video image frame; determining a most probable intra prediction mode (MPM) list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result; and encoding other coding blocks of the video image frame based on the MPM list.
According to an aspect of the embodiments of the present application, a video decoding apparatus is provided, including: a decoding unit configured to decode a coding block of a video image frame to obtain a Spatial Angular Weighted Prediction (SAWP) index value; a first acquiring unit configured to acquire the intra prediction modes adopted by neighboring blocks of the coding block, and to acquire an intra prediction mode statistical result obtained by counting the intra prediction modes adopted by coding blocks in the video image frame; a first processing unit configured to determine a most probable intra prediction mode (MPM) list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result; and a second processing unit configured to decode other coding blocks of the video image frame based on the MPM list.
In some embodiments of the present application, based on the foregoing solution, the first acquiring unit is configured to count the designated intra prediction modes adopted by the coding blocks in the video image frame to obtain the intra prediction mode statistical result.
In some embodiments of the present application, based on the foregoing solution, the designated intra prediction modes include:
all intra prediction modes; or
A horizontal prediction mode and a vertical prediction mode; or
A horizontal prediction mode, a vertical prediction mode and a Bilinear prediction mode; or
A horizontal prediction mode, a vertical prediction mode, a Bilinear prediction mode and a DC prediction mode; or
An intra prediction mode selected from mode 0 to mode 33 among the intra prediction modes; or
An intra prediction mode selected from the horizontal prediction mode, the vertical prediction mode, the Bilinear prediction mode, the DC prediction mode, the horizontal-like prediction modes, and the vertical-like prediction modes, where the horizontal-like prediction modes include modes 56 to 59, mode 23, and mode 25 of the intra prediction modes, and the vertical-like prediction modes include modes 42 to 45, mode 11, and mode 13 of the intra prediction modes; or
An intra-frame prediction mode selected from the intra-frame prediction modes adopted by the SAWP; or
An Intra-frame prediction Mode selected from Intra-frame prediction modes counted by a FIMC (Frequency-based Intra Mode Coding) method.
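The frequency counting over a designated subset might look like the sketch below. The mode numbers used for the horizontal and vertical modes are hypothetical placeholders, and the designated set shown corresponds to the horizontal-plus-vertical variant from the list above.

```python
from collections import Counter

# Hypothetical mode numbers, for illustration only.
HORIZONTAL, VERTICAL = 24, 12
DESIGNATED = {HORIZONTAL, VERTICAL}   # variant: count only H and V modes

def update_statistics(counts, decoded_mode):
    """Count a just-decoded intra mode only if it is a designated mode."""
    if decoded_mode in DESIGNATED:
        counts[decoded_mode] += 1
    return counts

counts = Counter()
for mode in [12, 24, 12, 7, 12]:      # modes seen while decoding the frame
    update_statistics(counts, mode)
# mode 7 is ignored because it is not in the designated set
```

Restricting the count to a designated subset keeps the table small while still tracking the modes most likely to recur in screen content.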
In some embodiments of the present application, based on the foregoing solution, the first acquiring unit is configured to acquire the intra prediction mode statistical result produced by FIMC-based counting.
In some embodiments of the present application, based on the foregoing solution, the video decoding apparatus further includes an updating unit configured to acquire the two intra prediction modes of a SAWP mode obtained by decoding a coding block of the video image frame, and to update the FIMC-counted intra prediction mode statistical result using one or both of those two intra prediction modes.
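A minimal sketch of the updating unit's behavior, assuming a plain dictionary as the FIMC-style frequency table; the `count_both` flag that chooses between counting one or both SAWP modes is a hypothetical parameter, not from the text.

```python
def update_fimc_with_sawp(counts, mode0, mode1, count_both=True):
    """After decoding a SAWP block, feed one or both of its two intra
    prediction modes back into the frequency table (illustrative sketch)."""
    counts[mode0] = counts.get(mode0, 0) + 1
    if count_both and mode1 != mode0:
        counts[mode1] = counts.get(mode1, 0) + 1
    return counts
```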
In some embodiments of the present application, based on the foregoing solution, the first processing unit includes: a determining unit configured to determine a plurality of candidate prediction modes having an ordering according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result; and a selecting unit configured to select a set number of distinct candidate prediction modes from the plurality of candidate prediction modes according to that ordering, so as to generate the most probable intra prediction mode (MPM) list.
In some embodiments of the present application, based on the foregoing solution, the determining unit is configured to: determine a first candidate mode set according to the intra prediction modes adopted by the neighboring blocks, and determine a second candidate mode set according to the SAWP index value; select n1 intra prediction modes from the intra prediction mode statistical result in descending order of their counts; and determine the plurality of ordered candidate prediction modes according to the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes.
In some embodiments of the present application, based on the foregoing solution, the determining unit is configured to: replace n1 intra prediction modes in the first candidate mode set with the selected n1 intra prediction modes to obtain an updated first candidate mode set, where n1 is less than or equal to the number of candidate modes in the first candidate mode set; and generate the plurality of ordered candidate prediction modes according to the second candidate mode set and the updated first candidate mode set.
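This replacement variant can be sketched as follows. The choice of replacing the *last* n1 entries of the neighbor-derived set is an arbitrary illustrative assumption; the text does not specify which positions are replaced.

```python
def replace_with_top_modes(first_set, mode_counts, n1):
    """Overwrite n1 entries of the neighbor-derived candidate set with the
    n1 most frequent modes from the statistics (n1 <= len(first_set))."""
    top = sorted(mode_counts, key=mode_counts.get, reverse=True)[:n1]
    # Assumption: the tail of the set is replaced.
    return first_set[:len(first_set) - n1] + top
```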
In some embodiments of the present application, based on the foregoing solution, the determining unit is configured to: replace n1 intra prediction modes in a third candidate mode set, formed from the first and second candidate mode sets, with the selected n1 intra prediction modes to obtain an updated third candidate mode set, where n1 is less than or equal to the number of candidate modes contained in the third candidate mode set; and generate the plurality of ordered candidate prediction modes according to the updated third candidate mode set.
In some embodiments of the present application, based on the foregoing solution, the determining unit is configured to: sort the candidate prediction modes in the first candidate mode set and those in the second candidate mode set to obtain sorted candidate prediction modes; and add the n1 intra prediction modes at a designated position among the sorted candidate prediction modes to generate the plurality of ordered candidate prediction modes, where the designated position is the front, the back, or an interior position of the sorted candidate prediction modes.
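A sketch of the designated-position insertion; the interior position chosen here (the midpoint) is purely illustrative, since the text only says the modes may be inserted among the sorted candidates.

```python
def insert_stat_modes(sorted_candidates, stat_modes, position):
    """Place the statistics-derived modes at the front, the back, or an
    interior index of the already-ordered candidate list."""
    if position == "front":
        return stat_modes + sorted_candidates
    if position == "back":
        return sorted_candidates + stat_modes
    mid = len(sorted_candidates) // 2   # illustrative interior position
    return sorted_candidates[:mid] + stat_modes + sorted_candidates[mid:]
```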
In some embodiments of the present application, based on the foregoing solution, the first processing unit is further configured to: before determining the plurality of ordered candidate prediction modes according to the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes, replace a first intra prediction mode in the first candidate mode set, or in the first candidate mode set and the n1 intra prediction modes, with a vertical or vertical-like prediction mode, and replace a second intra prediction mode in the first candidate mode set with a horizontal or horizontal-like prediction mode; where the first intra prediction mode is mode 0, mode 1, or mode 32 of the intra prediction modes, the second intra prediction mode is mode 2, mode 3, or mode 33, the horizontal-like prediction modes include modes 56 to 59, mode 23, and mode 25, and the vertical-like prediction modes include modes 42 to 45, mode 11, and mode 13.
In some embodiments of the present application, based on the foregoing solution, the determining unit is configured to: select the first n1 intra prediction modes from the intra prediction mode statistical result in descending order of their counts; or select the first n1 SAWP-compliant intra prediction modes from the intra prediction mode statistical result in descending order of their counts.
In some embodiments of the present application, based on the foregoing solution, the first processing unit is configured to: generate an initial most probable intra prediction mode (MPM) list according to the intra prediction modes adopted by the neighboring blocks and the SAWP index value; select the first n2 SAWP-compliant intra prediction modes from the intra prediction mode statistical result in descending order of their counts; and replace part of the intra prediction modes in the initial MPM list with those n2 modes to obtain the final MPM list.
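The refinement described in this embodiment might be sketched as follows. Both the SAWP-compliant mode set and the decision to replace the tail of the initial list are illustrative assumptions, not details taken from the text.

```python
SAWP_ALLOWED = {0, 1, 12, 24, 33}   # hypothetical SAWP-compliant mode set

def refine_mpm(initial_mpm, mode_counts, n2):
    """Replace the tail of an initial MPM list with the n2 most frequent
    SAWP-compliant modes from the statistical result."""
    by_count = sorted(mode_counts, key=mode_counts.get, reverse=True)
    top = [m for m in by_count if m in SAWP_ALLOWED][:n2]
    return initial_mpm[:len(initial_mpm) - len(top)] + top
```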
In some embodiments of the present application, based on the foregoing solution, the first processing unit is configured to: select the first n3 SAWP-compliant intra prediction modes from the intra prediction mode statistical result, in descending order of their counts, to generate the most probable intra prediction mode (MPM) list, where the number of intra prediction modes counted in the statistical result is greater than or equal to the number of intra prediction modes contained in the MPM list.
In some embodiments of the present application, based on the foregoing solution, the first processing unit is further configured to determine, in at least one of the following ways, whether a corresponding coding block's most probable intra prediction mode (MPM) list needs to be determined during decoding according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result:
by the value of an index flag contained in the sequence header of the coding blocks corresponding to a video image frame sequence;
by the value of an index flag contained in the picture header of the coding blocks corresponding to a video image frame.
According to an aspect of the embodiments of the present application, a video encoding apparatus is provided, including: a determining unit configured to determine a SAWP index value corresponding to a coding block of a video image frame; a second acquiring unit configured to acquire the intra prediction modes adopted by neighboring blocks of the coding block, and to acquire an intra prediction mode statistical result obtained by counting the intra prediction modes adopted by coding blocks in the video image frame; a third processing unit configured to determine a most probable intra prediction mode (MPM) list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result; and a fourth processing unit configured to encode other coding blocks of the video image frame based on the MPM list.
According to an aspect of embodiments of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a video decoding method or a video encoding method as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a video decoding method or a video encoding method as described in the above embodiments.
According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the video decoding method or the video encoding method provided in the various alternative embodiments described above.
In the technical solutions provided in some embodiments of the present application, an intra prediction mode statistical result is obtained by counting the intra prediction modes used by coding blocks in a video image frame, and a most probable intra prediction mode (MPM) list is then determined according to the intra prediction modes used by neighboring blocks, the SAWP index value, and that statistical result. This improves the accuracy of the constructed MPM list, reduces the average coding length of the SAWP intra prediction modes, and improves encoding and decoding efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
fig. 2 is a schematic diagram showing the placement of a video encoding apparatus and a video decoding apparatus in a streaming system;
FIG. 3 shows a basic flow diagram of a video encoder;
fig. 4 illustrates a diagram of prediction directions in an intra prediction mode;
FIG. 5 illustrates an image of a complex texture;
FIG. 6 shows a schematic diagram of 8 weight generation angles;
FIG. 7 shows a schematic of 7 reference weight predicted positions;
FIG. 8 is a diagram illustrating the positional relationship between a current block and a neighboring block;
FIG. 9 shows a flow diagram of a video decoding method according to an embodiment of the present application;
FIG. 10 illustrates a flow diagram for determining a plurality of candidate prediction modes having an order relationship according to an embodiment of the present application;
FIG. 11 shows a flow diagram of a video encoding method according to an embodiment of the present application;
FIG. 12 shows a block diagram of a video decoding apparatus according to an embodiment of the present application;
FIG. 13 shows a block diagram of a video encoding apparatus according to an embodiment of the present application;
FIG. 14 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It should be noted that: reference herein to "a plurality" means two or more. "and/or" describe the association relationship of the associated objects, meaning that there may be three relationships, e.g., A and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture 100 includes a plurality of end devices that may communicate with each other over, for example, a network 150. For example, the system architecture 100 may include a first end device 110 and a second end device 120 interconnected by a network 150. In the embodiment of fig. 1, the first terminal device 110 and the second terminal device 120 perform unidirectional data transmission.
For example, the first terminal device 110 may encode video data (e.g., a stream of video pictures captured by the terminal device 110) for transmission over the network 150 to the second terminal device 120 as one or more encoded video streams. The second terminal device 120 may receive the encoded video data from the network 150, decode it to recover the video data, and display video pictures according to the recovered data.
In one embodiment of the present application, the system architecture 100 may include a third end device 130 and a fourth end device 140 that perform bi-directional transmission of encoded video data, such as may occur during a video conference. For bi-directional data transmission, each of third end device 130 and fourth end device 140 may encode video data (e.g., a stream of video pictures captured by the end device) for transmission over network 150 to the other of third end device 130 and fourth end device 140. Each of the third terminal device 130 and the fourth terminal device 140 may also receive encoded video data transmitted by the other of the third terminal device 130 and the fourth terminal device 140, and may decode the encoded video data to recover the video data, and may display a video picture on an accessible display device according to the recovered video data.
In the embodiment of fig. 1, each of the first terminal device 110, the second terminal device 120, the third terminal device 130, and the fourth terminal device 140 may be a server, a personal computer, or a smartphone, but the principles disclosed herein are not limited thereto. The embodiments disclosed herein are also applicable to laptop computers, tablet computers, media players, and/or dedicated video conferencing equipment. The network 150 represents any number of networks that carry encoded video data between the terminal devices, including, for example, wired and/or wireless communication networks. The communication network 150 may exchange data over circuit-switched and/or packet-switched channels, and may include a telecommunications network, a local area network, a wide area network, and/or the internet. For the purposes of this application, the architecture and topology of the network 150 are immaterial to the operation of the present disclosure unless explained below.
In one embodiment of the present application, fig. 2 illustrates the placement of a video encoding device and a video decoding device in a streaming environment. The subject matter disclosed herein is equally applicable to other video-enabled applications including, for example, video conferencing, digital TV (television), storing compressed video on digital media including CDs, DVDs, memory sticks, and the like.
The streaming system may include an acquisition subsystem 213, which may include a video source 201, such as a digital camera, that creates an uncompressed video picture stream 202. In an embodiment, the video picture stream 202 includes samples captured by the digital camera. The video picture stream 202 is depicted as a thick line to emphasize its high data volume compared with the encoded video data 204 (or encoded video codestream 204). The video picture stream 202 can be processed by an electronic device 220 that includes a video encoding device 203 coupled to the video source 201. The video encoding device 203 may comprise hardware, software, or a combination thereof to implement aspects of the disclosed subject matter as described in greater detail below. The encoded video data 204 (or encoded video codestream 204) is depicted as a thin line to emphasize its lower data volume, and may be stored on the streaming server 205 for future use. One or more streaming client subsystems, such as the client subsystems 206 and 208 in fig. 2, may access the streaming server 205 to retrieve copies 207 and 209 of the encoded video data 204. The client subsystem 206 may include, for example, a video decoding device 210 in an electronic device 230. The video decoding device 210 decodes the incoming copy 207 of the encoded video data and generates an output video picture stream 211 that can be presented on a display 212 (e.g., a display screen) or another presentation device. In some streaming systems, the encoded video data 204, 207, and 209 (e.g., video streams) may be encoded according to certain video encoding/compression standards, such as ITU-T H.265.
In an embodiment, the video coding standard under development is informally referred to as Versatile Video Coding (VVC), and the present application may be used in the context of the VVC standard.
It should be noted that electronic devices 220 and 230 may include other components not shown in the figures. For example, electronic device 220 may comprise a video decoding device, and electronic device 230 may also comprise a video encoding device.
In an embodiment of the present application, taking the international video coding standards HEVC (High Efficiency Video Coding) and VVC (Versatile Video Coding) and the Chinese national video coding standard AVS as examples, after a video frame image is input, it is divided into a plurality of non-overlapping processing units according to a block size, and a similar compression operation is performed on each processing unit. This processing unit is called a CTU (Coding Tree Unit), or an LCU (Largest Coding Unit). The CTU can be further partitioned downward in a finer manner to obtain one or more basic coding units (CUs), which are the most basic elements in the coding process. Some concepts used when coding a CU are introduced below:
Predictive Coding: predictive coding includes intra-frame prediction and inter-frame prediction. The original video signal is predicted from a selected reconstructed video signal to obtain a residual video signal. The encoding side needs to decide which predictive coding mode to select for the current CU and inform the decoding side. Intra-frame prediction means that the predicted signal comes from an already coded and reconstructed region within the same image; inter-frame prediction means that the predicted signal comes from an already coded picture (called a reference picture) other than the current picture.
Transform and Quantization: the residual video signal is subjected to a transform operation such as a DFT (Discrete Fourier Transform) or DCT (Discrete Cosine Transform) and converted into the transform domain, producing what are referred to as transform coefficients. The transform coefficients then undergo a lossy quantization operation that discards a certain amount of information, so that the quantized signal is favorable for compressed representation. In some video coding standards, more than one transform mode may be selectable, so the encoding side also needs to select one of the transform modes for the current CU and inform the decoding side. The fineness of quantization is usually determined by a Quantization Parameter (QP). With a larger QP, coefficients spanning a larger value range are quantized to the same output, which usually brings larger distortion and a lower code rate; conversely, with a smaller QP, coefficients spanning a smaller value range are quantized to the same output, which usually causes less distortion and corresponds to a higher code rate.
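The QP-versus-step-size trade-off described above can be sketched with a toy scalar quantizer. This is an illustrative model only, not the normative HEVC/AVS3 quantizer; the QP-to-step mapping (step doubling every 6 QP) is an assumption chosen for illustration.

```python
# Illustrative scalar quantization sketch (hypothetical QP-to-step mapping,
# not a standard-conformant quantizer): a larger QP means a larger step, so
# a wider range of coefficient values collapses to the same quantized output.
def quantize(coeffs, qp):
    step = 2 ** (qp / 6)              # assumed mapping: step doubles every 6 QP
    return [round(c / step) for c in coeffs]

def dequantize(levels, qp):
    step = 2 ** (qp / 6)
    return [level * step for level in levels]

coeffs = [100.0, -37.0, 12.0, -3.0, 1.0]
small_qp = quantize(coeffs, 6)        # step 2  -> fine quantization
large_qp = quantize(coeffs, 30)       # step 32 -> coarse, more coefficients become 0
```

With QP 30 most of the small coefficients quantize to zero, which compresses well but loses information; with QP 6 the signal is preserved much more faithfully at a higher rate.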
Entropy Coding (or statistical coding): the quantized transform-domain signal is statistically compressed according to the frequency of occurrence of each value, finally outputting a binarized (0 or 1) compressed code stream. Meanwhile, other information generated by encoding, such as the selected coding mode and motion vector data, also needs to be entropy encoded to reduce the code rate. Statistical coding is a lossless coding method that can effectively reduce the code rate required to express the same signal; common statistical coding methods include Variable Length Coding (VLC) and Context-based Adaptive Binary Arithmetic Coding (CABAC).
Loop Filtering: the transformed and quantized signal is subjected to inverse quantization, inverse transform, and prediction compensation to obtain a reconstructed image. Compared with the original image, some information in the reconstructed image differs from the original due to quantization, i.e., the reconstructed image is distorted (Distortion). Therefore, filtering operations such as a Deblocking Filter (DB), Sample Adaptive Offset (SAO), or Adaptive Loop Filter (ALF) may be applied to the reconstructed image to effectively reduce the degree of distortion produced by quantization. Since these filtered reconstructed pictures will be used as references for subsequently coded pictures to predict future picture signals, the above filtering operation is also referred to as loop filtering, i.e., a filtering operation within the coding loop.
In one embodiment of the present application, fig. 3 shows a basic flow chart of a video encoder, in which intra prediction is taken as an example for illustration. The original image signal s_k[x, y] and the predicted image signal ŝ_k[x, y] are subjected to a difference operation to obtain a residual signal u_k[x, y]. The residual signal u_k[x, y] is transformed and quantized to obtain quantized coefficients; on the one hand, the quantized coefficients are entropy coded to produce the coded bit stream, and on the other hand, they are inverse quantized and inverse transformed to obtain a reconstructed residual signal u'_k[x, y]. The predicted image signal ŝ_k[x, y] and the reconstructed residual signal u'_k[x, y] are superimposed to generate an image signal s*_k[x, y]. The image signal s*_k[x, y] is, on the one hand, input to the intra mode decision module and the intra prediction module for intra prediction processing and, on the other hand, passed through loop filtering to output the reconstructed image signal s'_k[x, y]. The reconstructed image signal s'_k[x, y] can be used as the reference image of the next frame for motion estimation and motion-compensated prediction. The predicted image signal ŝ_k[x, y] of the next frame is then obtained based on the motion-compensated prediction result s'_r[x + m_x, y + m_y] and the intra prediction result, and the above process is repeated until the coding is completed.
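The per-sample signal flow just described can be sketched as follows. This is a minimal toy model: the transform is omitted, quantization is modelled as integer rounding with a fixed step, and the step size `STEP` is an assumption; the point is only that the decoder-side reconstruction adds the *lossy* reconstructed residual back to the same prediction.

```python
# Minimal per-sample sketch of the coding loop described above:
# residual u = s - s_pred; the reconstruction s_rec = s_pred + u_rec, where
# u_rec is the residual after (lossy) quantization and dequantization.
STEP = 4  # assumed quantization step for illustration

def encode_sample(s, s_pred):
    u = s - s_pred                 # residual u_k[x, y]
    level = round(u / STEP)        # "transform + quantization" (toy model)
    u_rec = level * STEP           # "inverse quantization + inverse transform"
    s_rec = s_pred + u_rec         # reconstructed sample s*_k[x, y]
    return level, s_rec

level, s_rec = encode_sample(123, 120)   # residual 3 survives as 4 after quantization
```

Note that the encoder keeps `s_rec`, not the original sample, as the basis for future predictions, so encoder and decoder stay in sync despite the quantization loss.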
Based on the above encoding process, for each CU the decoding end first performs entropy decoding on the obtained compressed code stream (i.e., bit stream) to obtain the various mode information and quantized coefficients. The quantized coefficients are then inverse quantized and inverse transformed to obtain the residual signal. On the other hand, according to the known coding mode information, the prediction signal corresponding to the CU can be obtained; the reconstructed signal is then obtained by adding the residual signal and the prediction signal, and the reconstructed signal is subjected to loop filtering and other operations to generate the final output signal.
When decoding an image, a frame is usually divided into equally sized image blocks (LCUs), which are decoded sequentially in raster-scan order, from left to right and from top to bottom (each row of LCUs from left to right). Each LCU is further divided into several sub-blocks by a Quadtree (QT), a Binary Tree (BT), and an Extended Quadtree (EQT), and the sub-blocks are likewise processed from left to right and from top to bottom.
The intra coding modes of AVS3 include 3 techniques: the general intra prediction technique, the Intra Block Copy (IBC) technique, and the Intra String Copy (ISC) technique. As shown in fig. 4, the general intra prediction technique provides 66 intra prediction modes, among which modes 3-32 and modes 34-65 are angular prediction modes, mode 33 is the PCM (Pulse Code Modulation) mode, mode 0 is the DC prediction mode, mode 1 is the Plane prediction mode, and mode 2 is the Bilinear prediction mode.
The dashed arrows in fig. 4 indicate the angular modes newly introduced in the second stage of AVS3 by the angle extension (EIPM) tool; mode 12 and mode 24 indicate the vertical prediction mode and the horizontal prediction mode, respectively. Denoting the total number of intra prediction modes by IPD_CNT: if EIPM is turned off, IPD_CNT is 34; if EIPM is turned on, IPD_CNT is 66.
When the FIMC technique is used, a buffer of length IPD_CNT needs to be established, and the intra prediction modes of the coding blocks in the already decoded area are counted, so that the two intra prediction modes with the highest frequency counts are designated as the MPMs (Most Probable Modes) of the current coding block to be decoded. As shown in table 1 below, the current MPM list of AVS3 contains 2 MPMs. If the prefix parsed from the coding block is 1, the intra prediction mode of the current coding block is in the MPM list, and a 1-bit suffix is then parsed from the coding block to distinguish between the two MPMs. If the parsed prefix is 0, the intra prediction mode of the current coding block is not in the MPM list, and a 5-bit or 6-bit fixed-length code then needs to be parsed to identify the specific intra prediction mode of the current coding block.
[Table 1 appears in the original as an image; it lists the prefix and suffix codewords used to signal the intra prediction mode relative to the 2-entry MPM list, as described above.]

TABLE 1
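The FIMC frequency counting described above can be sketched as follows. This is an illustrative model, not the normative AVS3 buffer: the IPD_CNT-long buffer is modelled as a `Counter`, and the tie-breaking rule between equally frequent modes is an assumption.

```python
from collections import Counter

# Sketch of the FIMC idea: count the intra prediction modes of blocks in the
# already-decoded area and take the two most frequent as the 2 MPMs for the
# current coding block. Ties resolve by first occurrence (an assumption).
def fimc_mpms(decoded_modes):
    counts = Counter(decoded_modes)            # frequency buffer
    return [mode for mode, _ in counts.most_common(2)]

mpms = fimc_mpms([12, 24, 12, 0, 24, 12])      # mode 12 seen 3x, mode 24 seen 2x
```

Here the decoder would test the parsed 1-bit suffix against `mpms[0]`/`mpms[1]` whenever the prefix bit indicates an MPM hit.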
In addition, since a conventional single prediction mode cannot adapt to more complicated image textures, such as the image shown in fig. 5 that contains two textures, SAWP (spatial angular weighted prediction) proposes to predict the same coding block with 2 different intra prediction modes and to generate the final predicted image by weighting the 2 intra predicted images.
Specifically, assume that the predicted images obtained with the 2 intra prediction modes are predMatrix0 and predMatrix1, respectively, that the final predicted image generated by SAWP is predMatrixSawp, and that the mask is weightMatrixAwap, where [i][j] denotes a coordinate point within the image block. Then the following equation holds:

predMatrixSawp[i][j] = (predMatrix0[i][j] × weightMatrixAwap[i][j] + predMatrix1[i][j] × (8 − weightMatrixAwap[i][j]) + 4) >> 3
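The blending equation above can be applied per sample as follows; the weights lie in [0, 8], and the `+ 4) >> 3` term rounds the 8-weighted sum to the nearest integer. Function and variable names here are illustrative, not from the AVS3 text.

```python
# Per-sample SAWP blend: weight 8 selects pred0 entirely, weight 0 selects
# pred1 entirely, and intermediate weights mix the two; (+4) >> 3 rounds
# the weighted sum (which is scaled by 8) back to sample range.
def sawp_blend(pred0, pred1, weight):
    rows, cols = len(pred0), len(pred0[0])
    return [[(pred0[i][j] * weight[i][j]
              + pred1[i][j] * (8 - weight[i][j]) + 4) >> 3
             for j in range(cols)] for i in range(rows)]

p0 = [[80, 80], [80, 80]]
p1 = [[16, 16], [16, 16]]
w  = [[8, 4], [4, 0]]          # 8 -> all p0, 4 -> even mix, 0 -> all p1
blended = sawp_blend(p0, p1, w)
```

In practice the mask would be one of the 56 weight matrices described next, whose weights transition across the block along one of the 8 angles.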
Fig. 6 shows 8 weight generation angles, and fig. 7 shows 7 reference weight prediction positions (i.e., 7 weight configurations). Each weight configuration can generate a mask (weightMatrixAwap) along each of the 8 angles shown in fig. 6, so 8 × 7 = 56 masks can be generated.
Meanwhile, the two intra prediction modes of SAWP can only use the 28 angular prediction modes, modes 4 to 31, shown in fig. 4, and the MPM list has a length of 4. The MPM list of SAWP is constructed as follows:
Step 1, set up an array cand_mode[10], initialize all values in the array to invalid values, and perform the following operations on the array:
Referring to fig. 8: if the neighboring block F of the current block "exists" and is a block using the normal intra prediction mode, cand_mode[0] is equal to the intra prediction mode of neighboring block F (specifically, its luma intra prediction mode). Likewise, cand_mode[1], cand_mode[2], cand_mode[3], cand_mode[4], and cand_mode[5] are set to the (luma) intra prediction modes of neighboring blocks G, C, A, B, and D, respectively, whenever the corresponding neighboring block "exists" and is a block using the normal intra prediction mode.
Meanwhile, cand_mode[6] is equal to the value of candidate mode 0 in table 2 below corresponding to the value of SawpIndex % 8 (i.e., the remainder of the SAWP index value divided by 8); cand_mode[7] is equal to the value of candidate mode 1 for SawpIndex % 8 in table 2 below; cand_mode[8] is equal to the value of candidate mode 2 for SawpIndex % 8 in table 2 below; and cand_mode[9] is equal to the value of candidate mode 3 for SawpIndex % 8 in table 2 below.
SawpIndex % 8       0    1    2    3    4    5    6    7
Candidate mode 0   30   27   24   21   18   15   12    9
Candidate mode 1    6   24   12   24   17   14   24   12
Candidate mode 2   24    6   23   18   19   16   11    8
Candidate mode 3   12    9   25   12   24   12   13   10

TABLE 2
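Table 2 amounts to a simple two-dimensional lookup: the column is selected by SawpIndex % 8, and the four rows supply cand_mode[6] through cand_mode[9]. A sketch (names are illustrative):

```python
# Table 2 as a lookup: cand_mode[6..9] come from the four candidate-mode
# rows, indexed by the column SawpIndex % 8.
CANDIDATE_MODES = [
    [30, 27, 24, 21, 18, 15, 12,  9],   # candidate mode 0
    [ 6, 24, 12, 24, 17, 14, 24, 12],   # candidate mode 1
    [24,  6, 23, 18, 19, 16, 11,  8],   # candidate mode 2
    [12,  9, 25, 12, 24, 12, 13, 10],   # candidate mode 3
]

def sawp_index_candidates(sawp_index):
    col = sawp_index % 8
    return [row[col] for row in CANDIDATE_MODES]

cands = sawp_index_candidates(10)   # SawpIndex % 8 == 2 -> column 2
```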
Step 2, for i from 0 to 5, perform the following operations:

A) If cand_mode[i] ≤ 3, or cand_mode[i] is equal to 32 or 33, set cand_mode[i] to an invalid value.

B) If cand_mode[i] > 33, perform the following:

if 33 < cand_mode[i] < 44, let cand_mode[i] = cand_mode[i] − 30;

if 44 ≤ cand_mode[i] < 58, let cand_mode[i] = cand_mode[i] − 33;

if cand_mode[i] ≥ 58, let cand_mode[i] = cand_mode[i] − 34.

C) If the value of cand_mode[i] is some value from 4 to 31, the value of cand_mode[i] is not modified.
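Step 2 above can be expressed as a single function: modes outside the SAWP-usable range 4-31 are either invalidated (modes ≤ 3, 32, 33) or, for the extended-angle modes above 33, remapped back into 4-31. The invalid-value sentinel is an assumption.

```python
INVALID = -1   # assumed sentinel for "invalid value"

# Step 2 of the SAWP MPM construction: normalize one cand_mode entry.
def normalize_for_sawp(mode):
    if mode <= 3 or mode in (32, 33):
        return INVALID                 # rule A: not usable by SAWP
    if mode > 33:                      # rule B: extended angles remapped
        if mode < 44:
            return mode - 30
        if mode < 58:
            return mode - 33
        return mode - 34
    return mode                        # rule C: already in 4..31, unchanged

remapped = [normalize_for_sawp(m) for m in (2, 33, 34, 57, 58, 12)]
```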
Step 3, initialize mpm_num to 0. For i from 0 to 9, take each cand_mode[i] whose value is not invalid in turn, and compare cand_mode[i] with each predIntraPredMode[j] already in the MPM list, where j takes values from 0 to mpm_num − 1. If cand_mode[i] is not equal to any predIntraPredMode[j], perform the following operations: let predIntraPredMode[mpm_num] equal cand_mode[i], and let mpm_num equal mpm_num + 1; if mpm_num equals 4, end step 3.

Step 4, sort the 4 values predIntraPredMode[0] to predIntraPredMode[3] from small to large to obtain the MPM list.
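Steps 3 and 4 above can be sketched as follows: walk cand_mode[0..9] in order, skipping invalid values and duplicates, stop once 4 entries are collected, then sort ascending. This is an illustrative sketch (the invalid sentinel and function name are assumptions, not AVS3 identifiers).

```python
INVALID = -1   # assumed sentinel for "invalid value"

# Steps 3-4 of the SAWP MPM construction: deduplicating fill, then sort.
def build_sawp_mpm_list(cand_mode, size=4):
    mpm = []
    for mode in cand_mode:
        if mode == INVALID or mode in mpm:
            continue                   # skip invalid entries and duplicates
        mpm.append(mode)
        if len(mpm) == size:
            break                      # MPM list is full
    return sorted(mpm)                 # step 4: ascending order

cand = [24, INVALID, 24, 12, 30, 6, 9, 17, 19, 25]
mpm_list = build_sawp_mpm_list(cand)   # first 4 distinct valid modes, sorted
```

Note that if fewer than 4 distinct valid candidates exist, this sketch returns a shorter list; the source text does not detail the padding rule, so none is assumed here.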
As can be seen from the above construction process, the MPM list of SAWP uses only the intra prediction modes of spatially neighboring blocks and preset intra prediction modes. In screen content coding, however, the correlation between the intra prediction mode of the current coding block and those of its spatially neighboring blocks is weak, which makes the average coding length of the SAWP intra prediction modes longer and affects coding and decoding efficiency. Based on this, the technical scheme of the embodiments of the present application introduces historical intra prediction modes to improve the construction precision of the MPM list. In addition, in the SAWP MPM list construction process of the embodiments of the present application, when a spatially neighboring block is not an intra-coded block, or the intra prediction mode of the spatially neighboring block is not a mode available to SAWP, it may be replaced by a corresponding mode, so as to reduce the average coding length of the SAWP intra prediction modes to a certain extent and thereby improve coding and decoding efficiency.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 9 shows a flowchart of a video decoding method according to an embodiment of the present application, which may be performed by a device having a computing processing function, such as a terminal device or a server. Referring to fig. 9, the video decoding method at least includes steps S910 to S940, which are described in detail as follows:
in step S910, a coding block of a video image frame is decoded to obtain a spatial domain angle weighted prediction SAWP index value.
In one embodiment of the present application, a video image frame sequence includes a series of images, each of which may be further divided into slices (Slice), and a slice may be further divided into a series of LCUs (or CTUs), where an LCU includes several CUs. A video image frame is encoded in units of blocks. In some video coding standards, for example the H.264 standard, there are macroblocks (MBs), and an MB can be further divided into a plurality of prediction blocks that can be used for predictive coding. In the HEVC standard, basic concepts such as the coding unit (CU), Prediction Unit (PU), and Transform Unit (TU) are used, and the various block units are divided by function and described using a brand-new tree-based structure. For example, a CU may be partitioned into smaller CUs in a quadtree manner, and the smaller CUs may be further partitioned, forming a quadtree structure. The coding block in the embodiments of the present application may be a CU, or a block smaller than a CU, such as a smaller block obtained by dividing a CU.
In one embodiment of the present application, the SAWP index value is SawpIndex; the values of cand_mode[6] to cand_mode[9] can be obtained by taking SawpIndex modulo 8 and looking up table 2 above, so as to subsequently determine the MPM list of SAWP.
In step S920, intra prediction modes used by neighboring blocks of the coding block are obtained, and an intra prediction mode statistical result obtained by performing statistics on the intra prediction modes used by the coding blocks in the video image frame is obtained.
In one embodiment of the present application, the neighboring blocks of the encoding block may be, for example, the neighboring block a, the neighboring block B, the neighboring block C, the neighboring block D, the neighboring block F, and the neighboring block G shown in fig. 8.
In an embodiment of the present application, the intra prediction mode statistical result obtained by counting the intra prediction modes used by the coding blocks in the video image frame may be obtained as follows: a specified intra prediction mode used by the coding blocks in the video image frame is counted to obtain the intra prediction mode statistical result. The specified intra prediction mode has the following embodiments:
in one embodiment of the present application, the specified intra prediction mode may be all intra prediction modes. For example, in the AVS3 standard, the total number of all intra prediction modes is 66.
In one embodiment of the present application, specifying the intra prediction mode may include: a horizontal prediction mode and a vertical prediction mode. That is, the technical solution of this embodiment may count only the statistics of the horizontal prediction mode and the vertical prediction mode.
In one embodiment of the present application, specifying the intra prediction mode may include: horizontal prediction mode, vertical prediction mode, and Bilinear prediction mode. That is, the technical solution of this embodiment may count only the statistics of the horizontal prediction mode, the vertical prediction mode, and the Bilinear prediction mode.
In one embodiment of the present application, specifying the intra prediction mode may include: horizontal prediction mode, vertical prediction mode, Bilinear prediction mode, and DC prediction mode. That is, the technical solution of this embodiment may count only the statistics of the horizontal prediction mode, the vertical prediction mode, the Bilinear prediction mode, and the DC prediction mode.
In one embodiment of the present application, the specified intra prediction mode may include intra prediction modes selected from mode 0 to mode 33 among the intra prediction modes. For example, the specified intra prediction modes may be mode 0 to mode 33, that is, mode 0 to mode 33 all need to be counted during statistics.
In one embodiment of the present application, the designated intra prediction mode may include a plurality of intra prediction modes selected from a horizontal prediction mode, a vertical prediction mode, a Bilinear prediction mode, a DC prediction mode, a class horizontal prediction mode, and a class vertical prediction mode; wherein the class level prediction modes include mode 56 to mode 59, mode 23, and mode 25 among the intra prediction modes; the vertical prediction mode-like includes modes 42 to 45, 11, and 13 among the intra prediction modes.
In one embodiment of the present application, the designated intra prediction mode may be an intra prediction mode selected from intra prediction modes employed by the SAWP, i.e., an intra prediction mode selected from modes 4 to 31.
In one embodiment of the present application, the specified intra prediction mode may be an intra prediction mode selected from the intra prediction modes counted by the FIMC method. For example, the specified intra prediction modes may be exactly the intra prediction modes counted by the FIMC method.
In an embodiment of the present application, the intra prediction mode statistical result obtained by counting the intra prediction modes used by the coding blocks in the video image frame may also be obtained as follows: the intra prediction mode statistical result counted by the FIMC method is obtained directly. That is, in this embodiment, SAWP shares the same intra prediction mode statistical result with the FIMC technique.
In an embodiment of the present application, if SAWP and the FIMC technique share the same intra prediction mode statistical result, then after the two intra prediction modes of the SAWP method are obtained by decoding the coding block of the video image frame, the intra prediction mode statistical result counted by the FIMC method may be updated with 1 or 2 of the two intra prediction modes. Of course, in other embodiments of the present application, if SAWP and the FIMC technique share the same intra prediction mode statistical result, the statistical result counted by the FIMC method may also be left not updated by the two intra prediction modes of the SAWP method.
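The shared-statistics embodiment above can be sketched as a small update on the FIMC frequency buffer (modelled here as a `Counter`; the function name and the `count` parameter choosing how many of the two SAWP modes to record are illustrative assumptions).

```python
from collections import Counter

# Sketch: after decoding a SAWP block, optionally update the shared FIMC
# frequency statistics with 1 or 2 of its two intra prediction modes.
def update_fimc_stats(stats, sawp_modes, count=2):
    stats.update(sawp_modes[:count])   # record the first `count` modes
    return stats

stats = Counter({12: 3, 24: 1})        # existing FIMC counts
update_fimc_stats(stats, (24, 7))      # record both SAWP modes of this block
```

Passing `count=0` would model the alternative embodiment in which SAWP blocks leave the FIMC statistics untouched.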
With continued reference to fig. 9, in step S930, a maximum possible intra prediction mode list is determined according to the intra prediction modes adopted by the neighboring blocks, the SAWP index values, and the intra prediction mode statistics.
In an embodiment of the present application, determining the maximum possible intra prediction mode list according to the intra prediction modes used by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result in step S930 may include: determining a plurality of candidate prediction modes having an order relationship according to the intra prediction modes used by the neighboring blocks, the SAWP index value, and the intra prediction mode statistical result, and then selecting, according to the order relationship, a set number of different candidate prediction modes from the plurality of candidate prediction modes to generate the maximum possible intra prediction mode list.
Alternatively, if 4 MPMs are included in the MPM list, after determining a plurality of candidate prediction modes having an order relationship, 4 different candidate prediction modes may be sequentially selected from the plurality of candidate prediction modes to generate the MPM list.
In an embodiment of the present application, as shown in fig. 10, the process of determining a plurality of candidate prediction modes having an order relationship according to the intra prediction modes adopted by the neighboring blocks, the SAWP index values, and the intra prediction mode statistics may include the following steps S1010 to S1030, which are described in detail as follows:
in step S1010, a first set of candidate modes is determined according to intra prediction modes adopted by neighboring blocks, and a second set of candidate modes is determined according to the SAWP index values.
In an embodiment of the present application, for the process of determining the first candidate mode set, i.e., cand_mode[0] to cand_mode[5], according to the intra prediction modes used by the neighboring blocks, reference may be made to the process of determining cand_mode[0] to cand_mode[5] in the foregoing embodiment in combination with fig. 8. For the process of determining the second candidate mode set, i.e., cand_mode[6] to cand_mode[9], according to the SAWP index value, reference may be made to the foregoing embodiment in combination with table 2.
In step S1020, n1 intra prediction modes are selected from the intra prediction mode statistics in descending order of the statistics.
It should be noted that there is no strict execution order between step S1020 and step S1010. In other words, step S1010 may be performed first and then step S1020; step S1020 may be performed first and then step S1010; or step S1010 and step S1020 may be performed at the same time.
In an embodiment of the present application, the first n1 intra prediction modes may be selected from the intra prediction mode statistics in order of decreasing statistics. That is, in this embodiment, the first n1 intra prediction modes are selected only according to the order of statistics from large to small, but there may be an intra prediction mode that does not conform to the SAWP among the n1 intra prediction modes. Therefore, in an embodiment of the present application, the first n1 SAWP-compliant intra prediction modes can be selected from the intra prediction mode statistics in descending order of statistics.
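The second variant above, selecting the first n1 SAWP-compliant modes, can be sketched as follows: rank the statistics in descending order of count and skip any mode outside the SAWP-usable range 4-31 rather than letting it consume one of the n1 slots. The function name and tie-breaking behaviour (insertion order, via `Counter.most_common`) are assumptions.

```python
from collections import Counter

# Select the first n1 SAWP-compliant intra prediction modes from the
# frequency statistics, in descending order of count; non-compliant modes
# (outside 4..31) are skipped, not counted toward n1.
def top_sawp_modes(stats, n1):
    ordered = [mode for mode, _ in stats.most_common()]
    return [m for m in ordered if 4 <= m <= 31][:n1]

stats = Counter({0: 9, 12: 7, 33: 6, 24: 5, 8: 2})
picked = top_sawp_modes(stats, 2)   # modes 0 and 33 are skipped as non-SAWP
```

Dropping the range filter gives the first variant, where the top n1 modes are taken purely by count even if some are not usable by SAWP.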
In step S1030, a plurality of candidate prediction modes having an order relationship are determined from the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes.
In one embodiment of the present application, when determining the plurality of candidate prediction modes having an order relationship, n1 intra prediction modes in the first candidate mode set may be replaced by the selected n1 intra prediction modes to obtain an updated first candidate mode set, where n1 is less than or equal to the number of candidate modes contained in the first candidate mode set; the plurality of candidate prediction modes having an order relationship is then generated according to the second candidate mode set and the updated first candidate mode set. For example, n1 may be 2, 3, etc.
Optionally, the process of generating the plurality of candidate prediction modes having an order relationship according to the second candidate mode set and the updated first candidate mode set may adopt the scheme of "step 2" in the foregoing embodiment: the candidate prediction modes whose mode numbers are less than or equal to 3 or equal to 32 or 33 in the updated first candidate mode set (i.e., cand_mode[0] to cand_mode[5]) are first set to invalid values, and the numbers of the other candidate prediction modes are mapped into the range 4 to 31; the plurality of candidate prediction modes having an order relationship is then obtained by sorting in the order cand_mode[0] to cand_mode[5] followed by cand_mode[6] to cand_mode[9], where cand_mode[6] to cand_mode[9] are the numbers of the candidate prediction modes contained in the second candidate mode set.
In an embodiment of the present application, when determining the plurality of candidate prediction modes having an order relationship, n1 intra prediction modes in a third candidate mode set formed by the first candidate mode set and the second candidate mode set may be replaced by the selected n1 intra prediction modes to obtain an updated third candidate mode set, where n1 is less than or equal to the number of candidate modes contained in the third candidate mode set; the plurality of candidate prediction modes having an order relationship is then generated according to the updated third candidate mode set.

Optionally, assume that cand_mode[0] to cand_mode[5] are the numbers of the candidate prediction modes contained in the first candidate mode set, and cand_mode[6] to cand_mode[9] are the numbers of the candidate prediction modes contained in the second candidate mode set; then the numbers of the candidate prediction modes contained in the third candidate mode set are cand_mode[0] to cand_mode[9]. In this case, generating the plurality of candidate prediction modes having an order relationship according to the updated third candidate mode set may employ a scheme similar to "step 2" in the foregoing embodiment: the candidate prediction modes whose mode numbers are less than or equal to 3 or equal to 32 or 33 in the updated third candidate mode set (i.e., cand_mode[0] to cand_mode[9]) are first set to invalid values, and the numbers of the other candidate prediction modes are mapped into the range 4 to 31, so as to obtain the plurality of candidate prediction modes having an order relationship.
In one embodiment of the present application, when determining the plurality of candidate prediction modes having an order relationship, the candidate prediction modes in the first candidate mode set and the candidate prediction modes in the second candidate mode set may first be sorted to obtain sorted candidate prediction modes, and the selected n1 intra prediction modes may then be added at a specified position among the sorted candidate prediction modes to generate the plurality of candidate prediction modes having an order relationship, where the specified position may be the front of, the back of, or an insertion position within the sorted candidate prediction modes.

Optionally, assume that cand_mode[0] to cand_mode[5] are the numbers of the candidate prediction modes contained in the first candidate mode set, and cand_mode[6] to cand_mode[9] are the numbers of the candidate prediction modes contained in the second candidate mode set; then the numbers of the candidate prediction modes obtained by sorting the candidate prediction modes in the first candidate mode set and the candidate prediction modes in the second candidate mode set are cand_mode[0] to cand_mode[9]. In this case, after the selected n1 intra prediction modes are added at the specified position among the sorted candidate prediction modes, a scheme similar to "step 2" in the foregoing embodiment may be adopted to obtain the plurality of candidate prediction modes having an order relationship: the candidate prediction modes whose mode numbers are less than or equal to 3 or equal to 32 or 33 among the sorted candidate prediction modes are set to invalid values, and the numbers of the other candidate prediction modes are adjusted into the range 4 to 31.
In one embodiment of the present application, when determining the plurality of candidate prediction modes having an order relationship, a first intra prediction mode in the first candidate mode set, or in the first candidate mode set and the n1 intra prediction modes, may be replaced with the vertical prediction mode or a vertical-like prediction mode, and a second intra prediction mode in the first candidate mode set may be replaced with the horizontal prediction mode or a horizontal-like prediction mode, where the first intra prediction mode is mode 0, mode 1, or mode 32 among the intra prediction modes, the second intra prediction mode is mode 2, mode 3, or mode 33 among the intra prediction modes, the horizontal-like prediction modes include mode 56 to mode 59, mode 23, and mode 25 among the intra prediction modes, and the vertical-like prediction modes include mode 42 to mode 45, mode 11, and mode 13 among the intra prediction modes. According to the technical scheme of this embodiment, when an intra prediction mode is not an intra prediction mode available to SAWP, it can be replaced with a specified prediction mode, so that the average coding length of the SAWP intra prediction modes is reduced and coding and decoding efficiency is improved.
In an embodiment of the present application, determining the maximum possible intra prediction mode list in step S930 according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics may include: generating an initial maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks and the SAWP index value; selecting, in descending order of statistic value, the first n2 SAWP-compliant intra prediction modes from the intra prediction mode statistics; and replacing part of the intra prediction modes in the initial list with these first n2 SAWP-compliant intra prediction modes to obtain the maximum possible intra prediction mode list.
Optionally, for the process of generating the initial maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks and the SAWP index value, reference may be made to "step 1", "step 2", "step 3", and "step 4" in the foregoing embodiment, which is not described again.
In an embodiment of the present application, the maximum possible intra prediction mode list may also be generated by directly selecting, in descending order of statistic value, the first n3 SAWP-compliant intra prediction modes from the intra prediction mode statistics, wherein the number of intra prediction modes counted in the statistics is greater than or equal to the number of intra prediction modes included in the maximum possible intra prediction mode list. In this embodiment, the SAWP-compliant intra prediction modes selected from the statistics are used directly as the MPMs to generate the MPM list.
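A minimal sketch of this direct construction is given below, assuming the statistics take the form of a mode-to-count mapping (as a frequency-based table might) and that SAWP-usable modes occupy an angular range of 4 to 31; both assumptions are illustrative rather than normative.

```python
def is_sawp_mode(mode):
    """Assumed test for SAWP availability (angular modes 4..31)."""
    return 4 <= mode <= 31

def build_mpm_from_stats(stats, n3):
    """Pick the first n3 SAWP-compliant modes in descending order of count."""
    ranked = sorted(stats, key=lambda m: stats[m], reverse=True)
    return [m for m in ranked if is_sawp_mode(m)][:n3]
```

With statistics {5: 10, 0: 99, 20: 7, 33: 50, 11: 9} and n3 = 2, modes 0 and 33 are skipped as non-SAWP, and the list [5, 11] results.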
With continued reference to fig. 9, in step S940, other coding blocks of the video image frame are decoded based on the maximum possible intra prediction mode list.
Based on the technical solution of the embodiment shown in fig. 9, in an embodiment of the present application, whether a coding block needs, when decoding, to determine the maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics may further be determined according to at least one of the following:
the value of an index identifier contained in the sequence header of the coding block corresponding to the video image frame sequence;
the value of an index identifier contained in the picture header of the coding block corresponding to the video image frame.
Specifically, when determining whether the maximum possible intra prediction mode list needs to be determined according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics, the following manners may be adopted:
1. Indicated by the index identifier in the sequence header of the coding block corresponding to the video image frame sequence. For example, if the index flag in the sequence header is 1 (the numerical value is merely an example), it indicates that all the coding blocks corresponding to the video image frame sequence need to determine the maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics when decoding.
2. Indicated by the index identifier in the picture header of the coding block corresponding to the video image frame. For example, if the index flag in the picture header is 1 (the numerical value is merely an example), it indicates that all the coding blocks corresponding to the video image frame need to determine the maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics when decoding.
3. Jointly indicated by the index identifier in the sequence header of the coding block corresponding to the video image frame sequence and the index identifier in the picture header of the coding block corresponding to the video image frame. For example, if the index flag in the sequence header is 1 (the numerical value is merely an example) and the index flag in the picture header is also 1 (likewise merely an example), it indicates that all the coding blocks corresponding to the video image frame need to determine the maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics when decoding.
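The three indication manners can be sketched as follows. The field names are illustrative placeholders, not actual AVS3 syntax element names, and the flag value 1 is merely the example value used above.

```python
def history_mpm_enabled(seq_flag, pic_flag, scheme):
    """scheme 1: sequence header alone; 2: picture header alone;
    3: sequence header and picture header jointly."""
    if scheme == 1:
        return seq_flag == 1
    if scheme == 2:
        return pic_flag == 1
    return seq_flag == 1 and pic_flag == 1  # joint indication
```

Under the joint scheme, a decoder would apply the history-based MPM construction to a block only when both flags are set.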
Fig. 11 shows a flow diagram of a video encoding method according to an embodiment of the present application, which may be performed by a device having a computational processing function, such as a terminal device or a server. Referring to fig. 11, the video encoding method at least includes steps S1110 to S1140, which are described in detail as follows:
in step S1110, a SAWP index value corresponding to a coding block of a video image frame is determined.
In step S1120, the intra prediction modes used by the adjacent blocks of the coding blocks are obtained, and the intra prediction mode statistics result obtained by performing statistics on the intra prediction modes used by the coding blocks in the video image frame is obtained.
In step S1130, a maximum possible intra prediction mode list is determined according to the intra prediction modes adopted by the neighboring blocks, the SAWP index values, and the intra prediction mode statistics.
In step S1140, other coding blocks of the video image frame are encoded based on the maximum possible intra prediction mode list.
It should be noted that the process of determining the maximum possible intra prediction mode list by the encoding end is similar to that of the decoding end, and is not described again.
According to the technical scheme of the embodiment of the application, the construction precision of the MPM list is improved by introducing the historical intra-frame prediction mode, and the average coding length of the intra-frame prediction mode of the SAWP can be reduced to a certain extent, so that the coding and decoding efficiency is improved.
Embodiments of the apparatus of the present application are described below, which may be used to perform the methods described in the above-described embodiments of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method described above in the present application.
Fig. 12 shows a block diagram of a video decoding apparatus according to an embodiment of the present application, which may be disposed in a device having a calculation processing function, such as a terminal device or a server.
Referring to fig. 12, a video decoding apparatus 1200 according to an embodiment of the present application includes: a decoding unit 1202, a first obtaining unit 1204, a first processing unit 1206 and a second processing unit 1208.
The decoding unit 1202 is configured to decode a coding block of a video image frame to obtain a spatial domain angle weighted prediction (SAWP) index value; the first obtaining unit 1204 is configured to obtain intra-frame prediction modes adopted by adjacent blocks of the coding blocks, and obtain intra-frame prediction mode statistics results obtained by performing statistics on the intra-frame prediction modes adopted by the coding blocks in the video image frame; the first processing unit 1206 is configured to determine a maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index values, and the intra prediction mode statistics; the second processing unit 1208 is configured to perform decoding processing on other encoded blocks of the video image frame based on the maximum possible intra prediction mode list.
In some embodiments of the present application, based on the foregoing solution, the first obtaining unit 1204 is configured to: and counting the appointed intra-frame prediction mode adopted by the coding blocks in the video image frame to obtain the intra-frame prediction mode counting result.
In some embodiments of the present application, based on the foregoing scheme, the specified intra prediction mode comprises: all intra prediction modes; or
a horizontal prediction mode and a vertical prediction mode; or
a horizontal prediction mode, a vertical prediction mode and a Bilinear prediction mode; or
a horizontal prediction mode, a vertical prediction mode, a Bilinear prediction mode and a DC prediction mode; or an intra prediction mode selected from mode 0 to mode 33 among the intra prediction modes; or
an intra prediction mode selected from a horizontal prediction mode, a vertical prediction mode, a Bilinear prediction mode, a DC prediction mode, a class horizontal prediction mode, and a class vertical prediction mode; wherein the class horizontal prediction modes include mode 56 to mode 59, mode 23, and mode 25 among the intra prediction modes; the class vertical prediction modes include mode 42 to mode 45, mode 11, and mode 13 among the intra prediction modes; or
an intra prediction mode selected from the intra prediction modes adopted by the SAWP; or
an intra prediction mode selected from the intra prediction modes counted by the FIMC (Frequency-based Intra Mode Coding) method.
In some embodiments of the present application, based on the foregoing solution, the first obtaining unit 1204 is configured to: and obtaining the intra-frame prediction mode statistical result obtained by the FIMC mode statistics.
In some embodiments of the present application, based on the foregoing scheme, the video decoding apparatus 1200 further comprises: an updating unit configured to acquire the two intra prediction modes of the SAWP mode obtained by decoding a coding block of the video image frame, and update the intra prediction mode statistics counted in the FIMC manner with 1 or 2 of the two intra prediction modes.
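The update step can be sketched as below, assuming the FIMC statistics are kept as a simple mode-to-count mapping; the actual FIMC bookkeeping in AVS3 (table size, counter scaling, and so on) is not modeled here.

```python
def update_fimc(stats, sawp_modes, use_both=True):
    """Record 1 or 2 of the two intra prediction modes a decoded SAWP block
    used into the mode-frequency statistics."""
    modes = sawp_modes if use_both else sawp_modes[:1]
    for m in modes:
        stats[m] = stats.get(m, 0) + 1
    return stats
```

Whether one or both of the SAWP block's intra prediction modes contribute to the statistics is the choice named in the embodiment ("1 or 2 of the two intra prediction modes").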
In some embodiments of the present application, based on the foregoing solution, the first processing unit 1206 comprises: a determining unit, configured to determine a plurality of candidate prediction modes having an order relationship according to the intra-frame prediction modes adopted by the adjacent blocks, the SAWP index values, and the intra-frame prediction mode statistics result; and the selecting unit is configured to select a set number of different candidate prediction modes from the plurality of candidate prediction modes according to the sequence relation so as to generate the maximum possible intra-frame prediction mode list.
In some embodiments of the present application, based on the foregoing scheme, the determining unit is configured to: determining a first candidate mode set according to the intra-frame prediction modes adopted by the adjacent blocks, and determining a second candidate mode set according to the SAWP index value; according to the sequence of the statistic values from large to small, selecting n1 intra-frame prediction modes from the intra-frame prediction mode statistic results; determining the plurality of candidate prediction modes having an order relationship according to the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes.
In some embodiments of the present application, based on the foregoing scheme, the determining unit is configured to: replacing n1 intra prediction modes in the first candidate mode set by the n1 intra prediction modes to obtain an updated first candidate mode set, wherein n1 is less than or equal to the number of candidate modes in the first candidate mode set; and generating the plurality of candidate prediction modes with the sequential relation according to the second candidate mode set and the updated first candidate mode set.
In some embodiments of the present application, based on the foregoing scheme, the determining unit is configured to: replacing n1 intra-frame prediction modes in a third candidate mode set formed by the first candidate mode set and the second candidate mode set by the n1 intra-frame prediction modes to obtain an updated third candidate mode set, wherein n1 is less than or equal to the number of candidate modes contained in the third candidate mode set; and generating the plurality of candidate prediction modes with the sequential relation according to the updated third candidate mode set.
In some embodiments of the present application, based on the foregoing scheme, the determining unit is configured to: sorting the candidate prediction modes in the first candidate mode set and the candidate prediction modes in the second candidate mode set to obtain sorted candidate prediction modes; adding the n1 intra prediction modes at designated positions in the sorted candidate prediction modes to generate the plurality of candidate prediction modes having the sequential relationship; wherein the designated position is the very front of, the very back of, or an interior position within the sorted candidate prediction modes.
In some embodiments of the present application, based on the foregoing, the first processing unit 1206 is further configured to: replacing a first intra-prediction mode in the first set of candidate modes, or in the first set of candidate modes and the n1 intra-prediction modes, with a vertical prediction mode or a vertical-like prediction mode, and a second intra-prediction mode in the first set of candidate modes with a horizontal prediction mode or a horizontal-like prediction mode, before determining the plurality of candidate prediction modes having an order relationship according to the first set of candidate modes, the second set of candidate modes, and the n1 intra-prediction modes; wherein the first intra prediction mode is mode 0, mode 1 or mode 32 of intra prediction modes, the second intra prediction mode is mode 2, mode 3 or mode 33 of intra prediction modes, the class of horizontal prediction modes includes mode 56 to mode 59, mode 23 and mode 25 of intra prediction modes, and the class of vertical prediction modes includes mode 42 to mode 45, mode 11 and mode 13 of intra prediction modes.
In some embodiments of the present application, based on the foregoing scheme, the determining unit is configured to: according to the sequence of the statistic values from large to small, selecting the first n1 intra-frame prediction modes from the intra-frame prediction mode statistic results; or
And according to the sequence of the statistic values from large to small, selecting the first n1 SAWP-compliant intra-frame prediction modes from the intra-frame prediction mode statistic results.
In some embodiments of the present application, based on the foregoing, the first processing unit 1206 is configured to: generating a maximum possible intra-frame prediction mode initial list according to the intra-frame prediction modes adopted by the adjacent blocks and the SAWP index value; according to the sequence of statistics values from large to small, selecting the first n2 SAWP-compliant intra-frame prediction modes from the intra-frame prediction mode statistical results; replacing part of the intra prediction modes in the initial list of the maximum possible intra prediction modes by the first n2 SAWP-compliant intra prediction modes to obtain the list of the maximum possible intra prediction modes.
In some embodiments of the present application, based on the foregoing, the first processing unit 1206 is configured to: according to the order of statistics values from large to small, selecting the first n3 SAWP-compliant intra-prediction modes from the intra-prediction mode statistics results to generate the maximum possible intra-prediction mode list, wherein the number of the intra-prediction modes counted in the intra-prediction mode statistics results is greater than or equal to the number of the intra-prediction modes contained in the maximum possible intra-prediction mode list.
In some embodiments of the present application, based on the foregoing, the first processing unit 1206 is further configured to: determine whether a corresponding coding block needs, when decoding, to determine the maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics, according to at least one of the following:
the value of an index identifier contained in the sequence header of the coding block corresponding to the video image frame sequence;
the value of an index identifier contained in the picture header of the coding block corresponding to the video image frame.
Fig. 13 shows a block diagram of a video encoding apparatus according to an embodiment of the present application, which may be disposed in a device having a calculation processing function, such as a terminal device or a server.
Referring to fig. 13, a video encoding apparatus 1300 according to an embodiment of the present application includes: a determination unit 1302, a second acquisition unit 1304, a third processing unit 1306 and a fourth processing unit 1308.
The determining unit 1302 is configured to determine a SAWP index value corresponding to a coding block of a video image frame; the second obtaining unit 1304 is configured to obtain intra-frame prediction modes adopted by adjacent blocks of the coding blocks, and obtain intra-frame prediction mode statistics results obtained by performing statistics on the intra-frame prediction modes adopted by the coding blocks in the video image frame; the third processing unit 1306 is configured to determine a maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index values, and the intra prediction mode statistics; the fourth processing unit 1308 is configured to perform an encoding process on other encoded blocks of the video image frame based on the maximum possible intra prediction mode list.
FIG. 14 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1400 of the electronic device shown in fig. 14 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 14, the computer system 1400 includes a Central Processing Unit (CPU) 1401, which can perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1402 or a program loaded from a storage portion 1408 into a Random Access Memory (RAM) 1403. The RAM 1403 also stores various programs and data necessary for system operation. The CPU 1401, the ROM 1402, and the RAM 1403 are connected to one another via a bus 1404. An Input/Output (I/O) interface 1405 is also connected to the bus 1404.
The following components are connected to the I/O interface 1405: an input portion 1406 including a keyboard, a mouse, and the like; an output portion 1407 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 1408 including a hard disk and the like; and a communication section 1409 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 1409 performs communication processing via a network such as the Internet. A drive 1410 is also connected to the I/O interface 1405 as necessary. A removable medium 1411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1410 as necessary, so that a computer program read out therefrom is installed into the storage portion 1408 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 1409 and/or installed from the removable medium 1411. When the computer program is executed by the Central Processing Unit (CPU) 1401, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided to be embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (20)

1. A video decoding method, comprising:
decoding a coding block of a video image frame to obtain a spatial domain angle weighted prediction SAWP index value;
acquiring intra-frame prediction modes adopted by adjacent blocks of the coding blocks, and acquiring intra-frame prediction mode statistical results obtained by performing statistics on the intra-frame prediction modes adopted by the coding blocks in the video image frame;
determining a maximum possible intra-frame prediction mode list according to the intra-frame prediction modes adopted by the adjacent blocks, the SAWP index values and the intra-frame prediction mode statistical results;
decoding other encoded blocks of the video image frame based on the list of most probable intra-prediction modes.
2. The video decoding method of claim 1, wherein obtaining intra-prediction mode statistics that are statistics of intra-prediction modes employed by coding blocks in the video image frame comprises:
and counting the appointed intra-frame prediction mode adopted by the coding blocks in the video image frame to obtain the intra-frame prediction mode counting result.
3. The video decoding method of claim 2, wherein the specifying the intra-prediction mode comprises:
all intra prediction modes; or
A horizontal prediction mode and a vertical prediction mode; or
A horizontal prediction mode, a vertical prediction mode and a Bilinear prediction mode; or
A horizontal prediction mode, a vertical prediction mode, a Bilinear prediction mode and a DC prediction mode; or
An intra prediction mode selected from mode 0 to mode 33 among the intra prediction modes; or
An intra prediction mode selected from a horizontal prediction mode, a vertical prediction mode, a Bilinear prediction mode, a DC prediction mode, a class horizontal prediction mode, and a class vertical prediction mode; wherein the class horizontal prediction modes include mode 56 to mode 59, mode 23, and mode 25 among the intra prediction modes; the class of vertical prediction modes includes modes 42 to 45, 11, and 13 among the intra prediction modes; or
An intra prediction mode selected from intra prediction modes adopted by the SAWP; or
An intra prediction mode selected from the intra prediction modes counted by the frequency-based intra mode coding (FIMC) manner.
4. The video decoding method of claim 1, wherein obtaining intra-prediction mode statistics that are statistics of intra-prediction modes employed by coding blocks in the video image frame comprises:
and obtaining an intra-frame prediction mode statistical result obtained by the FIMC mode statistics.
5. The video decoding method of claim 4, wherein the method further comprises:
acquiring two intra-frame prediction modes of a SAWP mode obtained by decoding the coding blocks of the video image frame;
and updating the intra-frame prediction mode statistical result counted by the FIMC mode through 1 or 2 intra-frame prediction modes in the two intra-frame prediction modes.
6. The method of claim 1, wherein determining the maximum possible intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics comprises:
determining a plurality of candidate prediction modes with sequential relation according to the intra-frame prediction modes adopted by the adjacent blocks, the SAWP index values and the intra-frame prediction mode statistical results;
and according to the sequence relation, selecting a set number of different candidate prediction modes from the plurality of candidate prediction modes to generate the maximum possible intra-frame prediction mode list.
7. The method of claim 6, wherein determining candidate prediction modes with an order relation according to the intra prediction modes adopted by the neighboring blocks, the SAWP index values, and the intra prediction mode statistics comprises:
determining a first candidate mode set according to the intra-frame prediction modes adopted by the adjacent blocks, and determining a second candidate mode set according to the SAWP index value;
according to the sequence of the statistic values from large to small, selecting n1 intra-frame prediction modes from the intra-frame prediction mode statistic results;
determining the plurality of candidate prediction modes having the sequential relationship according to the first candidate mode set, the second candidate mode set and the n1 intra prediction modes.
8. The video decoding method of claim 7, wherein determining the plurality of candidate prediction modes having the order relationship according to the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes comprises:
replacing n1 intra prediction modes in the first candidate mode set with the n1 intra prediction modes to obtain an updated first candidate mode set, wherein n1 is less than or equal to the number of candidate modes contained in the first candidate mode set;
and generating the plurality of candidate prediction modes having the order relationship according to the second candidate mode set and the updated first candidate mode set.
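The replacement step of this claim can be sketched as follows (illustrative; which n1 entries of the first candidate mode set are overwritten is an assumption, since the claim only bounds n1 by the set size):

```python
def replace_in_first_set(first_set, top_modes):
    """Overwrite the first len(top_modes) entries of the neighbor-derived
    candidate set with the n1 most frequent modes from the statistics."""
    assert len(top_modes) <= len(first_set)  # n1 <= set size, per the claim
    updated = list(first_set)
    updated[:len(top_modes)] = top_modes
    return updated

updated = replace_in_first_set([0, 1, 2, 3], [30, 4])
```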
9. The video decoding method of claim 7, wherein determining the plurality of candidate prediction modes having the order relationship according to the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes comprises:
replacing n1 intra prediction modes in a third candidate mode set formed by the first candidate mode set and the second candidate mode set with the n1 intra prediction modes to obtain an updated third candidate mode set, wherein n1 is less than or equal to the number of candidate modes contained in the third candidate mode set;
and generating the plurality of candidate prediction modes having the order relationship according to the updated third candidate mode set.
10. The video decoding method of claim 7, wherein determining the plurality of candidate prediction modes having the order relationship according to the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes comprises:
sorting the candidate prediction modes in the first candidate mode set and the candidate prediction modes in the second candidate mode set to obtain ordered candidate prediction modes;
adding the n1 intra prediction modes at a designated position among the ordered candidate prediction modes to generate the plurality of candidate prediction modes having the order relationship;
wherein the designated position is the front of, the back of, or an interior position within the ordered candidate prediction modes.
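The placement options named above (front, back, or an interior position) can be sketched as follows (illustrative; the particular interior position chosen here is arbitrary):

```python
def place_top_modes(ordered, top_modes, where="front"):
    """Add the n1 most frequent modes at the front, the back, or an
    interior position of the already-ordered candidate modes."""
    if where == "front":
        return top_modes + ordered
    if where == "back":
        return ordered + top_modes
    mid = len(ordered) // 2  # interior position: an illustrative choice
    return ordered[:mid] + top_modes + ordered[mid:]

front = place_top_modes([1, 2, 3, 4], [30], "front")
inner = place_top_modes([1, 2, 3, 4], [30], "middle")
```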
11. The video decoding method of claim 7, wherein before determining the plurality of candidate prediction modes having the order relationship according to the first candidate mode set, the second candidate mode set, and the n1 intra prediction modes, the video decoding method further comprises:
replacing a first intra prediction mode contained in the first candidate mode set or in the n1 intra prediction modes with a vertical prediction mode or a vertical-like prediction mode, and replacing a second intra prediction mode contained therein with a horizontal prediction mode or a horizontal-like prediction mode;
wherein the first intra prediction mode is mode 0, mode 1, or mode 32 of the intra prediction modes, the second intra prediction mode is mode 2, mode 3, or mode 33 of the intra prediction modes, the horizontal-like prediction modes comprise modes 56 to 59, mode 23, and mode 25 of the intra prediction modes, and the vertical-like prediction modes comprise modes 42 to 45, mode 11, and mode 13 of the intra prediction modes.
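The remapping described above can be sketched as follows (the first/second mode groups follow the claim text, while the substitute vertical and horizontal mode numbers 12 and 24 are placeholders, not values taken from the claim):

```python
FIRST_MODES = {0, 1, 32}   # "first intra prediction mode" group, per the claim
SECOND_MODES = {2, 3, 33}  # "second intra prediction mode" group, per the claim

def remap_special_modes(modes, vertical=12, horizontal=24):
    """Replace first-group modes with a vertical(-like) mode and
    second-group modes with a horizontal(-like) mode."""
    return [vertical if m in FIRST_MODES else
            horizontal if m in SECOND_MODES else m
            for m in modes]

out = remap_special_modes([0, 5, 33, 32])
```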
12. The video decoding method according to any one of claims 7 to 11, wherein selecting n1 intra prediction modes from the intra prediction mode statistics in descending order of their statistical values comprises:
selecting, in descending order of the statistical values, the first n1 intra prediction modes from the intra prediction mode statistics; or
selecting, in descending order of the statistical values, the first n1 SAWP-compliant intra prediction modes from the intra prediction mode statistics.
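The two alternatives above differ only in whether a SAWP-compliance filter is applied before truncating to n1, e.g. (illustrative; `sawp_allowed` is a hypothetical whitelist of modes usable by SAWP):

```python
def top_n_sawp_modes(stats, n1, sawp_allowed=None):
    """Top-n1 modes by descending count, optionally restricted to
    SAWP-compliant modes before truncation."""
    ranked = sorted(stats, key=lambda m: (-stats[m], m))
    if sawp_allowed is not None:
        ranked = [m for m in ranked if m in sawp_allowed]
    return ranked[:n1]

# Mode 0 has the highest count but is filtered out by the whitelist.
picked = top_n_sawp_modes({4: 5, 0: 9, 12: 3}, 2, sawp_allowed={4, 12})
```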
13. The video decoding method of claim 1, wherein determining the most probable intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics comprises:
generating an initial most probable intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks and the SAWP index value;
selecting, in descending order of the statistical values, the first n2 SAWP-compliant intra prediction modes from the intra prediction mode statistics;
and replacing part of the intra prediction modes in the initial most probable intra prediction mode list with the first n2 SAWP-compliant intra prediction modes to obtain the most probable intra prediction mode list.
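The partial replacement above can be sketched as follows (illustrative; the claim does not specify which entries of the initial list are replaced, so overwriting the tail is an assumption):

```python
def refresh_mpm_tail(initial_list, top_modes):
    """Replace the last len(top_modes) entries of the initial MPM list
    with the n2 most frequent SAWP-compliant modes."""
    n2 = len(top_modes)
    return initial_list[:-n2] + top_modes if n2 else list(initial_list)

mpm = refresh_mpm_tail([0, 1, 2, 3], [30, 7])
```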
14. The video decoding method of claim 1, wherein determining the most probable intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics comprises:
selecting, in descending order of the statistical values, the first n3 SAWP-compliant intra prediction modes from the intra prediction mode statistics to generate the most probable intra prediction mode list, wherein the number of intra prediction modes counted in the intra prediction mode statistics is greater than or equal to the number of intra prediction modes contained in the most probable intra prediction mode list.
15. The video decoding method of claim 1, wherein the video decoding method further comprises: determining, according to at least one of the following, whether a corresponding coding block needs, when decoded, to have its most probable intra prediction mode list determined according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics:
the value of an index identifier contained in the sequence header corresponding to the video image frame sequence to which the coding block belongs;
and the value of an index identifier contained in the picture header corresponding to the video image frame to which the coding block belongs.
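The sequence-level / picture-level switch can be sketched as follows (the field name `fimc_sawp_mpm_flag` is hypothetical; the claim only requires an index identifier at either level):

```python
def mpm_with_fimc_enabled(sequence_header, picture_header):
    """Enabled if the identifier is set at sequence or picture level."""
    return bool(sequence_header.get("fimc_sawp_mpm_flag", 0)
                or picture_header.get("fimc_sawp_mpm_flag", 0))

enabled = mpm_with_fimc_enabled({"fimc_sawp_mpm_flag": 1}, {})
disabled = mpm_with_fimc_enabled({}, {})
```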
16. A video encoding method, comprising:
determining a SAWP index value corresponding to a coding block of a video image frame;
acquiring intra prediction modes adopted by neighboring blocks of the coding block, and acquiring intra prediction mode statistics obtained by performing statistics on the intra prediction modes adopted by coded blocks in the video image frame;
determining a most probable intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics;
and performing encoding processing on other coding blocks of the video image frame based on the most probable intra prediction mode list.
17. A video decoding apparatus, comprising:
a decoding unit, configured to decode a coding block of a video image frame to obtain a spatial angular weighted prediction (SAWP) index value;
a first acquiring unit, configured to acquire intra prediction modes adopted by neighboring blocks of the coding block, and acquire intra prediction mode statistics obtained by performing statistics on the intra prediction modes adopted by coded blocks in the video image frame;
a first processing unit, configured to determine a most probable intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics;
and a second processing unit, configured to perform decoding processing on other coding blocks of the video image frame based on the most probable intra prediction mode list.
18. A video encoding apparatus, comprising:
a determining unit, configured to determine a SAWP index value corresponding to a coding block of a video image frame;
a second acquiring unit, configured to acquire intra prediction modes adopted by neighboring blocks of the coding block, and acquire intra prediction mode statistics obtained by performing statistics on the intra prediction modes adopted by coded blocks in the video image frame;
a third processing unit, configured to determine a most probable intra prediction mode list according to the intra prediction modes adopted by the neighboring blocks, the SAWP index value, and the intra prediction mode statistics;
and a fourth processing unit, configured to perform encoding processing on other coding blocks of the video image frame based on the most probable intra prediction mode list.
19. A computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the video decoding method of any one of claims 1 to 15 or the video encoding method of claim 16.
20. An electronic device, comprising:
one or more processors;
and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the video decoding method of any one of claims 1 to 15 or the video encoding method of claim 16.
CN202110273043.7A 2021-03-14 2021-03-14 Video encoding and decoding method and device, computer readable medium and electronic equipment Pending CN115086654A (en)


Publications (1)

Publication Number Publication Date
CN115086654A true CN115086654A (en) 2022-09-20

Family

ID=83241366



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40074378
Country of ref document: HK