CN110572679B - Method, device and equipment for coding intra-frame prediction and readable storage medium - Google Patents


Info

Publication number
CN110572679B
CN110572679B (application CN201910926492.XA)
Authority
CN
China
Prior art keywords
prediction
mode
partition
unit
prediction unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910926492.XA
Other languages
Chinese (zh)
Other versions
CN110572679A
Inventor
Han Qingrui (韩庆瑞)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910926492.XA priority Critical patent/CN110572679B/en
Publication of CN110572679A publication Critical patent/CN110572679A/en
Application granted granted Critical
Publication of CN110572679B publication Critical patent/CN110572679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/593: Predictive coding involving spatial prediction techniques

Abstract

The present application discloses an intra-frame prediction encoding method, apparatus, and device, and a readable storage medium, and relates to the field of video processing. The method includes: acquiring an image frame to be encoded; dividing the LCUs in the image frame to be encoded in a first division mode and a second division mode; when the coding cost of the second division mode is not greater than the coding cost of the first division mode, dividing the LCU in a third division mode; and encoding the image frame to be encoded according to the resulting division of the coding units. When the first division mode, which has the smaller division granularity, is better than the second division mode, the first division mode is used as the division of the coding units within the second prediction unit; when the second division mode is better than the first division mode, the comparison continues between the second division mode and the third division mode. This avoids a large amount of redundant computation and improves the efficiency of intra-frame prediction.

Description

Method, device and equipment for coding intra-frame prediction and readable storage medium
Technical Field
The present disclosure relates to the field of video processing, and in particular, to an intra prediction encoding method, apparatus, device, and readable storage medium.
Background
In the process of encoding a video, an I frame needs to be encoded by means of intra-frame prediction. During intra-frame prediction, the I frame is divided using the Largest Coding Unit (LCU) as the division range, and each LCU is divided according to the complexity of its image content. Generally, complex portions of the image content are divided into smaller coding units, for example coding units of size 4 × 4, while flatter, more uniform portions are divided into larger coding units, for example coding units of size 16 × 16.
In the related art, when dividing an LCU into coding units, the LCU is first predicted at its own size (e.g., 64 × 64), then predicted with coding units of size 32 × 32, and the two prediction results are compared; within each 32 × 32 coding unit, prediction is performed with 16 × 16 coding units, and prediction with 8 × 8 and 4 × 4 coding units proceeds step by step in the same way, until the division of the coding units in the LCU is finally completed.
However, this division process requires complete prediction at every size in the LCU, while most of the final division results are concentrated in coding units of size 8 × 8 and 4 × 4. Dividing the coding units in the above manner is therefore inefficient, and the division process is complicated.
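The related-art top-down search described above can be summarized with a minimal sketch (the `predict_cost` callback and the function names are illustrative assumptions, not part of the patent): every coding-unit size from the LCU down to 4 × 4 is fully predicted before the cheaper alternative is kept.

```python
def top_down_cost(x, y, size, predict_cost, min_size=4):
    """Return the best coding cost for the coding unit at (x, y).

    predict_cost(x, y, size) is an assumed callback that runs intra
    prediction once for a unit of the given size and returns its cost.
    """
    cost_here = predict_cost(x, y, size)      # predict at this size first
    if size == min_size:
        return cost_here
    half = size // 2                          # recurse into the four sub-units
    cost_split = sum(
        top_down_cost(x + dx, y + dy, half, predict_cost, min_size)
        for dy in (0, half) for dx in (0, half)
    )
    return min(cost_here, cost_split)         # keep the cheaper alternative
```

Because the recursion always evaluates `predict_cost` at every size, the work spent on 64 × 64, 32 × 32, and 16 × 16 predictions is wasted whenever the final partition uses only 8 × 8 and 4 × 4 units, which is exactly the redundancy the patent targets.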
Disclosure of Invention
The embodiments of the present application provide an intra-frame prediction encoding method, apparatus, and device, and a readable storage medium, which can solve the problems of low division efficiency and a complicated division process when coding units are divided. The technical solution is as follows:
in one aspect, an encoding method for intra prediction is provided, the method including:
acquiring an image frame to be encoded, wherein the image frame to be encoded is an image frame encoded in an intra-frame prediction mode;
dividing a largest coding unit LCU in the image frame to be encoded in a first division mode and a second division mode, wherein the first division mode is a mode of dividing a first prediction unit in the largest coding unit LCU, the second division mode is a mode of dividing a second prediction unit in the largest coding unit LCU, the first prediction unit is obtained by division within the second prediction unit, and the size of the first prediction unit is smaller than that of the second prediction unit;
when the coding cost of the second division mode is not greater than the coding cost of the first division mode, dividing the largest coding unit LCU in a third division mode, and determining the division mode of the coding units to be encoded according to the second division mode and the third division mode, wherein the third division mode is a mode of dividing a third prediction unit in the largest coding unit LCU, and the second prediction unit is obtained by division within the third prediction unit;
and coding the largest coding unit LCU in the image frame to be coded according to the division mode of the coding units.
In another aspect, an apparatus for encoding intra prediction is provided, the apparatus including:
an acquisition module, configured to acquire an image frame to be encoded, wherein the image frame to be encoded is an image frame encoded in an intra-frame prediction mode;
a dividing module, configured to divide a largest coding unit LCU in the image frame to be encoded in a first division mode and a second division mode, wherein the first division mode is a mode of dividing a first prediction unit in the largest coding unit LCU, the second division mode is a mode of dividing a second prediction unit in the largest coding unit LCU, the first prediction unit is obtained by division within the second prediction unit, and the size of the first prediction unit is smaller than that of the second prediction unit;
a determining module, configured to, when the coding cost of the second division mode is not greater than the coding cost of the first division mode, divide the largest coding unit LCU in a third division mode, and determine the division mode of the coding units to be encoded according to the second division mode and the third division mode, wherein the third division mode is a mode of dividing a third prediction unit in the largest coding unit LCU, and the second prediction unit is obtained by division within the third prediction unit;
and the coding module is used for coding the largest coding unit LCU in the image frame to be coded according to the dividing mode of the coding unit.
In another aspect, a computer device is provided, which includes a processor and a memory, wherein at least one instruction, at least one program, code set, or instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the intra-prediction encoding method as provided in the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or an instruction set is stored, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the intra-prediction encoding method as provided in the embodiments of the present application.
In another aspect, a computer program product is provided, which when run on a computer causes the computer to execute the encoding method for intra prediction as provided in the embodiments of the present application.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
By comparing the first division mode with the second division mode, when the first division mode, which has the smaller division granularity, is better than the second division mode, the first division mode is used as the division of the coding units within the second prediction unit; when the second division mode is better than the first division mode, the comparison continues between the second division mode and the third division mode. This avoids the large amount of redundant computation generated when prediction proceeds from large coding blocks to small coding blocks, improves the efficiency of intra-frame prediction, and reduces its complexity.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a diagram illustrating encoding of a current block provided by an exemplary embodiment of the present application;
fig. 2 is a block diagram of a communication system 200 provided in an exemplary embodiment of the present application;
FIG. 3 illustrates placement of a video encoder and a video decoder in a streaming environment, as provided by an exemplary embodiment of the present application;
fig. 4 is a schematic diagram illustrating a division result when an image frame is divided by an encoding unit according to an exemplary embodiment of the present application;
FIG. 5 is a flowchart of an encoding method for intra prediction provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of an intra-prediction encoding method according to another exemplary embodiment of the present application;
fig. 7 is a schematic diagram of a division manner of coding blocks provided by an exemplary embodiment of the present application;
fig. 8 is a flowchart of an encoding method for intra prediction according to another exemplary embodiment of the present application;
FIG. 9 is a diagram illustrating a dependency relationship of prediction angles among multi-layered coding units according to an exemplary embodiment of the present application;
fig. 10 is a block diagram illustrating an apparatus for encoding intra prediction according to an exemplary embodiment of the present application;
fig. 11 is a block diagram illustrating an intra prediction encoding apparatus according to another exemplary embodiment of the present application;
fig. 12 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Various Motion Vector (MV) prediction mechanisms are described in H.265/HEVC (ITU-T Recommendation H.265, "High Efficiency Video Coding", December 2016). Among the various MV prediction mechanisms provided by HEVC, the technique described here is referred to hereinafter as "spatial merging".
Referring to fig. 1, a current block (101) includes samples that the encoder has found during motion search and that can be predicted from a previously coded block of the same size at a spatial offset. In addition, instead of encoding the MVs directly, the MVs may be derived from metadata associated with one or more reference pictures, for example from the most recent reference picture (in decoding order), using the MV associated with any one of five surrounding samples, denoted A0, A1, and B0, B1, B2 (102 through 106, respectively). In H.265, MV prediction can use predictors from the same reference picture that the neighboring block is also using.
Fig. 2 is a simplified block diagram of a communication system (200) according to an embodiment disclosed herein. The communication system (200) includes a plurality of terminal devices that can communicate with each other through, for example, a network (250). For example, a communication system (200) includes a first terminal device (210) and a second terminal device (220) interconnected by a network (250). In the embodiment of fig. 2, the first terminal device (210) and the second terminal device (220) perform unidirectional data transmission. For example, a first end device (210) may encode video data, such as a stream of video pictures captured by the end device (210), for transmission over a network (250) to a second end device (220). The encoded video data is transmitted in the form of one or more encoded video streams. The second terminal device (220) may receive the encoded video data from the network (250), decode the encoded video data to recover the video data, and display a video picture according to the recovered video data. Unidirectional data transmission is common in applications such as media services.
In another embodiment, a communication system (200) includes a third terminal device (230) and a fourth terminal device (240) that perform bidirectional transmission of encoded video data, which may occur, for example, during a video conference. For bi-directional data transmission, each of the third terminal device (230) and the fourth terminal device (240) may encode video data (e.g., a stream of video pictures captured by the terminal device) for transmission over the network (250) to the other of the third terminal device (230) and the fourth terminal device (240). Each of the third terminal device (230) and the fourth terminal device (240) may also receive encoded video data transmitted by the other of the third terminal device (230) and the fourth terminal device (240), and may decode the encoded video data to recover the video data, and may display video pictures on an accessible display device according to the recovered video data.
In the embodiment of fig. 2, the first terminal device (210), the second terminal device (220), the third terminal device (230), and the fourth terminal device (240) may be a server, a personal computer, and a smart phone, but the principles disclosed herein may not be limited thereto. Embodiments disclosed herein are applicable to laptop computers, tablet computers, media players, and/or dedicated video conferencing equipment. Network (250) represents any number of networks that communicate encoded video data between first terminal device (210), second terminal device (220), third terminal device (230), and fourth terminal device (240), including, for example, wired (wired) and/or wireless communication networks. The communication network (250) may exchange data in circuit-switched and/or packet-switched channels. The network may include a telecommunications network, a local area network, a wide area network, and/or the internet. For purposes of this application, the architecture and topology of the network (250) may be immaterial to the operation disclosed herein, unless explained below.
By way of example, fig. 3 illustrates the placement of a video encoder and a video decoder in a streaming environment. The subject matter disclosed herein is equally applicable to other video-enabled applications including, for example, video conferencing, digital TV, storing compressed video on digital media including CDs, DVDs, memory sticks, and the like.
The streaming system may include an acquisition subsystem (313), which may include a video source (301), such as a digital camera, that creates an uncompressed video picture stream (302). In an embodiment, the video picture stream (302) includes samples taken by the digital camera. The video picture stream (302) is depicted as a thick line to emphasize its high data volume compared to the encoded video data (304) (or encoded video code stream), and can be processed by an electronic device (320) that includes a video encoder (303) coupled to the video source (301). The video encoder (303) may comprise hardware, software, or a combination of the two to implement or embody aspects of the disclosed subject matter as described in more detail below. The encoded video data (304) (or encoded video code stream (304)) is depicted as a thin line to emphasize its lower data volume compared to the video picture stream (302), and may be stored on a streaming server (305) for future use.
One or more streaming client subsystems, such as client subsystem (306) and client subsystem (308) in fig. 3, may access streaming server (305) to retrieve copies (307) and copies (309) of encoded video data (304). The client subsystem (306) may include, for example, a video decoder (310) in an electronic device (330). The video decoder (310) decodes incoming copies (307) of the encoded video data and generates an output video picture stream (311) that may be presented on a display (312), such as a display screen, or another presentation device (not depicted). In some streaming systems, encoded video data (304), video data (307), and video data (309) (e.g., video streams) may be encoded according to certain video encoding/compression standards.
Examples of such standards include ITU-T H.265. In an embodiment, the video coding standard under development is informally referred to as Versatile Video Coding (VVC), and the present application may be used in the context of the VVC standard.
It should be noted that electronic device (320) and electronic device (330) may include other components (not shown). For example, the electronic device (320) may include a video decoder (not shown), and the electronic device (330) may also include a video encoder (not shown).
In the process of encoding video, I frames (i.e., key frames) are encoded in an intra-frame coding manner, and each I frame is divided into coding units. Generally, the more complex parts of the image content are divided into smaller coding units, for example coding units of size 4 × 4, while the more uniform parts are divided into larger coding units, for example coding units of size 16 × 16.
Referring to fig. 4, after the image frame 400 is divided into coding units, the coding units 411 that divide the low-complexity texture area 410 are larger in size, while the coding units 421 that divide the high-complexity texture area 420 are smaller in size.
Illustratively, the proportions of coding units of different sizes obtained by division in different I frames are shown in Table 1 below:
Table 1
(Table 1 is rendered as an image in the original document; it lists the proportions of coding units of each size in different I frames.)
As can be seen from Table 1, the coding units obtained by the 4 × 4 and 8 × 8 division modes account for the largest proportion, together close to 80%, while 16 × 16 and 32 × 32 coding units are far fewer; when the I-frame resolution is low, the number of 16 × 16 and 32 × 32 coding units approaches zero. A large amount of redundant computation is therefore generated when prediction proceeds from large-size coding units down to small-size coding units.
Fig. 5 is a flowchart of an encoding method for intra prediction according to an exemplary embodiment of the present application, which is described by way of example as being applied to an encoder, and as shown in fig. 5, the method includes:
step 501, obtaining an image frame to be encoded, where the image frame to be encoded is an image frame encoded in an intra-frame prediction mode.
Optionally, the image frame to be encoded is an I frame in a video image frame, that is, a key frame, and optionally, the key frame is an image frame encoded by an intra-frame prediction encoding method.
Optionally, when the image frame to be encoded is acquired, a video stream to be encoded is acquired first, each frame of key frame in the video stream is acquired in sequence, and the current key frame to be encoded is used as the image frame to be encoded.
Step 502, dividing a largest coding unit LCU in an image frame to be coded in a first division manner and a second division manner, wherein the first division manner is a manner of dividing a first prediction unit in the largest coding unit LCU, the second division manner is a manner of dividing a second prediction unit in the largest coding unit LCU, and the first prediction unit is obtained by dividing in the second prediction unit.
Optionally, the image frame to be encoded is divided into a number of largest coding units LCU, and the division of the coding units is performed for each LCU. Optionally, the first prediction unit is smaller in size than the second prediction unit.
Optionally, taking an example of a coding unit partitioning process of a single LCU, first, the LCU is partitioned in a first partitioning manner and a second partitioning manner, where the first partitioning manner is a manner of partitioning the first prediction unit in the second prediction unit partitioned by the second partitioning manner.
Optionally, the first partition manner and the second partition manner may be performed from preset positions in the LCU, such as: the second partition manner is to partition from the upper left corner of the LCU to obtain 1 second prediction unit with 8 × 8 size, and the first partition manner is to partition in the second prediction unit with 8 × 8 size to obtain 4 first prediction units with 4 × 4 size; or, the second partition manner is to partition 4 second prediction units with 8 × 8 size from the upper left corner of the LCU, and the first partition manner is to partition 4 first prediction units with 4 × 4 size in each second prediction unit with 8 × 8 size.
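The geometric relationship between the two division modes described above, one prediction unit splitting into four half-size units, can be sketched as follows (the coordinates and the function name are illustrative):

```python
def split_into_quads(x, y, size):
    """Split the prediction unit at (x, y) of the given size into the four
    half-size units that the next-smaller division mode predicts."""
    half = size // 2
    return [(x, y, half), (x + half, y, half),
            (x, y + half, half), (x + half, y + half, half)]

# An 8x8 second prediction unit at the upper-left corner of the LCU yields
# four 4x4 first prediction units.
quads = split_into_quads(0, 0, 8)
```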
Step 503, when the first division mode is better than the second division mode, taking the first division mode as the division mode of the coding units to be encoded within the second prediction unit corresponding to the second division mode.
Optionally, when the first partition mode is better than the second partition mode, that is, the small-sized partition mode is better in the coding prediction, the first partition mode is used as the partition mode of the coding unit in the second prediction unit range.
Illustratively, dividing the second prediction unit of size 8 × 8 yields 4 first prediction units of size 4 × 4; if the prediction result of these 4 first prediction units is better than that of the 8 × 8 second prediction unit, the 4 × 4 division is used as the division of the coding units within the second prediction unit.
Optionally, the second prediction unit is divided in a first division manner to obtain n first prediction units, where n is a positive integer, the n first prediction units are predicted to obtain n first coding costs, the second prediction unit is predicted to obtain a second coding cost, and when the sum of the n first coding costs is less than the second coding cost, it is determined that the first division manner is better than the second division manner.
Optionally, each second prediction unit (8 × 8) is divided into 4 first prediction units (4 × 4) by the first division mode, and the LCU is divided into 4 second prediction units by the second division mode; each of the 4 second prediction units is thus divided into a group of first prediction units. When, for any one of the 4 groups, the sum of the group's first coding costs is smaller than the corresponding second coding cost, the first division mode is better than the second division mode.
Illustratively, second prediction unit A (coding cost a) is divided into first prediction units 1 through 4 (coding costs 1 through 4), second prediction unit B (coding cost b) into first prediction units 5 through 8 (coding costs 5 through 8), second prediction unit C (coding cost c) into first prediction units 9 through 12 (coding costs 9 through 12), and second prediction unit D (coding cost d) into first prediction units 13 through 16 (coding costs 13 through 16). If the sum of coding costs 1 through 4 is less than coding cost a, the first division mode is determined to be better than the second division mode, and the division of the first prediction units is used as the division of the coding units within second prediction unit A.
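A minimal sketch of this comparison rule (the unit labels and cost values are illustrative, not from the patent): for each second prediction unit, the first division mode wins when the sum of its four first-unit coding costs is smaller than the unit's own coding cost.

```python
def units_where_first_mode_wins(second_costs, first_costs):
    """second_costs maps each second prediction unit to its coding cost;
    first_costs maps each unit to the list of its four first-unit costs.
    Returns the units for which the smaller (4x4) division is cheaper."""
    return {u for u, cost in second_costs.items()
            if sum(first_costs[u]) < cost}

# Mirroring the example above: only unit A's four 4x4 costs sum to less
# than its own cost, so only A is coded with the first division mode.
second = {"A": 10, "B": 5, "C": 5, "D": 5}
first = {u: [2, 2, 2, 2] for u in second}
```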
Optionally, after the first partition manner is used as the partition manner of the coding unit in the second prediction unit, the undivided region is continuously partitioned in the LCU in the first partition manner and the second partition manner, and the advantages and disadvantages of the first partition manner and the second partition manner are compared until the LCU is completely partitioned.
Illustratively, if the 4 × 4 prediction unit partition and the 8 × 8 prediction unit partition are performed from the upper left corner of the LCU and it is determined that the 4 × 4 prediction unit partition is better than the 8 × 8 prediction unit partition, the 4 × 4 partition is used as the partition of the coding unit in the 8 × 8 prediction unit, and the 4 × 4 prediction unit partition and the 8 × 8 prediction unit partition are performed on the left side of the 8 × 8 prediction unit.
Step 504, when the second division mode is better than the first division mode, dividing the largest coding unit LCU in a third division mode, and determining the division mode of the coding units to be encoded according to the second division mode and the third division mode.
Optionally, the third partition manner is a manner of partitioning a third prediction unit in the largest coding unit LCU, and the second prediction unit is partitioned within the third prediction unit, that is, the second partition manner is a manner of partitioning the second prediction unit within the third prediction unit partitioned by the third partition manner.
Illustratively, taking the second partition manner as an example of dividing the second prediction unit with a size of 8 × 8 in the LCU, the third partition manner is dividing the third prediction unit with a size of 16 × 16 in the LCU, the second prediction unit is a prediction unit divided from the third prediction unit, and 4 second prediction units are divided from the third prediction unit.
Optionally, when, for each of the m second prediction units, the second coding cost is not greater than the sum of the n corresponding first coding costs, it is determined that the second partition manner is better than the first partition manner, where the m second prediction units are arranged consecutively and m is a positive integer. Optionally, m is 4, and the m second prediction units are arranged in a square.
Illustratively, the second prediction unit A (coding cost a) is divided into first prediction units 1 to 4 (coding costs 1 to 4), the second prediction unit B (coding cost b) is divided into first prediction units 5 to 8 (coding costs 5 to 8), the second prediction unit C (coding cost c) is divided into first prediction units 9 to 12 (coding costs 9 to 12), and the second prediction unit D (coding cost d) is divided into first prediction units 13 to 16 (coding costs 13 to 16). When the sum of coding costs 1 to 4 is not less than coding cost a, the sum of coding costs 5 to 8 is not less than coding cost b, the sum of coding costs 9 to 12 is not less than coding cost c, and the sum of coding costs 13 to 16 is not less than coding cost d, it is determined that the second partition manner is better than the first partition manner, and the LCU is divided in the third partition manner.
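The per-unit comparison in the example above can be sketched as follows. This is a minimal illustration with placeholder costs; the function name and the data layout are assumptions, not the patent's implementation:

```python
# Hypothetical sketch: the second partition manner "wins" only if, for each of
# the m = 4 second prediction units, its coding cost is not greater than the
# sum of the coding costs of the 4 first prediction units it contains.

def second_beats_first(second_costs, first_costs):
    """second_costs: list of m costs; first_costs: list of m lists of sub-unit costs."""
    return all(c2 <= sum(c1s) for c2, c1s in zip(second_costs, first_costs))

# Second prediction units A..D and the costs of their 4 first prediction units.
second = [10.0, 12.0, 9.5, 11.0]
first = [[3.0, 3.0, 2.5, 2.0],   # units 1-4  (sum 10.5 >= 10.0)
         [4.0, 3.0, 3.5, 2.0],   # units 5-8  (sum 12.5 >= 12.0)
         [2.5, 2.5, 2.5, 2.5],   # units 9-12 (sum 10.0 >= 9.5)
         [3.0, 3.0, 3.0, 2.5]]   # units 13-16 (sum 11.5 >= 11.0)

print(second_beats_first(second, first))  # True: proceed to the third partition manner
```

If any single second prediction unit costs more than its four sub-units combined, the function returns False and the finer partition is kept for that region.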
Step 505: encode the largest coding unit LCU in the image frame to be encoded according to the partition manner of the coding unit.
Optionally, each coding unit in the LCU is encoded in turn with each coding unit as a coding range. Optionally, for each LCU in the image frame to be encoded, the encoding process of the image frame to be encoded is implemented through the above steps 501 to 505.
In summary, in the intra-frame prediction encoding method provided in this embodiment, the first partition manner and the second partition manner are compared first; when the first partition manner, which has the smaller partition granularity, is better than the second partition manner, the first partition manner is used as the partition manner of the coding unit within the second prediction unit, and when the second partition manner is better than the first partition manner, the second partition manner is further compared with the third partition manner. This avoids the many redundant computations generated when coding proceeds from large coding blocks down to small coding blocks, improves the efficiency of intra-frame prediction, and reduces the complexity of intra-frame prediction.
In an optional embodiment, when the partition manner of the coding unit is determined according to the second partition manner and the third partition manner, the two partition manners need to be compared. Fig. 6 is a flowchart of an intra-prediction encoding method according to another exemplary embodiment of the present application, described by taking application of the method to an encoder as an example. As shown in fig. 6, the method includes:
step 601, obtaining an image frame to be encoded, where the image frame to be encoded is an image frame encoded in an intra-frame prediction mode.
Optionally, the image frame to be encoded is an I frame in a video image frame, that is, a key frame, and optionally, the key frame is an image frame encoded by an intra-frame prediction encoding method.
Step 602, dividing a largest coding unit LCU in an image frame to be coded in a first division manner and a second division manner, wherein the first division manner is a manner of dividing a first prediction unit in the largest coding unit LCU, the second division manner is a manner of dividing a second prediction unit in the largest coding unit LCU, and the first prediction unit is obtained by dividing in the second prediction unit.
Optionally, the first partition manner and the second partition manner may be performed from preset positions in the LCU, such as: the second partition manner is to partition from the upper left corner of the LCU to obtain 1 second prediction unit with 8 × 8 size, and the first partition manner is to partition in the second prediction unit with 8 × 8 size to obtain 4 first prediction units with 4 × 4 size; or, the second partition manner is to partition 4 second prediction units with 8 × 8 size from the upper left corner of the LCU, and the first partition manner is to partition 4 first prediction units with 4 × 4 size in each second prediction unit with 8 × 8 size.
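The coordinates of the prediction units produced by the two partition manners above can be enumerated with a small helper. This is a sketch under the stated sizes (8 × 8 second prediction units, each split into four 4 × 4 first prediction units); the helper name is an assumption, not part of the patent:

```python
# Hypothetical helper: tile a size x size block, anchored at (x, y) in the LCU,
# into sub x sub prediction units, returned as (x, y, size) tuples.

def split_into_units(x, y, size, sub):
    return [(x + dx, y + dy, sub)
            for dy in range(0, size, sub)   # rows first (top to bottom)
            for dx in range(0, size, sub)]  # then columns (left to right)

# Second partition manner: 4 second prediction units (8x8) from the upper-left
# 16x16 region of the LCU; first partition manner: 4 first prediction units
# (4x4) inside each of them.
second_units = split_into_units(0, 0, 16, 8)
first_units = [split_into_units(x, y, s, 4) for (x, y, s) in second_units]

print(second_units)         # [(0, 0, 8), (8, 0, 8), (0, 8, 8), (8, 8, 8)]
print(len(first_units[0]))  # 4
```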
Step 603, when the first partition method is better than the second partition method, the first partition method is used as the partition method of the coding unit in the second prediction unit corresponding to the second partition method.
Optionally, when the first partition mode is better than the second partition mode, that is, the small-sized partition mode is better in the coding prediction, the first partition mode is used as the partition mode of the coding unit in the second prediction unit range.
Optionally, when the first partition manner is better than the second partition manner, a first flag bit corresponding to the first partition manner is generated for the second prediction unit, and the first flag bit is used for indicating that the decoded block obtained by partitioning the second prediction unit in the first partition manner is decoded in the decoding process.
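The flag generation described here can be sketched as a simple per-unit record; this is a toy illustration, and the dictionary store and names are assumptions rather than the patent's bitstream syntax:

```python
# Hypothetical record of partition flags: for each second prediction unit that
# the first partition manner wins, a flag tells the decoder to decode the
# blocks obtained by splitting that unit in the first partition manner.

partition_flags = {}  # (x, y, size) -> chosen partition manner

def mark_first_partition(unit):
    """unit: the (x, y, size) of a second prediction unit."""
    partition_flags[unit] = "first"  # decoder splits this unit before decoding

mark_first_partition((0, 0, 8))
print(partition_flags[(0, 0, 8)])  # first
```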
Step 604: when the second partition manner is better than the first partition manner, divide the largest coding unit LCU in a third partition manner.
Optionally, the third partition manner is a manner of partitioning a third prediction unit in the largest coding unit LCU, and the second prediction unit is partitioned within the third prediction unit.
Illustratively, taking the second partition manner as an example of dividing the second prediction unit with a size of 8 × 8 in the LCU, the third partition manner is dividing the third prediction unit with a size of 16 × 16 in the LCU, the second prediction unit is a prediction unit divided from the third prediction unit, and 4 second prediction units are divided from the third prediction unit.
In step 605, when the second partition method is better than the third partition method, the second partition method is used as the partition method of the coding unit in the third prediction unit corresponding to the third partition method.
Optionally, the third prediction unit is divided into n second prediction units in the second partition manner, for example, 4 second prediction units of size 8 × 8 are obtained by dividing the 16 × 16 third prediction unit in the second partition manner. The n second prediction units are predicted to obtain n second coding costs, the third prediction unit is predicted to obtain a third coding cost, and when the sum of the n second coding costs is less than the third coding cost, it is determined that the second partition manner is better than the third partition manner.
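The sum-based comparison in this step can be sketched in one line; the numbers below are placeholders and the function name is an assumption:

```python
# Hypothetical sketch: the finer (second) partition manner beats the coarser
# (third) one when the n sub-unit coding costs sum to less than the cost of
# predicting the whole third prediction unit.

def finer_is_better(sub_costs, whole_cost):
    return sum(sub_costs) < whole_cost

# Four 8x8 second coding costs vs. one 16x16 third coding cost.
print(finer_is_better([9.0, 8.5, 9.5, 8.0], 36.0))  # True: keep the second partition manner
```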
Optionally, when the second division manner is better than the third division manner, a second flag bit corresponding to the second division manner is generated for the third prediction unit, and the second flag bit is used for indicating that the decoded block obtained by dividing the third prediction unit in the second division manner is decoded in the decoding process.
Step 606: when the third partition manner is better than the second partition manner and the size of the third prediction unit is smaller than the size of the LCU, determine a fourth partition manner.
Optionally, the fourth partition manner is a manner of partitioning a fourth prediction unit within the largest coding unit LCU, and the third prediction unit is partitioned within the fourth prediction unit.
Optionally, when, for each of the m third prediction units, the third coding cost is not greater than the sum of the n corresponding second coding costs, it is determined that the third partition manner is better than the second partition manner, where the m third prediction units are arranged consecutively and m is a positive integer. Optionally, m is 4, and the m third prediction units are arranged in a square.
Illustratively, taking the third division manner as an example of dividing the third prediction unit with a size of 16 × 16 in the LCU, the fourth division manner is dividing the fourth prediction unit with a size of 32 × 32 in the LCU, the third prediction unit is a prediction unit divided from the fourth prediction unit, and 4 third prediction units are divided from the fourth prediction unit.
Step 607, determining the partition mode of the coding unit according to the third partition mode and the fourth partition mode.
Optionally, when the third partition method is better than the fourth partition method, the third partition method is used as the partition method of the coding unit in the fourth prediction unit corresponding to the fourth partition method. Optionally, when the third division manner is better than the fourth division manner, a third flag corresponding to the third division manner is generated for the fourth prediction unit, and the third flag is used for indicating that the decoded block divided by the fourth prediction unit in the third division manner is decoded in the decoding process.
When the fourth partition manner is better than the third partition manner, the fourth partition manner is used as the partition manner of the coding unit in the LCU.
Optionally, the fourth prediction unit is divided into n third prediction units in the third partition manner, for example, 4 third prediction units of size 16 × 16 are obtained by dividing the 32 × 32 fourth prediction unit in the third partition manner. The n third prediction units are predicted to obtain n third coding costs, the fourth prediction unit is predicted to obtain a fourth coding cost, and when the sum of the n third coding costs is less than the fourth coding cost, it is determined that the third partition manner is better than the fourth partition manner.
Optionally, when, for each of the m fourth prediction units, the fourth coding cost is not greater than the sum of the n corresponding third coding costs, it is determined that the fourth partition manner is better than the third partition manner, where the m fourth prediction units are arranged consecutively and m is a positive integer. Optionally, m is 4, and the m fourth prediction units are arranged in a square. Referring to fig. 7, which schematically shows the division of coding blocks: sequence number 1 represents a 32 × 32 coding block, sequence numbers 2, 7, 12 and 17 represent 4 16 × 16 coding blocks, and sequence numbers 3, 4, 8, 9, 5, 6, 10, 11, 13, 14, 18, 19, 15, 16, 20 and 21 represent 16 8 × 8 coding blocks; optionally, each 8 × 8 coding block may be further divided into 4 × 4 coding blocks.
Step 608: encode the largest coding unit LCU in the image frame to be encoded according to the partition manner of the coding unit.
Optionally, each coding unit in the LCU is encoded in turn with each coding unit as a coding range.
In summary, in the intra-frame prediction encoding method provided in this embodiment, the first partition manner and the second partition manner are compared first; when the first partition manner, which has the smaller partition granularity, is better than the second partition manner, the first partition manner is used as the partition manner of the coding unit within the second prediction unit, and when the second partition manner is better than the first partition manner, the second partition manner is further compared with the third partition manner. This avoids the many redundant computations generated when coding proceeds from large coding blocks down to small coding blocks, improves the efficiency of intra-frame prediction, and reduces the complexity of intra-frame prediction.
In the method provided by this embodiment, the first partition manner and the second partition manner are compared first; when the second partition manner is better, the second partition manner is compared with the third partition manner, and when the third partition manner is better, the third partition manner is compared with the fourth partition manner, proceeding layer by layer. This avoids the many redundant computations generated when coding proceeds from large coding blocks down to small coding blocks, improves the efficiency of intra-frame prediction, and reduces the complexity of intra-frame prediction.
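The layer-by-layer ascent described above can be condensed into a short loop. This is a simplified sketch with placeholder total costs per partition size; the function name and the dictionary input are assumptions, not the patent's method:

```python
# Hypothetical sketch of the bottom-up comparison: costs are compared between
# adjacent layers (4x4 vs 8x8, then 8x8 vs 16x16, ...); the ascent stops at
# the first layer whose finer partition is strictly cheaper.

def choose_partition(level_costs):
    """level_costs: dict mapping partition size -> total coding cost
    at that size for the same region."""
    sizes = sorted(level_costs)          # finest first, e.g. [4, 8, 16, 32]
    for finer, coarser in zip(sizes, sizes[1:]):
        if level_costs[finer] < level_costs[coarser]:
            return finer                 # finer partition wins; stop ascending
    return sizes[-1]                     # coarsest partition wins everywhere

print(choose_partition({4: 40.0, 8: 38.0, 16: 35.0, 32: 36.0}))  # 16
```

Because the comparison stops as soon as a finer layer wins, larger partitions are never evaluated for regions already settled at a smaller size, which is the source of the complexity saving.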
In an optional embodiment, in the prediction process of the first prediction unit, the second prediction unit, the third prediction unit or the fourth prediction unit, prediction is performed in an angle prediction mode. Fig. 8 is a flowchart of an intra-prediction encoding method according to another exemplary embodiment of the present application, described by way of example as being applied to an encoder. As shown in fig. 8, the method includes:
step 801, obtaining an image frame to be encoded, where the image frame to be encoded is an image frame encoded in an intra-frame prediction mode.
Optionally, the image frame to be encoded is an I frame in a video image frame, that is, a key frame, and optionally, the key frame is an image frame encoded by an intra-frame prediction encoding method.
Step 802, dividing a largest coding unit LCU in an image frame to be coded in a first division manner and a second division manner, wherein the first division manner is a manner of dividing a first prediction unit in the largest coding unit LCU, the second division manner is a manner of dividing a second prediction unit in the largest coding unit LCU, and the first prediction unit is obtained by dividing in the second prediction unit.
Optionally, the first partition manner and the second partition manner may be performed from preset positions in the LCU, such as: the second partition manner is to partition from the upper left corner of the LCU to obtain 1 second prediction unit with 8 × 8 size, and the first partition manner is to partition in the second prediction unit with 8 × 8 size to obtain 4 first prediction units with 4 × 4 size; or, the second partition manner is to partition 4 second prediction units with 8 × 8 size from the upper left corner of the LCU, and the first partition manner is to partition 4 first prediction units with 4 × 4 size in each second prediction unit with 8 × 8 size.
In step 803, n first angle prediction modes corresponding to the n first prediction units are determined.
Optionally, each coding unit in an H.265/HEVC frame has up to 35 prediction modes. In order to reduce the complexity of prediction mode selection in intra-frame coding, prediction is performed by a Rough Mode Decision (RMD) algorithm: the Sum of Absolute Transformed Differences (SATD, the Hadamard transform of the residual) is first calculated for the 35 angle prediction modes, the k angle prediction modes with the smallest SATD are then determined, and the most probable angle prediction mode of the current prediction unit is determined according to the best angle prediction modes of the adjacent prediction units, where the value of k differs according to the size of the prediction unit, as shown in table 2 below:
Table 2

    PU size:   4×4   8×8   16×16   32×32   64×64
    k:          8     8      3       3       3
From table 2, it can be seen that k is 8 when the PU size is 4×4 or 8×8, and k is 3 when the PU size is 16×16, 32×32 or 64×64.
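The RMD candidate selection from table 2 can be sketched as follows. This is a simplified illustration: the residual blocks are random stand-ins for real prediction residuals, the SATD uses a plain 4 × 4 Hadamard transform, and the function names are assumptions rather than the H.265/HEVC reference implementation:

```python
import numpy as np

# 4x4 Hadamard (Walsh) matrix used for the transformed-difference measure.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd4x4(residual):
    """Sum of absolute transformed differences of a 4x4 residual block."""
    return np.abs(H4 @ residual @ H4.T).sum()

def rough_mode_decision(residuals_per_mode, pu_size):
    """Keep the k angle prediction modes with the smallest SATD,
    with k = 8 for 4x4/8x8 PUs and k = 3 for larger PUs (table 2)."""
    k = 8 if pu_size in (4, 8) else 3
    costs = [(satd4x4(r), mode) for mode, r in enumerate(residuals_per_mode)]
    return [mode for _, mode in sorted(costs)[:k]]

# 35 stand-in residual blocks, one per intra prediction mode.
rng = np.random.default_rng(0)
residuals = [rng.integers(-8, 8, (4, 4)) for _ in range(35)]
print(len(rough_mode_decision(residuals, 4)))   # 8 candidate modes
print(len(rough_mode_decision(residuals, 16)))  # 3 candidate modes
```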
Optionally, each first angle prediction mode corresponds to a first prediction angle. The first prediction angles corresponding to the n first angle prediction modes may all be different from each other, or at least two of them may be the same. Optionally, the n first angle prediction modes correspond to at least two first prediction angles. A first prediction angle range is determined according to the at least two first prediction angles, and a second angle prediction mode of the second prediction unit is determined within the first prediction angle range, so that the second prediction unit is predicted in the second angle prediction mode.
Step 804, for the first prediction unit, performing prediction in the corresponding first angle prediction mode to obtain n first coding costs.
In step 805, a first maximum angle and a first minimum angle of the at least two first predicted angles are determined.
Optionally, there is a dependency relationship between the coding units of adjacent layers, and the dependency relationship represents the association between the prediction angles of the coding units of the two layers; for example, when the prediction angle of a second-layer coding unit is within the prediction angle range of the first-layer coding units, the second-layer coding unit depends on the first-layer coding units. Schematically, refer to table 3 below:
Table 3

(The table, an image in the original, reports the proportion of upper-layer best prediction angles that fall within the angle range selected by the lower layer.)
As can be seen from table 3, the angle correlation between the upper layer and the lower layer is very strong: about 85% of the upper-layer best angles fall within the optimal angle range selected by the lower layer. For example, when the angles of the 4 blocks of the 8x8 layer are 20, 22, 24 and 26, the corresponding angle of the 16x16 layer is within 20 to 26.
Referring to fig. 9, which schematically shows the dependency of the prediction angles among multi-layer coding units: as shown in fig. 9, 4 8 × 8 coding units determine an angle range 910, and the prediction angle of the 16 × 16 coding unit is within the angle range 910; 4 16 × 16 coding units determine an angle range 920, and the prediction angle of the 32 × 32 coding unit is within the angle range 920; 4 32 × 32 coding units determine an angle range 930, and the prediction angle of the 64 × 64 coding unit is within the angle range 930.
In step 806, a second prediction angle corresponding to the second prediction unit is determined between the first minimum angle and the first maximum angle.
Optionally, an angular range between the first minimum angle and the first maximum angle is determined as the first angular range.
Optionally, in combination with the RMD algorithm, SATD calculation is performed on an angle between the first minimum angle and the first maximum angle, and a second prediction angle corresponding to the second prediction unit is determined according to a calculation result.
In step 807, the second prediction unit is predicted in a second angle prediction mode corresponding to the second prediction angle, so as to obtain a second coding cost.
Step 808: when the sum of the n first coding costs is less than the second coding cost, determine that the first partition manner is better than the second partition manner, and use the first partition manner as the partition manner of the coding unit in the second prediction unit corresponding to the second partition manner.
Optionally, when the first partition mode is better than the second partition mode, that is, the small-sized partition mode is better in the coding prediction, the first partition mode is used as the partition mode of the coding unit in the second prediction unit range.
Step 809: when, for each of the m second prediction units, the second coding cost is not greater than the sum of the n corresponding first coding costs, determine that the second partition manner is better than the first partition manner, divide the largest coding unit LCU in a third partition manner, and determine the partition manner of the coding unit according to the second partition manner and the third partition manner.
Optionally, the n second prediction units are predicted to obtain n second coding costs, and the third prediction unit is predicted to obtain a third coding cost; when the sum of the n second coding costs is less than the third coding cost, it is determined that the second partition manner is better than the third partition manner, and the second partition manner is used as the partition manner of the coding unit in the third prediction unit corresponding to the third partition manner. In the prediction process of the n second coding costs, referring to step 805, the second prediction angle of each second prediction unit is determined according to the first prediction angles of the first prediction units corresponding to that second prediction unit, and the second prediction unit is predicted in the second angle prediction mode corresponding to the second prediction angle.
Optionally, the prediction process of the third coding cost is as follows: a second prediction angle range is determined according to the at least two second prediction angles corresponding to the n second prediction units, for example, a second maximum angle and a second minimum angle are determined among the at least two second prediction angles, and the angle range between the second minimum angle and the second maximum angle is determined as the second prediction angle range; a third prediction angle corresponding to the third prediction unit is then determined from the second prediction angle range, and prediction is performed in the third angle prediction mode corresponding to the third prediction angle. Optionally, when the fourth prediction unit needs to be predicted, a third prediction angle range is determined according to the at least two third prediction angles corresponding to the n third prediction units, for example, a third maximum angle and a third minimum angle are determined among the at least two third prediction angles, and the angle range between the third minimum angle and the third maximum angle is determined as the third prediction angle range; a fourth prediction angle corresponding to the fourth prediction unit is then determined from the third prediction angle range, and prediction is performed in the fourth angle prediction mode corresponding to the fourth prediction angle. Optionally, the size of the third prediction unit is smaller than the size of the fourth prediction unit.
Optionally, the sizes of the first prediction unit, the second prediction unit, the third prediction unit and the fourth prediction unit increase progressively according to a set ratio.
That is, prediction of the 4 prediction units of size 4 × 4 is completed first, and 4 prediction angles are obtained, denoted a1, a2, a3 and a4. When a prediction unit of size 8 × 8 is predicted, the second prediction angle is selected within the angle range MIN(a1, a2, a3, a4) to MAX(a1, a2, a3, a4), and prediction is performed based on the selected angle. After the 4 prediction units of size 8 × 8 are predicted, 4 optimal 8 × 8 prediction angles are obtained, denoted b1, b2, b3 and b4; and so on for the prediction units of sizes 16 × 16, 32 × 32 and 64 × 64.
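The bottom-up angle propagation above can be sketched with placeholder angles; the angle indices below follow the 20/22/24/26 example from table 3's discussion, and the function name is an assumption:

```python
# Hypothetical sketch: the prediction angle of the larger unit is searched
# only within [min, max] of the best angles of its four sub-units, instead of
# over all angle prediction modes.

def candidate_angle_range(sub_angles):
    """Return the (min, max) of the sub-unit best prediction angles."""
    return min(sub_angles), max(sub_angles)

a = [20, 22, 24, 26]                  # a1..a4: best angles of the four 4x4 units
lo, hi = candidate_angle_range(a)
candidates = list(range(lo, hi + 1))  # angle indices tried for the 8x8 unit

print((lo, hi))         # (20, 26)
print(len(candidates))  # 7 candidates instead of 33 angular modes
```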
Step 810: encode the largest coding unit LCU in the image frame to be encoded according to the partition manner of the coding unit.
Optionally, each coding unit in the LCU is encoded in turn with each coding unit as a coding range.
In summary, in the intra-frame prediction encoding method provided in this embodiment, the first partition manner and the second partition manner are compared first; when the first partition manner, which has the smaller partition granularity, is better than the second partition manner, the first partition manner is used as the partition manner of the coding unit within the second prediction unit, and when the second partition manner is better than the first partition manner, the second partition manner is further compared with the third partition manner. This avoids the many redundant computations generated when coding proceeds from large coding blocks down to small coding blocks, improves the efficiency of intra-frame prediction, and reduces the complexity of intra-frame prediction.
According to the method provided by the embodiment, when the prediction angle of the second prediction unit is determined, the maximum angle and the minimum angle in the first prediction angles of the n first prediction units are determined, so that the determination efficiency of the prediction angle is improved, and the prediction efficiency of the second prediction unit is improved.
Fig. 10 is a block diagram of an intra prediction encoding apparatus according to an exemplary embodiment of the present application, which is illustrated as being applied to an encoder, and as shown in fig. 10, the apparatus includes an obtaining module 1010, a dividing module 1020, a determining module 1030, and an encoding module 1040;
an obtaining module 1010, configured to obtain an image frame to be encoded, where the image frame to be encoded is an image frame encoded in an intra-frame prediction manner;
a dividing module 1020, configured to divide a largest coding unit LCU in the image frame to be encoded in a first dividing manner and a second dividing manner, where the first dividing manner is a manner of dividing a first prediction unit in the largest coding unit LCU, the second dividing manner is a manner of dividing a second prediction unit in the largest coding unit LCU, the first prediction unit is divided in the second prediction unit, and a size of the first prediction unit is smaller than a size of the second prediction unit;
a determining module 1030, configured to, when the coding cost in the second partition manner is not greater than the coding cost in the first partition manner, partition the largest coding unit LCU in a third partition manner, and determine a partition manner of a coding unit to be coded according to the second partition manner and the third partition manner, where the third partition manner is a manner of partitioning a third prediction unit in the largest coding unit LCU, and the second prediction unit is obtained by partitioning in the third prediction unit;
the encoding module 1040 is configured to encode the largest coding unit LCU in the image frame to be encoded according to the partition manner of the coding unit.
In an optional embodiment, the apparatus further comprises:
the partitioning module 1020 is further configured to use the first partitioning scheme as the partitioning scheme of the coding unit in the second prediction unit corresponding to the second partitioning scheme, when the first partitioning scheme is better than the second partitioning scheme.
In an optional embodiment, the second prediction unit is divided in the first division manner to obtain n first prediction units, where n is a positive integer;
as shown in fig. 11, the apparatus further includes:
a prediction module 1150, configured to predict the n first prediction units to obtain n first coding costs;
the prediction module 1150 is further configured to predict the second prediction unit to obtain a second coding cost;
the determining module 1030 is further configured to determine that the first partition manner is better than the second partition manner when a sum of the n first coding costs is less than the second coding cost.
In an optional embodiment, the prediction module 1150 is further configured to determine n first angle prediction modes corresponding to n first prediction units, where n first angle prediction modes correspond to at least two first prediction angles; for the first prediction unit, predicting in the corresponding first angle prediction mode;
the prediction module 1150 is further configured to determine a first prediction angle range according to the at least two first prediction angles; determining a second angular prediction mode of the second prediction unit within the first prediction angular range; predicting the second prediction unit in the second angular prediction mode.
In an optional embodiment, the prediction module 1150 is further configured to determine a first maximum angle and a first minimum angle of the at least two first predicted angles; determining an angular range between the first minimum angle and the first maximum angle as the first predicted angular range.
In an optional embodiment, the determining module 1030 is further configured to determine that the second partition is better than the first partition when the second coding costs of m second prediction units are not greater than a sum of corresponding n first coding costs, where m second prediction units are arranged consecutively, and m is a positive integer.
In an optional embodiment, the dividing module 1020 is further configured to generate, for the second prediction unit, a first flag bit corresponding to the first division manner when the first division manner is better than the second division manner, where the first flag bit is used to instruct to decode a decoded block obtained by dividing the second prediction unit in the first division manner in a decoding process.
In an optional embodiment, the determining module 1030 is further configured to, when the second partition is better than the third partition, use the second partition as the partition of the coding unit in the third prediction unit corresponding to the third partition; otherwise, when the size of the third prediction unit is smaller than the size of the largest coding unit LCU, determining a fourth partition manner, and determining the partition manner of the coding unit according to the third partition manner and the fourth partition manner, where the fourth partition manner is a manner of partitioning the fourth prediction unit within the largest coding unit LCU, the third prediction unit is partitioned within the fourth prediction unit, and the size of the third prediction unit is smaller than the size of the fourth prediction unit.
In an optional embodiment, the dividing module 1020 is further configured to generate a second flag bit corresponding to the second dividing manner for the third prediction unit when the second dividing manner is better than the third dividing manner, where the second flag bit is used to indicate that a decoded block obtained by dividing the third prediction unit in the second dividing manner is decoded in a decoding process.
In an optional embodiment, the determining module 1030 is further configured to, when the third partition is better than the fourth partition, use the third partition as the partition of the coding unit in the fourth prediction unit corresponding to the fourth partition; otherwise, in the largest coding unit LCU, the fourth division mode is used as the division mode of the coding unit.
In an optional embodiment, the dividing module 1020 is further configured to generate a third flag bit corresponding to the third dividing manner for the fourth prediction unit when the third dividing manner is better than the fourth dividing manner, where the third flag bit is used to instruct, in a decoding process, to decode a decoded block obtained by dividing the fourth prediction unit in the third dividing manner.
In an optional embodiment, the third prediction unit is divided in the second division manner to obtain n second prediction units, where n is a positive integer;
the device, still include:
a prediction module 1150, configured to predict the n second prediction units to obtain n second coding costs; predicting the third prediction unit to obtain a third coding cost;
the determining module 1030 is further configured to determine that the second partition is better than the third partition when the sum of the n second coding costs is less than the third coding cost.
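The cost comparison in the embodiment above can be sketched in a few lines (an illustration only; the function name and the numeric costs are hypothetical stand-ins for the encoder's per-unit rate-distortion costs):

```python
def prefer_smaller_partition(small_costs, large_cost):
    """Decide between two candidate partitions of the same region.

    small_costs: coding costs of the n smaller prediction units that
    tile the region (e.g. four 8x8 units inside one 16x16 unit).
    large_cost: coding cost of the region predicted as one larger unit.

    Returns True when the finer partition wins, i.e. when the sum of
    the smaller units' costs is strictly less than the larger unit's
    cost, mirroring the comparison described in the embodiment.
    """
    return sum(small_costs) < large_cost
```

For example, with four second coding costs of 10, 12, 9 and 11 against a third coding cost of 45, the second (finer) division manner wins because 42 < 45.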
In an optional embodiment, the prediction module 1150 is further configured to determine n second angular prediction modes corresponding to n second prediction units, where the n second angular prediction modes correspond to at least two second prediction angles; for the second prediction unit, predicting in the corresponding second angle prediction mode;
the prediction module 1150 is further configured to determine a second prediction angle range according to the at least two second prediction angles; determining a third angular prediction mode of the third prediction unit within the second prediction angular range; predicting the third prediction unit in the third angular prediction mode.
In an alternative embodiment, the determining module 1030 is further configured to determine a second maximum angle and a second minimum angle of the at least two second predicted angles; determining an angle range between the second minimum angle and the second maximum angle as the second predicted angle range.
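The angle-range pruning described in the two embodiments above can be sketched as follows (a hypothetical illustration: the mode-to-angle table is invented for the example, since the actual mapping is codec-specific, e.g. HEVC defines 33 angular modes):

```python
def prediction_angle_range(sub_unit_angles):
    """Derive the angular search range for a larger prediction unit
    from the angles chosen by its sub-units (at least two angles):
    the range spans the minimum to the maximum of those angles."""
    return min(sub_unit_angles), max(sub_unit_angles)

def candidate_modes_in_range(mode_to_angle, angle_range):
    """Restrict a mode -> angle table to modes whose prediction angle
    falls inside [min, max], reducing the modes that must be tried
    for the larger unit."""
    lo, hi = angle_range
    return [mode for mode, angle in mode_to_angle.items() if lo <= angle <= hi]
```

With sub-unit angles of 10, 26 and 18, the range is (10, 26), so only angular modes whose angles fall in that interval are evaluated for the larger unit.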
In an optional embodiment, the determining module 1030 is further configured to determine that the third division manner is better than the second division manner when the third coding costs of m third prediction units are not greater than the sum of the corresponding n second coding costs, where the m third prediction units are arranged consecutively, and m is a positive integer.
In an alternative embodiment, the sizes of the first prediction unit, the second prediction unit, the third prediction unit and the fourth prediction unit increase progressively according to a set ratio.
In an optional embodiment, the first prediction unit is a prediction unit with a size of 4 × 4, the second prediction unit is a prediction unit with a size of 8 × 8, the third prediction unit is a prediction unit with a size of 16 × 16, the fourth prediction unit is a prediction unit with a size of 32 × 32, n is 4, the second prediction unit is divided by the first division method to obtain 4 first prediction units arranged in a square, the third prediction unit is divided by the second division method to obtain 4 second prediction units arranged in a square, and the fourth prediction unit is divided by the third division method to obtain 4 third prediction units arranged in a square.
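Under the 4 × 4 / 8 × 8 / 16 × 16 / 32 × 32 hierarchy above, the bottom-up early-out can be sketched as a single loop (a deliberately simplified model: the `costs` map aggregates the total cost of covering a region at each unit size, whereas the embodiments make this decision per prediction unit):

```python
def choose_unit_size(costs, min_size=4, max_size=32):
    """Walk from the smallest unit size upward. If the finer level is
    cheaper than the next coarser level, it is chosen and the larger
    levels are never evaluated -- the early-out that avoids redundant
    computation for large coding blocks. Otherwise, double the size
    and repeat until the LCU size is reached.

    costs: hypothetical map of size -> total cost of covering the
    region with units of that size.
    """
    size = min_size
    while size < max_size:
        if costs[size] < costs[size * 2]:
            return size
        size *= 2
    return max_size
```

For instance, with costs {4: 100, 8: 90, 16: 80, 32: 85}, the 4 × 4 and 8 × 8 levels lose to their coarser neighbors, but 16 × 16 beats 32 × 32, so 16 is chosen and no further comparison is made.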
In summary, in the intra-frame prediction encoding apparatus provided in this embodiment, the first division manner and the second division manner are compared first; when the first division manner, which has the smaller partition granularity, is better than the second division manner, the first division manner is used as the division manner of the coding unit in the second prediction unit, and when the second division manner is better than the first division manner, the comparison continues between the second division manner and the third division manner. This avoids the redundant computation generated when small coding blocks are encoded within a large coding block, improves the efficiency of intra-frame prediction, and reduces the complexity of intra-frame prediction.
It should be noted that: the intra-frame prediction encoding apparatus provided in the foregoing embodiment is only illustrated by the division of the functional modules, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the intra-frame prediction encoding apparatus provided in the above embodiments and the intra-frame prediction encoding method embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments, and are not described herein again.
Fig. 12 is a block diagram illustrating a terminal 1200 according to an exemplary embodiment of the present invention. The terminal 1200 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1202 is used to store at least one instruction for execution by processor 1201 to implement the intra-prediction encoding methods provided by method embodiments herein.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, touch display 1205, camera 1206, audio circuitry 1207, positioning component 1208, and power source 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, the successive generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be a single display 1205, provided on the front panel of the terminal 1200; in other embodiments, there may be at least two displays 1205, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 1200. Even further, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., an irregularly shaped screen. The display screen 1205 may be an LCD (Liquid Crystal Display) panel or an OLED (Organic Light-Emitting Diode) panel.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional diaphragm loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is configured to locate a current geographic location of the terminal 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1209 is used to provide power to various components within the terminal 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 can detect magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1213 may be disposed on a side bezel of the terminal 1200 and/or a lower layer of the touch display 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, a user's grip signal on the terminal 1200 can be detected, and the processor 1201 performs left-/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the touch display 1205, the processor 1201 controls the operability controls on the UI according to the user's pressure operation on the touch display 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1214 may be provided on the front, back, or side of the terminal 1200. When a physical button or vendor Logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical button or vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display 1205 is decreased. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 based on the ambient light intensity collected by the optical sensor 1215.
A proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the terminal 1200. The proximity sensor 1216 is used to collect the distance between the user and the front surface of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually decreases, the processor 1201 controls the touch display 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually increases, the processor 1201 controls the touch display 1205 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (18)

1. A method for encoding intra prediction, the method comprising:
acquiring an image frame to be encoded, wherein the image frame to be encoded is an image frame encoded in an intra-frame prediction mode;
dividing a largest coding unit LCU in the image frame to be coded in a first dividing mode and a second dividing mode, wherein the first dividing mode is a mode of dividing a first prediction unit in the largest coding unit LCU, the second dividing mode is a mode of dividing a second prediction unit in the largest coding unit LCU, the second prediction unit is divided into n first prediction units in the first dividing mode, and n is a positive integer;
determining n first angle prediction modes corresponding to n first prediction units, the n first angle prediction modes corresponding to at least two first prediction angles;
for the first prediction unit, predicting in the corresponding first angle prediction mode to obtain n first coding costs;
determining a first prediction angle range according to the at least two first prediction angles;
determining a second angular prediction mode of the second prediction unit within the first prediction angular range;
predicting the second prediction unit in the second angle prediction mode to obtain a second coding cost;
when the second coding cost is not greater than the sum of the n first coding costs, dividing the largest coding unit LCU in a third dividing mode, and determining a dividing mode of a coding unit to be coded according to the second dividing mode and the third dividing mode, wherein the third dividing mode is a mode of dividing a third prediction unit in the largest coding unit LCU, and the second prediction unit is obtained by dividing in the third prediction unit;
and coding the largest coding unit LCU in the image frame to be coded according to the division mode of the coding units.
2. The method of claim 1, further comprising:
and when the first partition mode is better than the second partition mode, taking the first partition mode as the partition mode of the coding unit in the second prediction unit corresponding to the second partition mode.
3. The method according to claim 2, wherein determining a first prediction angle range from the at least two first prediction angles comprises:
determining a first maximum angle and a first minimum angle of the at least two first predicted angles;
determining an angular range between the first minimum angle and the first maximum angle as the first predicted angular range.
4. The method according to claim 2, wherein before said dividing the LCU in a third division manner when the second coding cost is not greater than the sum of n first coding costs, further comprising:
and when the second coding costs of the m second prediction units are not more than the sum of the corresponding n first coding costs, determining that the second division mode is better than the first division mode, wherein the m second prediction units are continuously arranged, and m is a positive integer.
5. The method according to claim 2, wherein, when the first partition mode is better than the second partition mode, the taking the first partition mode as the partition mode of the coding unit in the second prediction unit corresponding to the second partition mode comprises:
when the first partition mode is better than the second partition mode, generating a first flag bit corresponding to the first partition mode for the second prediction unit, wherein the first flag bit is used for indicating that the decoded blocks obtained by the second prediction unit through partition in the first partition mode are decoded in the decoding process.
6. The method according to claim 1, wherein the determining the partition mode of the coding unit to be coded according to the second partition mode and the third partition mode comprises:
when the second partition mode is better than the third partition mode, the second partition mode is used as the partition mode of the coding unit in the third prediction unit corresponding to the third partition mode;
otherwise, when the size of the third prediction unit is smaller than the size of the largest coding unit LCU, determining a fourth partition manner, and determining the partition manner of the coding unit according to the third partition manner and the fourth partition manner, where the fourth partition manner is a manner of partitioning the fourth prediction unit within the largest coding unit LCU, the third prediction unit is partitioned within the fourth prediction unit, and the size of the third prediction unit is smaller than the size of the fourth prediction unit.
7. The method according to claim 6, wherein, when the second partition mode is better than the third partition mode, the taking the second partition mode as the partition mode of the coding unit in the third prediction unit corresponding to the third partition mode comprises:
and when the second division mode is better than the third division mode, generating a second flag bit corresponding to the second division mode for the third prediction unit, wherein the second flag bit is used for indicating that the decoded block obtained by dividing the third prediction unit in the second division mode is decoded in the decoding process.
8. The method according to claim 6, wherein the determining the partition mode of the coding unit according to the third partition mode and the fourth partition mode comprises:
when the third partition mode is better than the fourth partition mode, the third partition mode is used as the partition mode of the coding unit in the fourth prediction unit corresponding to the fourth partition mode;
otherwise, in the largest coding unit LCU, the fourth division mode is used as the division mode of the coding unit.
9. The method according to claim 8, wherein, when the third partition mode is better than the fourth partition mode, the taking the third partition mode as the partition mode of the coding unit in the fourth prediction unit corresponding to the fourth partition mode comprises:
when the third division mode is better than the fourth division mode, generating a third flag bit corresponding to the third division mode for the fourth prediction unit, wherein the third flag bit is used for indicating that the decoded blocks obtained by dividing the fourth prediction unit in the third division mode are decoded in the decoding process.
10. The method according to claim 6, wherein n second prediction units are obtained by dividing the third prediction unit in the second division manner, where n is a positive integer;
when the second partition is better than the third partition, before the second partition is used as the partition of the coding unit in a third prediction unit corresponding to the third partition, the method further includes:
predicting the n second prediction units to obtain n second coding costs;
predicting the third prediction unit to obtain a third coding cost;
and when the sum of the n second coding costs is less than the third coding cost, determining that the second division mode is superior to the third division mode.
11. The method of claim 10, wherein the predicting the n second prediction units comprises:
determining n second angle prediction modes corresponding to the n second prediction units, wherein the n second angle prediction modes correspond to at least two second prediction angles;
for the second prediction unit, predicting in the corresponding second angle prediction mode;
the predicting the third prediction unit includes:
determining a second prediction angle range according to the at least two second prediction angles;
determining a third angular prediction mode of the third prediction unit within the second prediction angular range;
predicting the third prediction unit in the third angular prediction mode.
12. The method according to claim 11, wherein determining a second prediction angle range from the at least two second prediction angles comprises:
determining a second maximum angle and a second minimum angle of the at least two second predicted angles;
determining an angle range between the second minimum angle and the second maximum angle as the second predicted angle range.
13. The method according to claim 6, wherein before determining the fourth partition mode when the size of the third prediction unit is smaller than the size of the largest coding unit LCU, the method further comprises:
and when the third coding costs of the m third prediction units are not more than the sum of the corresponding n second coding costs, determining that the third division mode is superior to the second division mode, wherein the m third prediction units are continuously arranged, and m is a positive integer.
14. The method according to any one of claims 6 to 13, wherein the sizes of the first prediction unit, the second prediction unit, the third prediction unit, and the fourth prediction unit increase progressively according to a set ratio.
15. The method of claim 14,
the first prediction unit is a prediction unit with a size of 4 × 4, the second prediction unit is a prediction unit with a size of 8 × 8, the third prediction unit is a prediction unit with a size of 16 × 16, the fourth prediction unit is a prediction unit with a size of 32 × 32, n takes a value of 4, the second prediction unit is divided in the first division manner to obtain 4 first prediction units arranged in a square, the third prediction unit is divided in the second division manner to obtain 4 second prediction units arranged in a square, and the fourth prediction unit is divided in the third division manner to obtain 4 third prediction units arranged in a square.
16. An apparatus for encoding intra prediction, the apparatus comprising:
the device comprises an acquisition module, a decoding module and a decoding module, wherein the acquisition module is used for acquiring an image frame to be encoded, and the image frame to be encoded is an image frame encoded in an intra-frame prediction mode;
the dividing module is configured to divide a largest coding unit LCU in the image frame to be encoded in a first dividing manner and a second dividing manner, where the first dividing manner is a manner of dividing a first prediction unit in the largest coding unit LCU, the second dividing manner is a manner of dividing a second prediction unit in the largest coding unit LCU, the second prediction unit is divided in the first dividing manner to obtain n first prediction units, and n is a positive integer;
a prediction module, configured to determine n first angle prediction modes corresponding to n first prediction units, where the n first angle prediction modes correspond to at least two first prediction angles; for the first prediction unit, predicting in the corresponding first angle prediction mode to obtain n first coding costs; determining a first prediction angle range according to the at least two first prediction angles; determining a second angular prediction mode of the second prediction unit within the first prediction angular range; predicting the second prediction unit in the second angle prediction mode to obtain a second coding cost;
a determining module, configured to, when the second coding cost is not greater than a sum of n first coding costs, divide the largest coding unit LCU in a third division manner, and determine a division manner of a coding unit to be coded according to the second division manner and the third division manner, where the third division manner is a manner of dividing a third prediction unit in the largest coding unit LCU, and the second prediction unit is obtained by dividing in the third prediction unit;
and the coding module is used for coding the largest coding unit LCU in the image frame to be coded according to the dividing mode of the coding unit.
17. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement an intra-predicted encoding method as claimed in any one of claims 1 to 15.
18. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement an encoding method for intra prediction according to any one of claims 1 to 15.
CN201910926492.XA 2019-09-27 2019-09-27 Method, device and equipment for coding intra-frame prediction and readable storage medium Active CN110572679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910926492.XA CN110572679B (en) 2019-09-27 2019-09-27 Method, device and equipment for coding intra-frame prediction and readable storage medium

Publications (2)

Publication Number Publication Date
CN110572679A CN110572679A (en) 2019-12-13
CN110572679B true CN110572679B (en) 2022-04-26

Family

ID=68783008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910926492.XA Active CN110572679B (en) 2019-09-27 2019-09-27 Method, device and equipment for coding intra-frame prediction and readable storage medium

Country Status (1)

Country Link
CN (1) CN110572679B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111294591B (en) * 2020-05-09 2020-09-25 腾讯科技(深圳)有限公司 Video information processing method, multimedia information processing method and device
CN113242429B (en) * 2021-05-11 2023-12-05 杭州网易智企科技有限公司 Video coding mode decision method, device, equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104023241B (en) * 2014-05-29 2017-08-04 华为技术有限公司 The method for video coding and video coding apparatus of intraframe predictive coding
CN105812797B (en) * 2014-12-31 2019-03-26 浙江大华技术股份有限公司 A kind of coding unit selection method and device
CN105812795B (en) * 2014-12-31 2019-02-12 浙江大华技术股份有限公司 A kind of determination method and apparatus of the coding mode of maximum coding unit
US10448011B2 (en) * 2016-03-18 2019-10-15 Mediatek Inc. Method and apparatus of intra prediction in image and video processing
CN110213576B (en) * 2018-05-03 2023-02-28 腾讯科技(深圳)有限公司 Video encoding method, video encoding device, electronic device, and storage medium
CN110198443B (en) * 2018-05-10 2022-09-13 腾讯科技(深圳)有限公司 Video frame coding unit dividing method and device, storage medium and electronic device
CN110234008B (en) * 2019-03-11 2020-06-16 杭州海康威视数字技术股份有限公司 Encoding method, decoding method and device


Similar Documents

Publication Publication Date Title
CN112118450B (en) Method and device for decoding and encoding prediction mode
CN108391127B (en) Video encoding method, device, storage medium and equipment
CN109168032B (en) Video data processing method, terminal, server and storage medium
CN111935542A (en) Video processing method, video playing method, device, equipment and storage medium
CN110572679B (en) Method, device and equipment for coding intra-frame prediction and readable storage medium
CN110049326B (en) Video coding method and device and storage medium
CN113891074B (en) Video encoding method and apparatus, electronic apparatus, and computer-readable storage medium
CN110177275B (en) Video encoding method and apparatus, and storage medium
CN114302137B (en) Time domain filtering method and device for video, storage medium and electronic equipment
CN110087077A (en) Method for video coding and device, storage medium
CN111770339B (en) Video encoding method, device, equipment and storage medium
CN112437304B (en) Video decoding method, encoding method, device, equipment and readable storage medium
CN114268797B (en) Method, device, storage medium and electronic equipment for time domain filtering of video
CN114900704A (en) Video processing method and device, electronic equipment and storage medium
CN115811615A (en) Screen video coding method and device, computer equipment and storage medium
CN114360555A (en) Audio processing method and device, electronic equipment and storage medium
CN116074512A (en) Video encoding method, video encoding device, electronic equipment and storage medium
CN113938689A (en) Quantization parameter determination method and device
CN113079372A (en) Method, device and equipment for coding inter-frame prediction and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018849

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant