CN112449187B - Video decoding method, video encoding method, apparatus, medium, and electronic device - Google Patents

Video decoding method, video encoding method, apparatus, medium, and electronic device

Info

Publication number
CN112449187B
Authority
CN
China
Prior art keywords
scanning
scanning mode
syntax element
target
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910802836.6A
Other languages
Chinese (zh)
Other versions
CN112449187A (en)
Inventor
崔静
马思伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Tencent Technology Shenzhen Co Ltd
Original Assignee
Peking University
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University, Tencent Technology Shenzhen Co Ltd
Priority to CN201910802836.6A
Publication of CN112449187A
Application granted
Publication of CN112449187B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/129 Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/18 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a set of transform coefficients
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Abstract

Embodiments of the present application provide a video decoding method, a video encoding method, an apparatus, a medium, and an electronic device. The video decoding method includes: acquiring encoded data corresponding to a video frame to be decoded, where the encoded data include an encoding result of a target syntax element, and the target syntax element is used to indicate a scanning mode of a video frame coding block; performing first decoding processing based on the encoding result of the target syntax element to obtain the value of the target syntax element; determining the scanning mode according to the value of the target syntax element; and performing second decoding processing on the encoded data based on the scanning mode. With the technical solutions of the embodiments of the present application, the most appropriate scanning mode can be selected for encoding and decoding according to the coefficient distribution characteristics of the video frame coding block, which improves the encoding efficiency and decoding efficiency for the video frame coding block.

Description

Video decoding method, video encoding method, apparatus, medium, and electronic device
Technical Field
The present application relates to the field of computer and communication technologies, and in particular, to a video decoding method, an encoding method, an apparatus, a medium, and an electronic device.
Background
In video coding standards, coefficient coding encodes the coefficients of an entire coding block one by one in a certain scanning order, and the scanning order determines the range of coefficients to be coded. However, none of the scanning methods proposed in the related art can efficiently encode coding blocks with diverse coefficient distribution characteristics, which in turn limits the encoding efficiency and decoding efficiency of those coding blocks.
Disclosure of Invention
Embodiments of the present application provide a video decoding method, a video encoding method, an apparatus, a medium, and an electronic device, so that, at least to some extent, the most appropriate scanning mode can be selected according to the coefficient distribution characteristics of a video frame coding block for encoding and decoding processing, thereby improving the encoding efficiency and decoding efficiency of the video frame coding block.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a video decoding method including: acquiring coded data corresponding to a video frame to be decoded, wherein the coded data comprise a coding result of a target syntax element, and the target syntax element is used for indicating a scanning mode of a video frame coding block; performing first decoding processing based on the encoding result of the target syntax element to obtain the value of the target syntax element; determining the scanning mode according to the value of the target syntax element; and performing second decoding processing on the encoded data based on the scanning mode.
According to an aspect of an embodiment of the present application, there is provided a video encoding method, including: respectively scanning the video frame coding blocks through a plurality of scanning modes to determine the coefficient coding number corresponding to each scanning mode; determining a target scanning mode with the minimum number of corresponding coefficient codes according to the number of the coefficient codes corresponding to each scanning mode; and coding the video frame coding block based on the target scanning mode, and coding a syntax element for indicating the target scanning mode.
According to an aspect of an embodiment of the present application, there is provided a video decoding apparatus including: an acquisition unit, configured to acquire encoded data corresponding to a video frame to be decoded, where the encoded data include an encoding result of a target syntax element, and the target syntax element is used to indicate a scanning mode of a video frame coding block; a first decoding unit, configured to perform first decoding processing based on the encoding result of the target syntax element to obtain the value of the target syntax element; a determining unit, configured to determine the scanning mode according to the value of the target syntax element; and a second decoding unit, configured to perform second decoding processing on the encoded data based on the scanning mode.
In some embodiments of the present application, based on the foregoing scheme, the first decoding unit is configured to: and decoding the target syntax element according to the context of the target syntax element to obtain the value of the target syntax element.
In some embodiments of the present application, based on the foregoing scheme, the first decoding unit is further configured to: determining a context of the target syntax element from information of a neighboring block of the video frame encoding block, the information of the neighboring block comprising at least one of: the scanning mode of the adjacent blocks, and the number of nonzero coefficients contained in the adjacent blocks.
In some embodiments of the present application, based on the foregoing scheme, the target syntax element comprises 6 contexts, and the information of the neighboring block comprises a scanning manner of the neighboring block;
the first decoding unit is configured to: determining a context of the target syntax element based on the following formula in accordance with information of neighboring blocks of the video frame encoding block:
ctx_index=offset+(ch_type==Luma)?0:3
wherein ctx_index represents the index number corresponding to the context of the target syntax element; ch_type represents the scanning channel type of the video frame coding block; Luma denotes the luma (luminance) channel; and offset is an integer variable determined from the information of the neighboring blocks.
In some embodiments of the present application, based on the foregoing scheme, the value of offset is determined by the following formula:
offset=(scan_index_L==0&&scan_index_A==0)?0:
(scan_index_L==1&&scan_index_A==1)?1:2
wherein scan_index_L represents the scanning mode of the neighboring block located to the left of the video frame coding block, and scan_index_A represents the scanning mode of the neighboring block located above the video frame coding block.
In some embodiments of the present application, based on the foregoing scheme, the scanning mode indicated by the target syntax element is the scanning mode with the smallest number of corresponding coefficient codes among a plurality of scanning modes, wherein, before encoding, the video frame coding block is scanned with each of the plurality of scanning modes to determine the number of coefficient codes corresponding to each scanning mode.
In some embodiments of the present application, based on the foregoing scheme, the determining unit is configured to: if the value of the target syntax element is a first value, determining that the scanning mode is a Zig-zag scanning mode; and if the value of the target syntax element is a second value, determining that the scanning mode is a scanning mode based on an effective region.
In some embodiments of the present application, based on the foregoing scheme, the second decoding unit is configured to: and if the scanning mode is a Zig-zag scanning mode, sequentially decoding the coefficients in the video frame coding block according to the scanning sequence of the Zig-zag scanning mode.
In some embodiments of the present application, based on the foregoing scheme, the second decoding unit is configured to: and if the scanning mode is based on the effective area, sequentially decoding the coefficients in the video frame coding block according to the scanning sequence of the scanning mode based on the effective area.
According to an aspect of an embodiment of the present application, there is provided a video encoding apparatus including: the scanning unit is used for respectively scanning the video frame coding blocks through a plurality of scanning modes to determine the coefficient coding number corresponding to each scanning mode; the determining unit is used for determining a target scanning mode with the minimum number of corresponding coefficient codes according to the number of the coefficient codes corresponding to each scanning mode; and the coding unit is used for coding the video frame coding block based on the target scanning mode and coding a syntax element for indicating the target scanning mode.
In some embodiments of the present application, based on the foregoing scheme, the plurality of scanning modes include: a Zig-zag scanning mode and an effective area-based scanning mode; the encoding unit is configured to: if the target scanning mode is a Zig-zag scanning mode, sequentially coding the coefficients in the video frame coding block according to the scanning sequence of the Zig-zag scanning mode; and if the target scanning mode is an effective area-based scanning mode, sequentially encoding the coefficients in the video frame encoding blocks according to the scanning sequence of the effective area-based scanning mode.
According to an aspect of embodiments of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a video decoding method or a video encoding method as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a video decoding method or a video encoding method as described in the above embodiments.
In the technical solutions provided in some embodiments of the present application, a video frame coding block is scanned with each of a plurality of scanning modes to determine the number of coefficient codes corresponding to each scanning mode, the target scanning mode with the smallest number of coefficient codes is determined accordingly, the video frame coding block is then encoded based on the target scanning mode, and a syntax element indicating the target scanning mode is encoded. In this way, the most suitable scanning mode can be selected for the coefficient distribution characteristics of the video frame coding block, the selected scanning mode reduces the number of coefficients to be coded, and the encoding efficiency is effectively improved.
In the technical solutions provided in some embodiments of the present application, after encoded data corresponding to a video frame to be decoded are obtained, first decoding processing is performed based on the encoding result of a target syntax element contained in the encoded data to obtain the value of the target syntax element, which indicates the scanning mode of a video frame coding block; the scanning mode is determined according to the value of the target syntax element, and second decoding processing is performed on the encoded data according to the determined scanning mode. In this way, the decoding end can determine, from the value of the target syntax element, the scanning mode adopted by the encoding end during encoding and use the same scanning mode as the encoding end, so that the most appropriate scanning mode can be selected for the coefficient distribution characteristics of the video frame coding block for encoding and decoding, which helps improve the decoding efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
FIG. 2 is a schematic diagram illustrating placement of video encoding and decoding devices in a streaming environment;
FIG. 3 shows a flow diagram of a video encoding method according to an embodiment of the present application;
FIG. 4 shows a schematic view of the scanning process of the Zig-zag scanning mode;
FIG. 5 is a schematic diagram of the scanning process of the effective-area-based scanning mode;
FIG. 6 is a schematic diagram showing a comparison of the scanning results of the Zig-zag scanning mode and the effective-area-based scanning mode in one embodiment;
FIG. 7 is a schematic diagram showing a comparison of the scanning results of the Zig-zag scanning mode and the effective-area-based scanning mode in another embodiment;
FIG. 8 shows a flow diagram of a video decoding method according to an embodiment of the present application;
FIG. 9 shows a schematic diagram of neighboring blocks of a current coding block according to one embodiment of the present application;
FIG. 10 shows a block diagram of a video decoding apparatus according to an embodiment of the present application;
FIG. 11 shows a block diagram of a video encoding apparatus according to an embodiment of the present application;
FIG. 12 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The technical terms in the present application will first be briefly described below:
Syntax element: information that needs to be encoded and transmitted in the code stream in video coding.
Entropy coding: a data compression method and a key technology for achieving information compression in the video coding process; entropy coding is carried out on the basis of the entropy principle and loses no information in the coding process.
Context (buffer): in the process of entropy coding, each syntax element reads and writes information in the corresponding buffer for calculation and updating.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture 100 includes a plurality of end devices that may communicate with each other over, for example, a network 150. For example, the system architecture 100 may include a first end device 110 and a second end device 120 interconnected by a network 150. In the embodiment of fig. 1, the first terminal device 110 and the second terminal device 120 perform unidirectional data transmission. For example, the first end device 110 may encode video data (e.g., a video picture stream captured by the end device 110) for transmission over the network 150 to the second end device 120. The encoded video data is transmitted in the form of one or more encoded video streams. The second terminal device 120 may receive the encoded video data from the network 150, decode the encoded video data to restore the video data, and display a video picture according to the restored video data.
In one embodiment of the present application, system architecture 100 may include a third end device 130 and a fourth end device 140 that perform bi-directional transmission of encoded video data, which may occur, for example, during a video conference. For bi-directional data transmission, each of third end device 130 and fourth end device 140 may encode video data (e.g., a stream of video pictures captured by the end device) for transmission over network 150 to the other of third end device 130 and fourth end device 140. Each of third terminal device 130 and fourth terminal device 140 may also receive encoded video data transmitted by the other of third terminal device 130 and fourth terminal device 140, and may decode the encoded video data to recover the video data, and may display video pictures on an accessible display device according to the recovered video data.
In the embodiment of fig. 1, the first terminal device 110, the second terminal device 120, the third terminal device 130, and the fourth terminal device 140 may be a server, a personal computer, and a smart phone, but the principles disclosed herein may not be limited thereto. Embodiments disclosed herein are applicable to laptop computers, tablet computers, media players, and/or dedicated video conferencing equipment. Network 150 represents any number of networks that communicate encoded video data between first end device 110, second end device 120, third end device 130, and fourth end device 140, including, for example, wired and/or wireless communication networks. The communication network 150 may exchange data in circuit-switched and/or packet-switched channels. The network may include a telecommunications network, a local area network, a wide area network, and/or the internet. For purposes of this application, the architecture and topology of the network 150 may be immaterial to the operation of the present disclosure, unless explained below.
In one embodiment of the present application, fig. 2 illustrates the placement of video encoding devices and video decoding devices in a streaming environment. The subject matter disclosed herein is equally applicable to other video-enabled applications including, for example, video conferencing, digital TV, storing compressed video on digital media including CDs, DVDs, memory sticks, and the like.
The streaming system may include an acquisition subsystem 213, and the acquisition subsystem 213 may include a video source 201, such as a digital camera, that creates an uncompressed video picture stream 202. In an embodiment, the video picture stream 202 includes samples taken by a digital camera. The video picture stream 202 is depicted as a thick line to emphasize its high data volume compared to the encoded video data 204 (or the encoded video code stream 204). The video picture stream 202 can be processed by an electronic device 220, which comprises a video encoding device 203 coupled to the video source 201. The video encoding device 203 may comprise hardware, software, or a combination of hardware and software to implement or embody aspects of the disclosed subject matter as described in greater detail below. The encoded video data 204 (or encoded video code stream 204) is depicted as a thin line compared to the video picture stream 202 to emphasize its lower data volume, and may be stored on the streaming server 205 for future use. One or more streaming client subsystems, such as client subsystem 206 and client subsystem 208 in fig. 2, may access streaming server 205 to retrieve copies 207 and 209 of encoded video data 204. Client subsystem 206 may include, for example, video decoding device 210 in electronic device 230. Video decoding device 210 decodes the incoming copy 207 of the encoded video data and generates an output video picture stream 211 that may be presented on a display 212 (e.g., a display screen) or another presentation device. In some streaming systems, encoded video data 204, video data 207, and video data 209 (e.g., video streams) may be encoded according to certain video encoding/compression standards. Examples of such standards include ITU-T H.265. In an embodiment, a video coding standard under development is informally known as next-generation video coding, i.e., Versatile Video Coding (VVC), and the present application may be used in the context of the VVC standard.
It should be noted that electronic devices 220 and 230 may include other components not shown in the figures. For example, electronic device 220 may include a video decoding device not shown, and electronic device 230 may also include a video encoding device not shown.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 3 shows a flow chart of a video encoding method according to an embodiment of the present application, which may be performed by the video encoding apparatus described in the foregoing embodiments. Referring to fig. 3, the video encoding method at least includes steps S310 to S330, which are described in detail as follows:
in step S310, the video frame coding blocks are scanned by a plurality of scanning methods, respectively, to determine the number of coefficient codes corresponding to each of the scanning methods.
In an embodiment of the present application, the plurality of scanning modes may include: a Zig-zag scanning mode and an effective-area-based scanning mode. As shown in fig. 4, the Zig-zag scanning mode traverses the coding block in a zigzag order, and the coefficients in the coding block are encoded one by one in the manner shown in fig. 4. As shown in fig. 5, the effective-area-based scanning mode first determines the effective scanning area by pos_x and pos_y (the gray area shown in fig. 5), scans that area in an inverse zig-zag manner, and then encodes the coefficients one by one in this order.
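For illustration only, the two scan orders described above could be generated as in the following sketch; the function names, the diagonal direction convention, and the exact inverse-zig-zag traversal of the effective area are assumptions of this sketch rather than the definition used by any particular standard.

    #include <utility>
    #include <vector>

    // Zig-zag order for a width x height block: walk the anti-diagonals
    // (x + y = d) and reverse the direction on every other diagonal.
    std::vector<std::pair<int, int>> zigZagOrder(int width, int height) {
        std::vector<std::pair<int, int>> order;
        for (int d = 0; d < width + height - 1; ++d) {
            std::vector<std::pair<int, int>> diag;
            for (int y = 0; y < height; ++y) {
                int x = d - y;
                if (x >= 0 && x < width) diag.emplace_back(x, y);
            }
            if (d % 2 == 0) {
                order.insert(order.end(), diag.rbegin(), diag.rend());
            } else {
                order.insert(order.end(), diag.begin(), diag.end());
            }
        }
        return order;
    }

    // Effective-area-based order: keep only the positions inside the rectangle
    // bounded by (pos_x, pos_y) and traverse them in inverse zig-zag order.
    std::vector<std::pair<int, int>> regionOrder(int width, int height,
                                                 int pos_x, int pos_y) {
        std::vector<std::pair<int, int>> order;
        auto zz = zigZagOrder(width, height);
        for (auto it = zz.rbegin(); it != zz.rend(); ++it) {
            if (it->first <= pos_x && it->second <= pos_y) order.push_back(*it);
        }
        return order;
    }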
When coding blocks with different coefficient distribution characteristics are scanned with different scanning modes, the resulting numbers of coefficients to be coded differ. For example, for the coding block shown in fig. 6, the number of coefficient codes obtained with the Zig-zag scanning mode is obviously greater than that obtained with the effective-area-based scanning mode. For the coding block shown in fig. 7, the number of coefficient codes obtained with the Zig-zag scanning mode is the same as that obtained with the effective-area-based scanning mode, but the effective-area-based scanning mode additionally needs to signal the area identifiers pos_x and pos_y. It can be seen that, for coding blocks with different coefficient distribution characteristics, different scanning modes yield different numbers of coefficient codes.
In step S320, a target scanning mode with the minimum number of corresponding coefficient codes is determined according to the number of coefficient codes corresponding to each scanning mode.
In step S330, the video frame coding block is coded based on the target scanning mode, and a syntax element indicating the target scanning mode is coded.
In an embodiment of the present application, if the target scanning mode is a Zig-zag scanning mode, the coefficients in the video frame coding blocks may be sequentially encoded according to the scanning order of the Zig-zag scanning mode; if the target scanning mode is the scanning mode based on the effective area, the coefficients in the video frame coding blocks can be sequentially coded according to the scanning sequence of the scanning mode based on the effective area.
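A minimal sketch of the selection in steps S310 to S330 is given below; the helper names are assumptions, and the simplifying rule of counting coefficients up to the last non-zero position, as well as the tie-breaking in favour of Zig-zag, mirror the scan_index derivation described later (scan_index is 0 when maxZigZag ≤ maxRegionBased).

    #include <vector>

    // Number of coefficients that would have to be encoded under a given scan
    // order: here taken as the position index of the last non-zero coefficient
    // plus one (a simplifying assumption of this sketch).
    int countCoefficients(const std::vector<int>& coeffInScanOrder) {
        int last = 0;
        for (int i = 0; i < static_cast<int>(coeffInScanOrder.size()); ++i) {
            if (coeffInScanOrder[i] != 0) last = i + 1;
        }
        return last;
    }

    // Pick the target scanning mode: 0 = Zig-zag, 1 = effective-area-based.
    int selectTargetScan(const std::vector<int>& coeffZigZag,
                         const std::vector<int>& coeffRegion) {
        int maxZigZag      = countCoefficients(coeffZigZag);
        int maxRegionBased = countCoefficients(coeffRegion);
        return (maxZigZag <= maxRegionBased) ? 0 : 1;   // ties favour Zig-zag
    }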
The technical scheme of the embodiment shown in fig. 3 enables the most suitable scanning mode to be selected for the coefficient distribution characteristics of the video frame coding blocks for coding processing, and further the number of coefficient codes can be reduced through the selected scanning mode, thereby effectively improving the coding efficiency.
Fig. 8 shows a flowchart of a video decoding method according to an embodiment of the present application, which can be performed by the video decoding apparatus described in the foregoing embodiments. Referring to fig. 8, the video decoding method at least includes steps S810 to S840, which are described in detail as follows:
in step S810, encoded data corresponding to a video frame to be decoded is obtained, where the encoded data includes an encoding result of a target syntax element, and the target syntax element is used to indicate a scanning manner of an encoding block of the video frame.
In one embodiment of the present application, the video frame to be decoded may be a video frame or the like that needs to be transmitted during a video conference. Optionally, the scanning mode indicated by the target syntax element is the scanning mode with the minimum number of corresponding coefficient codes in the multiple scanning modes, so that the number of coefficients to be coded and decoded can be reduced, and the coding efficiency and the decoding efficiency are improved.
In an embodiment of the present application, before encoding, the encoding end may scan the video frame coding block with each of a plurality of scanning modes to determine the number of coefficient codes corresponding to each scanning mode, and then select the scanning mode with the smallest number of coefficient codes.
With continued reference to fig. 8, in step S820, a first decoding process is performed based on the encoding result of the target syntax element, so as to obtain the value of the target syntax element.
In an embodiment of the present application, the first decoding process may be decoding the target syntax element according to a context of the target syntax element to obtain a value of the target syntax element. In embodiments of the present application, the context of a target syntax element may be determined from information of neighboring blocks of a video frame encoding block. Wherein the information of the neighboring block includes at least one of: the scanning mode of the adjacent blocks and the number of nonzero coefficients contained in the adjacent blocks.
In one embodiment of the application, taking the example that the target syntax element includes 6 contexts and the information of the neighboring blocks includes the scanning manner of the neighboring blocks, the context of the target syntax element can be determined based on the following formula:
ctx_index=offset+(ch_type==Luma)?0:3
wherein ctx_index represents the index number corresponding to the context of the target syntax element; ch_type represents the scanning channel type of the video frame coding block; Luma denotes the luma (luminance) channel; and offset is an integer variable determined from the information of the neighboring blocks.
In one embodiment of the application, the value of offset may be determined by the following equation:
offset=(scan_index_L==0&&scan_index_A==0)?0:
(scan_index_L==1&&scan_index_A==1)?1:2
wherein scan_index_L represents the scanning mode of the neighboring block located to the left of the video frame coding block, and scan_index_A represents the scanning mode of the neighboring block located above the video frame coding block.
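As an illustration only, the two formulas above can be combined into a single helper; this sketch assumes scan_index values of 0 (Zig-zag) and 1 (effective-area-based) for the neighbors, and makes explicit that the conditional term applies to the channel type (0 for luma, 3 for chroma).

    // Derive the context index of the target syntax element from the scanning
    // modes of the left and above neighbors and from the channel type.
    int deriveCtxIndex(int scan_index_L, int scan_index_A, bool isLuma) {
        // offset is 0 when both neighbors use Zig-zag, 1 when both use the
        // effective-area-based mode, and 2 when their modes differ.
        int offset = (scan_index_L == 0 && scan_index_A == 0) ? 0
                   : (scan_index_L == 1 && scan_index_A == 1) ? 1 : 2;
        // 6 contexts in total: 3 for luma, 3 for chroma.
        return offset + (isLuma ? 0 : 3);
    }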
With continued reference to fig. 8, in step S830, the scanning manner is determined according to the value of the target syntax element.
In an embodiment of the present application, assuming that the scanning manner includes a Zig-zag scanning manner and a scanning manner based on an active region, if the value of the target syntax element is a first value, determining that the scanning manner is the Zig-zag scanning manner; and if the value of the target syntax element is the second value, determining the scanning mode to be a scanning mode based on the effective area.
Continuing to refer to fig. 8, in step S840, a second decoding process is performed on the encoded data based on the scanning mode.
In an embodiment of the present application, for the second decoding process, if the scanning mode is the Zig-zag scanning mode, the coefficients in the video frame coding block may be decoded sequentially in the scanning order of the Zig-zag scanning mode; if the scanning mode is the effective-area-based scanning mode, the coefficients in the video frame coding block are decoded sequentially in the scanning order of the effective-area-based scanning mode.
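The two branches of the second decoding process can be expressed as a single dispatch on the decoded scan_index; the following is a minimal, codec-independent sketch in which the scan-order vectors and the readNextCoefficient callback are assumptions of the sketch, not APIs of any reference decoder.

    #include <functional>
    #include <utility>
    #include <vector>

    // Sketch of steps S830/S840: pick the scan order that matches scan_index,
    // then decode the coefficients in that order. Entropy decoding of each
    // coefficient is abstracted behind a callback.
    void decodeCoefficients(int scan_index,
                            const std::vector<std::pair<int, int>>& zigzagOrder,
                            const std::vector<std::pair<int, int>>& regionOrder,
                            const std::function<int()>& readNextCoefficient,
                            std::vector<std::vector<int>>& block) {
        const auto& order = (scan_index == 0) ? zigzagOrder : regionOrder;
        for (const auto& pos : order) {
            // pos.first is the column (x), pos.second the row (y).
            block[pos.second][pos.first] = readNextCoefficient();
        }
    }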
The technical solution of the embodiment shown in fig. 8 enables the decoding end to determine, according to the value of the target syntax element, the scanning mode adopted by the encoding end during encoding, so that the decoding end uses the same scanning mode as the encoding end. In this way, the most appropriate scanning mode can be selected for the coefficient distribution characteristics of the video frame coding block for encoding and decoding, which helps improve the decoding efficiency.
The following describes the technical solution of the embodiments of the present application in detail, taking as an example scanning modes of video frame coding blocks that include a Zig-zag scanning mode and an effective-area-based scanning mode (i.e., a Region-Based scanning mode):
in an embodiment of the present application, maxzigbee and maxRegionBased may be defined to respectively represent the number of coefficients that need to be encoded in two scanning modes, width and height are defined as the width and height of an encoding block, scanPos () is a scanning function, and pos _ x and posx _ y are the abscissa and ordinate of an effective scanning area, respectively.
Initially, maxZigZag = 0, and the value of maxZigZag may then be determined by traversal, for example with a for loop of the following form:
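(The published text does not reproduce the loop itself; the following is a minimal sketch consistent with the surrounding description, reusing the variables defined above, with the loop bound and the exact role of scanPos() as assumptions.)

    maxZigZag = 0;
    for (int i = 0; i < width * height; i++) {
        int pos = scanPos(i);      // position of the i-th coefficient in Zig-zag order (assumed)
        if (coeff[pos] != 0) {
            maxZigZag = i + 1;     // remember the last non-zero coefficient seen so far
        }
    }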
in the above for loop, coeff [ pos ] represents the coefficient value at the pos position, i.e., if the coefficient value at the pos position is not 0 during traversal, let maxzigbee zag be i + 1.
Similarly, initially maxRegionBased = 0, and the value of maxRegionBased may then be determined by traversal, for example with a for loop of the following form:
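(Again, the loop body is not reproduced in the published text; this sketch follows the same assumptions, and additionally assumes that the number of positions inside the effective scanning area is (pos_x + 1) * (pos_y + 1).)

    maxRegionBased = 0;
    for (int i = 0; i < (pos_x + 1) * (pos_y + 1); i++) {
        int pos = scanPos(i);      // position of the i-th coefficient in the effective-area scan order (assumed)
        if (coeff[pos] != 0) {
            maxRegionBased = i + 1;
        }
    }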
in this for loop, coeff [ pos ] represents the coefficient value at the pos position, i.e., if the coefficient value at the pos position is not 0 during traversal, let maxRegionBased be i + 1.
After maxZigZag and maxRegionBased are determined, if maxZigZag ≤ maxRegionBased, the syntax element scan_index may be set to 0; otherwise, scan_index is set to 1.
In an embodiment of the present application, when encoding a coding block, if scan_index is 0, scan_index is encoded based on its context on the one hand, and on the other hand the coefficients in the coding block are encoded sequentially in the scanning order of the Zig-zag scanning mode. If scan_index is 1, scan_index is encoded based on its context on the one hand, and on the other hand the coefficients in the coding block are encoded sequentially in the scanning order of the effective-area-based scanning mode.
In one embodiment of the present application, when decoding a coding block, scan_index may be decoded from the video frame code stream according to its context. If scan_index is 0, the coefficients in the coding block are decoded sequentially in the scanning order of the Zig-zag scanning mode; if scan_index is 1, the coefficients in the coding block are decoded sequentially in the scanning order of the effective-area-based scanning mode.
In an embodiment of the present application, the scanning mode adopted by a current coding block is directly correlated with the coefficient distribution of the current coding block, but the decoding end cannot know the coefficient distribution of the current coding block in advance. Therefore, in the embodiment of the present application, the context of the scanning mode of the current coding block can be designed by considering the scanning modes of the neighboring blocks of the current coding block. In one example, as shown in fig. 9, the scanning mode of the neighboring block to the left of the current coding block and the scanning mode of the neighboring block above the current coding block may be considered; the syntax element indicating the scanning mode of the current coding block may have 6 contexts in total, 3 for luma (luminance) and 3 for chroma (chrominance), and the specific design is as follows:
offset=(scan_index_L==0&&scan_index_A==0)?0:
(scan_index_L==1&&scan_index_A==1)?1:2
ctx_index=offset+(ch_type==Luma)?0:3
wherein offset and ctx_index are integer variables; ctx_index represents the index number corresponding to the context of the syntax element used to indicate the scanning mode; ch_type represents the scanning channel type of the video frame coding block; scan_index_L represents the scanning mode of the neighboring block to the left of the current coding block; and scan_index_A represents the scanning mode of the neighboring block above the current coding block.
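As a purely illustrative walk-through of these formulas: if both the left and the above neighboring blocks use the effective-area-based scanning mode (scan_index_L = scan_index_A = 1), then offset = 1, so ctx_index = 1 for a luma block and ctx_index = 1 + 3 = 4 for a chroma block; if the two neighbors use different scanning modes, offset = 2, giving ctx_index = 2 (luma) or 5 (chroma).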
It should be noted that, in other embodiments of the present application, information of neighboring blocks located at other positions of the current coding block may also be considered, and information of more neighboring blocks of the current coding block may also be considered, such as the number of nonzero coefficients.
According to the technical scheme of the embodiment of the application, the coefficient distribution characteristics in different coding blocks can be fully utilized to select a more favorable scanning mode, the optimal number of the coefficients needing to be coded can be found, and the coding efficiency can be effectively improved.
Embodiments of the apparatus of the present application are described below, which may be used to perform the methods of the above-described embodiments of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method described above in the present application.
Fig. 10 shows a block diagram of a video decoding apparatus according to an embodiment of the present application.
Referring to fig. 10, a video decoding apparatus 1000 according to an embodiment of the present application includes: an acquisition unit 1002, a first decoding unit 1004, a determination unit 1006, and a second decoding unit 1008.
The acquiring unit 1002 is configured to acquire encoded data corresponding to a video frame to be decoded, where the encoded data include an encoding result of a target syntax element, and the target syntax element is used to indicate a scanning mode of a video frame coding block; the first decoding unit 1004 is configured to perform first decoding processing based on the encoding result of the target syntax element to obtain the value of the target syntax element; the determining unit 1006 is configured to determine the scanning mode according to the value of the target syntax element; and the second decoding unit 1008 is configured to perform second decoding processing on the encoded data based on the scanning mode.
In some embodiments of the present application, based on the foregoing scheme, the first decoding unit 1004 is configured to: and decoding the target syntax element according to the context of the target syntax element to obtain the value of the target syntax element.
In some embodiments of the present application, based on the foregoing scheme, the first decoding unit 1004 is further configured to: determining a context of the target syntax element from information of a neighboring block of the video frame encoding block, the information of the neighboring block comprising at least one of: the scanning mode of the adjacent blocks, and the number of nonzero coefficients contained in the adjacent blocks.
In some embodiments of the present application, based on the foregoing scheme, the target syntax element comprises 6 contexts, and the information of the neighboring block comprises a scanning manner of the neighboring block;
the first decoding unit 1004 is configured to: determining a context of the target syntax element based on the following formula in accordance with information of neighboring blocks of the video frame encoding block:
ctx_index=offset+(ch_type==Luma)?0:3
wherein ctx_index represents the index number corresponding to the context of the target syntax element; ch_type represents the scanning channel type of the video frame coding block; Luma denotes the luma (luminance) channel; and offset is an integer variable determined from the information of the neighboring blocks.
In some embodiments of the present application, based on the foregoing scheme, the value of offset is determined by the following formula:
offset=(scan_index_L==0&&scan_index_A==0)?0:
(scan_index_L==1&&scan_index_A==1)?1:2
wherein scan_index_L represents the scanning mode of the neighboring block located to the left of the video frame coding block, and scan_index_A represents the scanning mode of the neighboring block located above the video frame coding block.
In some embodiments of the present application, based on the foregoing scheme, the scanning mode indicated by the target syntax element is the scanning mode with the smallest number of corresponding coefficient codes among a plurality of scanning modes, wherein, before encoding, the video frame coding block is scanned with each of the plurality of scanning modes to determine the number of coefficient codes corresponding to each scanning mode.
In some embodiments of the present application, based on the foregoing scheme, the determining unit 1006 is configured to: if the value of the target syntax element is a first value, determining that the scanning mode is a Zig-zag scanning mode; and if the value of the target syntax element is a second value, determining that the scanning mode is a scanning mode based on an effective region.
In some embodiments of the present application, based on the foregoing scheme, the second decoding unit 1008 is configured to: and if the scanning mode is a Zig-zag scanning mode, sequentially decoding the coefficients in the video frame coding block according to the scanning sequence of the Zig-zag scanning mode.
In some embodiments of the present application, based on the foregoing scheme, the second decoding unit 1008 is configured to: and if the scanning mode is based on the effective area, sequentially decoding the coefficients in the video frame coding block according to the scanning sequence of the scanning mode based on the effective area.
Fig. 11 shows a block diagram of a video encoding apparatus according to an embodiment of the present application.
Referring to fig. 11, a video encoding apparatus 1100 according to an embodiment of the present application includes: a scanning unit 1102, a determining unit 1104 and an encoding unit 1106.
The scanning unit 1102 is configured to scan the video frame coding blocks through a plurality of scanning modes, respectively, to determine the number of coefficient codes corresponding to each scanning mode; the determining unit 1104 is configured to determine, according to the number of coefficient codes corresponding to each of the scanning modes, a target scanning mode with the smallest number of corresponding coefficient codes; the encoding unit 1106 is configured to perform encoding processing on the video frame encoding block based on the target scanning manner, and perform encoding processing on a syntax element indicating the target scanning manner.
In some embodiments of the present application, based on the foregoing scheme, the plurality of scanning modes include: a Zig-zag scanning mode and an effective area-based scanning mode; the encoding unit 1106 is configured to: if the target scanning mode is a Zig-zag scanning mode, sequentially coding the coefficients in the video frame coding block according to the scanning sequence of the Zig-zag scanning mode; and if the target scanning mode is an effective area-based scanning mode, sequentially encoding the coefficients in the video frame encoding blocks according to the scanning sequence of the effective area-based scanning mode.
FIG. 12 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1200 of the electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 12, the computer system 1200 includes a Central Processing Unit (CPU)1201, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1202 or a program loaded from a storage section 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data necessary for system operation are also stored. The CPU 1201, ROM 1202, and RAM 1203 are connected to each other by a bus 1204. An Input/Output (I/O) interface 1205 is also connected to bus 1204.
The following components are connected to the I/O interface 1205: an input section 1206 including a keyboard, a mouse, and the like; an output section 1207 including a Display device such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 1208 including a hard disk and the like; and a communication section 1209 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1209 performs communication processing via a network such as the internet. A driver 1210 is also connected to the I/O interface 1205 as needed. A removable medium 1211, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1210 as necessary, so that a computer program read out therefrom is mounted into the storage section 1208 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1209, and/or installed from the removable medium 1211. When executed by the Central Processing Unit (CPU) 1201, the computer program performs the various functions defined in the system of the present application.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (14)

1. A video decoding method, comprising:
acquiring coded data corresponding to a video frame to be decoded, wherein the coded data comprise a coding result of a target syntax element, and the target syntax element is used for indicating a scanning mode of a video frame coding block;
determining a context of the target syntax element based on the following formula in accordance with information of neighboring blocks of the video frame encoding block:
ctx_index=offset+(ch_type==Luma)?0:3
wherein the ctx_index represents an index number corresponding to a context of the target syntax element; the ch_type represents the scanning channel type of the video frame coding block; Luma denotes brightness; the offset is an integer variable determined according to the information of the adjacent blocks;
decoding the target syntax element according to the context of the target syntax element to obtain the value of the target syntax element;
determining the scanning mode according to the value of the target syntax element;
and performing second decoding processing on the encoded data based on the scanning mode.
2. The video decoding method of claim 1, wherein the information of the neighboring blocks comprises at least one of: the scanning modes of the neighboring blocks, and the number of non-zero coefficients contained in the neighboring blocks.
3. The video decoding method of claim 1, wherein the target syntax element has 6 contexts, and the information of the neighboring blocks comprises the scanning modes of the neighboring blocks; the value of offset is determined by the following formula:
offset=(scan_index_L==0&&scan_index_A==0)?0:
(scan_index_L==1&&scan_index_A==1)?1:2
wherein scan_index_L represents the scanning mode of the neighboring block located to the left of the video frame coding block, and scan_index_A represents the scanning mode of the neighboring block located above the video frame coding block.
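(Illustrative note, not part of the claims: the following minimal C sketch shows one way the context index of claims 1 and 3 could be derived. The helper name derive_scan_ctx_index, the encoding of the neighboring scanning modes as 0 for Zig-zag and 1 for the effective-region mode, and the luma flag are assumptions made for illustration only.)
/* Hypothetical helper: derive the context index for the scan-mode syntax element.     */
/* scan_index_L / scan_index_A: scanning modes of the left / above neighboring block   */
/* (assumed 0 = Zig-zag, 1 = effective-region); is_luma: non-zero for a luma block.    */
static int derive_scan_ctx_index(int scan_index_L, int scan_index_A, int is_luma)
{
    int offset = (scan_index_L == 0 && scan_index_A == 0) ? 0
               : (scan_index_L == 1 && scan_index_A == 1) ? 1
               : 2;                          /* neighbors use differing scanning modes  */
    return offset + (is_luma ? 0 : 3);       /* 6 contexts in total: 3 luma, 3 chroma   */
}
Under this reading, context indices 0 to 2 would be used for luma blocks and 3 to 5 for chroma blocks.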
4. The video decoding method of claim 1, wherein the scanning mode indicated by the target syntax element is the scanning mode with the smallest number of coefficient codes among a plurality of scanning modes, and wherein, before encoding, the video frame coding block is scanned through the plurality of scanning modes respectively to determine the number of coefficient codes corresponding to each scanning mode.
5. The video decoding method of any of claims 1 to 4, wherein determining the scanning mode according to the value of the target syntax element comprises:
if the value of the target syntax element is a first value, determining that the scanning mode is a Zig-zag scanning mode;
and if the value of the target syntax element is a second value, determining that the scanning mode is a scanning mode based on an effective region.
6. The video decoding method according to claim 5, wherein performing the second decoding processing on the encoded data based on the scanning mode comprises:
if the scanning mode is the Zig-zag scanning mode, sequentially decoding the coefficients in the video frame coding block according to the scanning order of the Zig-zag scanning mode.
7. The video decoding method according to claim 5, wherein performing the second decoding processing on the encoded data based on the scanning mode comprises:
if the scanning mode is the scanning mode based on the effective region, sequentially decoding the coefficients in the video frame coding block according to the scanning order of the scanning mode based on the effective region.
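(Illustrative note, not part of the claims: claims 6 and 7 decode the coefficients in the order given by the selected scanning mode. The following C sketch prints the conventional Zig-zag scanning order for an n x n block; the exact orders used by a particular codec, including the effective-region order, are defined by its specification and may differ.)
#include <stdio.h>
/* Illustrative only: emit a conventional Zig-zag scanning order for an n x n block   */
/* as (row, col) pairs, i.e. the order in which coefficients would be visited.        */
static void zigzag_order(int n)
{
    for (int s = 0; s < 2 * n - 1; s++) {       /* s = row + col (anti-diagonal index) */
        if (s % 2 == 0) {                        /* even diagonals run bottom-left to top-right */
            for (int r = (s < n ? s : n - 1); r >= 0 && s - r < n; r--)
                printf("(%d,%d) ", r, s - r);
        } else {                                 /* odd diagonals run top-right to bottom-left  */
            for (int c = (s < n ? s : n - 1); c >= 0 && s - c < n; c--)
                printf("(%d,%d) ", s - c, c);
        }
    }
    printf("\n");
}
int main(void)
{
    zigzag_order(4);                             /* e.g. a 4x4 transform block */
    return 0;
}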
8. A video encoding method, comprising:
scanning a video frame coding block through a plurality of scanning modes respectively, to determine the number of coefficient codes corresponding to each scanning mode;
determining, according to the number of coefficient codes corresponding to each scanning mode, a target scanning mode with the minimum number of coefficient codes;
encoding the video frame coding block based on the target scanning mode, and encoding a syntax element for indicating the target scanning mode;
wherein the process of encoding the syntax element for indicating the target scanning mode comprises: determining a context of the syntax element of the target scanning mode according to information of neighboring blocks of the video frame coding block, based on the following formula:
ctx_index=offset+(ch_type==Luma)?0:3
wherein ctx_index represents an index number corresponding to the context of the syntax element of the target scanning mode; ch_type represents the channel type of the video frame coding block; Luma denotes the luma channel; and offset is an integer variable determined according to the information of the neighboring blocks;
and encoding the syntax element of the target scanning mode according to the context of the syntax element of the target scanning mode.
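(Illustrative note, not part of the claims: a minimal C sketch of the scan-mode selection step of claim 8. The callback count_coefficient_codes is a hypothetical cost function that returns how many coefficient codes a given scanning mode would produce for the block; only the minimum-selection logic shown here follows from the claim.)
/* Hypothetical sketch: evaluate each candidate scanning mode and keep the mode that  */
/* needs the fewest coefficient codes, i.e. the target scanning mode.                 */
typedef int (*coeff_code_counter)(const int *coeff, int num_coeff, int scan_mode);
static int select_target_scan_mode(const int *coeff, int num_coeff,
                                   int num_modes, coeff_code_counter count_coefficient_codes)
{
    int best_mode = 0;
    int best_count = count_coefficient_codes(coeff, num_coeff, 0);
    for (int mode = 1; mode < num_modes; mode++) {
        int count = count_coefficient_codes(coeff, num_coeff, mode);
        if (count < best_count) {                /* keep the minimum-cost scanning mode */
            best_count = count;
            best_mode = mode;
        }
    }
    return best_mode;
}
The returned mode index would then be signalled through the syntax element, whose context is derived as in the sketch following claim 3.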
9. The video encoding method of claim 8, wherein the syntax element of the target scanning mode has 6 contexts, and the information of the neighboring blocks comprises the scanning modes of the neighboring blocks; the value of offset is determined by the following formula:
offset=(scan_index_L==0&&scan_index_A==0)?0:
(scan_index_L==1&&scan_index_A==1)?1:2
wherein scan_index_L represents the scanning mode of the neighboring block located to the left of the video frame coding block, and scan_index_A represents the scanning mode of the neighboring block located above the video frame coding block.
10. The video encoding method of claim 8, wherein the plurality of scanning modes comprises: a Zig-zag scanning mode and a scanning mode based on an effective region;
wherein encoding the video frame coding block based on the target scanning mode comprises:
if the target scanning mode is the Zig-zag scanning mode, sequentially encoding the coefficients in the video frame coding block according to the scanning order of the Zig-zag scanning mode;
and if the target scanning mode is the scanning mode based on the effective region, sequentially encoding the coefficients in the video frame coding block according to the scanning order of the scanning mode based on the effective region.
11. A video decoding apparatus, comprising:
an acquisition unit, configured to acquire encoded data corresponding to a video frame to be decoded, wherein the encoded data comprises an encoding result of a target syntax element, and the target syntax element is used for indicating a scanning mode of a video frame coding block;
a first decoding unit, configured to perform first decoding processing based on the encoding result of the target syntax element to obtain a value of the target syntax element;
a determining unit, configured to determine the scanning mode according to the value of the target syntax element;
a second decoding unit, configured to perform second decoding processing on the encoded data based on the scanning mode;
wherein the first decoding unit is configured to: determine a context of the target syntax element according to information of neighboring blocks of the video frame coding block, based on the following formula:
ctx_index=offset+(ch_type==Luma)?0:3
wherein ctx_index represents an index number corresponding to the context of the target syntax element; ch_type represents the channel type of the video frame coding block; Luma denotes the luma channel; and offset is an integer variable determined according to the information of the neighboring blocks;
and decode the target syntax element according to the context of the target syntax element to obtain the value of the target syntax element.
12. A video encoding apparatus, comprising:
a scanning unit, configured to scan a video frame coding block through a plurality of scanning modes respectively, to determine the number of coefficient codes corresponding to each scanning mode;
a determining unit, configured to determine, according to the number of coefficient codes corresponding to each scanning mode, a target scanning mode with the minimum number of coefficient codes;
an encoding unit, configured to encode the video frame coding block based on the target scanning mode, and to encode a syntax element for indicating the target scanning mode;
wherein the encoding unit is configured to: determine a context of the syntax element of the target scanning mode according to information of neighboring blocks of the video frame coding block, based on the following formula:
ctx_index=offset+(ch_type==Luma)?0:3
wherein ctx_index represents an index number corresponding to the context of the syntax element of the target scanning mode; ch_type represents the channel type of the video frame coding block; Luma denotes the luma channel; and offset is an integer variable determined according to the information of the neighboring blocks;
and encode the syntax element of the target scanning mode according to the context of the syntax element of the target scanning mode.
13. A computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the video decoding method of any one of claims 1 to 7, or the video encoding method of any one of claims 8 to 10.
14. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the video decoding method of any one of claims 1 to 7 or the video encoding method of any one of claims 8 to 10.
CN201910802836.6A 2019-08-28 2019-08-28 Video decoding method, video encoding device, video encoding medium, and electronic apparatus Active CN112449187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910802836.6A CN112449187B (en) 2019-08-28 2019-08-28 Video decoding method, video encoding device, video encoding medium, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910802836.6A CN112449187B (en) 2019-08-28 2019-08-28 Video decoding method, video encoding device, video encoding medium, and electronic apparatus

Publications (2)

Publication Number Publication Date
CN112449187A CN112449187A (en) 2021-03-05
CN112449187B true CN112449187B (en) 2022-02-25

Family

ID=74741059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910802836.6A Active CN112449187B (en) 2019-08-28 2019-08-28 Video decoding method, video encoding device, video encoding medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN112449187B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1162872A (en) * 1996-01-25 1997-10-22 Samsung Electronics Co., Ltd. Method and apparatus for variable-length coding and decoding
CN1777290A (en) * 2005-12-07 2006-05-24 Zhejiang University Adaptive scanning method and device in video or image compression
WO2013001755A1 (en) * 2011-06-29 2013-01-03 Canon Kabushiki Kaisha Image encoding apparatus, image encoding method and program, image decoding apparatus, image decoding method, and program
CN103297779A (en) * 2013-05-29 2013-09-11 Peking University Method and device for adaptively adjusting coefficients of image blocks
CN108605133A (en) * 2016-02-12 2018-09-28 Huawei Technologies Co., Ltd. Method and apparatus for selecting a scanning order

Also Published As

Publication number Publication date
CN112449187A (en) 2021-03-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant