CN110505488B - Image coding or decoding method for expanding prediction pixel array

Info

Publication number: CN110505488B
Application number: CN201910946605.2A
Authority: CN (China)
Prior art keywords: decoding, pixel, matching, pixel sample, array
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110505488A
Inventor: 林涛 (Lin Tao)
Original and current assignee: Shanghai Tianhe Electronic Information Co., Ltd.
Application filed by Shanghai Tianhe Electronic Information Co., Ltd.; priority to CN201910946605.2A; publication of CN110505488A; application granted; publication of CN110505488B.

Abstract

The invention provides an image compression method. When a current block is encoded or decoded in a 2-dimensional matching mode, whenever new reconstructed reference pixel samples are generated, i.e. a new reference region with a newly formed boundary appears, the region of reconstructed reference pixel samples is first expanded: pixel values obtained according to a predetermined rule are filled into a certain range outside the boundary (covering at least the newly formed reference region boundary, and optionally part or all of the original boundary). As a result, during 2-dimensional matching encoding or decoding, a matching reference string block is allowed to extend outside the region of reconstructed reference pixel samples.

Description

Image coding or decoding method for expanding prediction pixel array
The present application is a divisional application of the following original application:
Filing date of the original application: 2015-03-18
Application number of the original application: 2015101185917
Title of the original application: Image coding or decoding method for expanding reference pixel sample value set
Technical Field
The invention relates to digital video compression encoding and decoding systems, and in particular to methods for encoding and decoding computer screen images and video.
Background
A digital video signal naturally takes the form of a sequence of images. A frame of image is usually a rectangular area composed of a number of pixels, and a digital video signal is a video image sequence, sometimes also simply called a video sequence or a sequence, composed of tens to thousands of frames of images. Encoding a digital video signal means encoding its images frame by frame in a certain order. At any one time, the frame being encoded is referred to as the current encoding frame. Likewise, decoding the bitstream of a compressed digital video signal means decoding the bitstream of one frame of image at a time in the same order. At any one time, the frame being decoded is referred to as the current decoding frame. The current encoding frame and the current decoding frame are collectively referred to as the current frame.
In almost all international standards for video image coding, such as MPEG-1/2/4, H.264/AVC, and the latest international video compression standard HEVC (High Efficiency Video Coding), when a frame of image is encoded (and correspondingly decoded), the frame is divided into several sub-images of MxM pixels, called "Coding Units (CU)", and the sub-images are encoded one by one with the CU as the basic coding unit. Commonly used sizes of M are 8, 16, 32, and 64. Thus, encoding a sequence of video images means encoding the successive coding units of each frame in order. Likewise, in decoding, the coding units of each frame are decoded in the same order, finally reconstructing the entire video image sequence.
To adapt to differences in the content and properties of the various parts of an image within a frame, and to encode each part in the most effective, targeted way, the sizes of CUs within a frame may differ: 8x8, 64x64, and so on. To allow CUs of different sizes to be spliced together seamlessly, a frame is always first divided into "Largest Coding Units (LCUs)" of NxN pixels and identical size, and each LCU is then further divided, in a tree structure, into several CUs of possibly different sizes. Accordingly, an LCU is also referred to as a "Coding Tree Unit (CTU)". A CU of the same size as a CTU is called a CU with depth D of 0 (D = 0). A quarter-sized CU obtained by quartering a CU with D = 0 is called a CU with depth D of 1 (D = 1). A smaller CU obtained by further quartering a CU with D = 1 is called a CU with depth D of 2 (D = 2). A smaller CU obtained by further quartering a CU with D = 2 is called a CU with depth D of 3 (D = 3). A smaller CU obtained by further quartering a CU with D = 3 is called a CU with depth D of 4 (D = 4). The concepts of "depth" and "number" are used simply to represent the shape and size of a CU. For example, a frame image is first divided into LCUs of 64x64 pixels, all the same size, with depth D = 0 (N = 64). As shown in fig. 1, one LCU may consist of 2 CUs of 32x32 pixels with D = 1 (the CUs numbered 0 and 15 in fig. 1), 6 CUs of 16x16 pixels with D = 2 (the CUs numbered 1, 2, 3, 4, 9, and 10 in fig. 1), and 8 CUs of 8x8 pixels with D = 3 (the CUs numbered 5, 6, 7, 8, 11, 12, 13, and 14 in fig. 1); these 16 tree-structured CUs constitute one CTU. As shown in fig. 2, one LCU may also consist of 3 CUs of 32x32 pixels with D = 1 (the CUs numbered 0, 5, and 6 in fig. 2) and 4 CUs of 16x16 pixels with D = 2 (the CUs numbered 1, 2, 3, and 4 in fig. 2); these 7 tree-structured CUs likewise constitute one CTU. Hence, under a predetermined tree-structure division rule (such as the simplest quartering), depth and number have no special meaning in themselves; they are simply an equivalent representation of the shape and size of a tree-structured CU. Coding a frame of picture means coding the CUs within each CTU in sequence. At any one time, the CU being encoded is referred to as the current encoding CU. Decoding a frame of picture likewise means decoding the CUs within each CTU in the same order. At any one time, the CU being decoded is referred to as the current decoding CU. The current encoding CU or the current decoding CU is collectively referred to as the current CU. The order of the CU numbers in fig. 1 and 2 is also the order in which the CUs are encoded or decoded.
All CUs within a CTU have a depth D and a sequence number. As shown in fig. 3, the only CU with depth D = 0 is numbered 0. As shown in fig. 4, the 4 CUs with depth D = 1 are numbered 0 to 3. As shown in fig. 5, the 16 CUs with depth D = 2 are numbered 0 to 15. As shown in fig. 6, the 64 CUs with depth D = 3 are numbered 0 to 63. As shown in fig. 7, the 256 CUs with depth D = 4 are numbered 0 to 255.
More generally, figs. 3-7 also illustrate the tree partitioning of a CTU. Fig. 3 shows the single partition of the CTU at depth D = 0, numbered 0. Fig. 4 shows the 4 partitions of the CTU at depth D = 1, numbered 0 to 3. Fig. 5 shows the 16 partitions at depth D = 2, numbered 0 to 15. Fig. 6 shows the 64 partitions at depth D = 3, numbered 0 to 63. Fig. 7 shows the 256 partitions at depth D = 4, numbered 0 to 255.
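As a quick check of this numbering scheme (a small illustrative snippet, not part of the patent): at depth D a CTU is divided into 4^D equal partitions, numbered 0 through 4^D - 1.

```python
# Partition counts per depth, matching figs. 3-7: 1, 4, 16, 64, 256.
for D in range(5):
    print(f"depth D = {D}: {4 ** D} partitions, numbered 0 to {4 ** D - 1}")
```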
In the prior art represented by MPEG-1/2/4, H.264/AVC, HEVC, and the like, to improve coding efficiency, a CU (the largest CU being allowed to be the frame image itself) is generally further divided into smaller sub-regions. The sub-regions include, but are not limited to: Prediction Units (PU), Transform Units (TU), asymmetrically partitioned areas (AMP), areas resulting from bi-partitioning, macroblocks, blocks, micro-blocks, strips (areas whose width or height is one pixel or one pixel component), rectangular areas of variable size, and pixel strings (segments), pixel component strings (segments), or pixel index strings (segments) of variable size. Encoding (and correspondingly decoding) a CU means encoding (and correspondingly decoding) its sub-regions. In encoding, a sub-region is called an encoding sub-region; in decoding, a sub-region is called a decoding sub-region; the two are collectively referred to as codec sub-regions. In the prior art, the sub-regions (especially prediction units, transform units, asymmetrically partitioned areas, macroblocks, blocks, micro-blocks, and strips) are often still called "blocks", so the encoding sub-region and the decoding sub-region are often called an encoding block and a decoding block, respectively, collectively referred to as codec blocks. In the process of further dividing a CU into smaller codec sub-regions, i.e. codec blocks, the division depth increases further and new sequence numbers are generated, which are no longer simple; particularly in the case of non-uniform division, expressing the shape and size of a codec block through its depth and sequence number becomes complicated, so using depth and sequence number is not as convenient, clear, and simple as directly using the shape and size of the codec block.
A color pixel consists of 3 components. The two most commonly used pixel color formats are the GBR color format, composed of a green component, a blue component, and a red component, and the YUV family of color formats, composed of one luminance (luma) component and two chrominance (chroma) components, such as the YCbCr color format. Therefore, when encoding a CU, the CU may be divided into 3 component planes (G plane, B plane, R plane, or Y plane, U plane, V plane) that are encoded separately; alternatively, the 3 components of each pixel may be bundled into one 3-tuple, and the whole CU consisting of these 3-tuples encoded as one unit. The former arrangement of pixels and their components is called the planar format of the image (and its CUs), while the latter is called the packed format of the image (and its CUs).
Taking the GBR color format with pixel p[x][y] = {g[x][y], b[x][y], r[x][y]} as an example, the planar format is arranged such that, for a frame image (or a CU) of width W pixels and height H pixels, all WxH G components are arranged first, then all WxH B components, and finally all WxH R components:
g[1][1],g[2][1],…,g[W-1][1],g[W][1],
g[1][2],g[2][2],…,g[W-1][2],g[W][2],
………………………………………,
………………………………………,
g[1][H],g[2][H],…,g[W-1][H],g[W][H],
b[1][1],b[2][1],…,b[W-1][1],b[W][1],
b[1][2],b[2][2],…,b[W-1][2],b[W][2],
………………………………………,
………………………………………,
b[1][H],b[2][H],…,b[W-1][H],b[W][H],
r[1][1],r[2][1],…,r[W-1][1],r[W][1],
r[1][2],r[2][2],…,r[W-1][2],r[W][2],
………………………………………,
………………………………………,
r[1][H],r[2][H],…,r[W-1][H],r[W][H].
The packed format, in contrast, is arranged such that the G component of the first pixel comes first, followed by its B component and R component, then the G, B, and R components of the second pixel, and so on, ending with the G, B, and R components of the last (WxH-th) pixel:
g[1][1],b[1][1],r[1][1], g[2][1],b[2][1],r[2][1], …………, g[W][1],b[W][1],r[W][1],
g[1][2],b[1][2],r[1][2], g[2][2],b[2][2],r[2][2], …………, g[W][2],b[W][2],r[W][2],
………………………………………………………………………………………,
………………………………………………………………………………………,
g[1][H],b[1][H],r[1][H], g[2][H],b[2][H],r[2][H], ………, g[W][H],b[W][H],r[W][H].
This packed-format arrangement can also be represented in the simplified form:
p[1][1],p[2][1],……,p[W-1][1],p[W][1],
p[1][2],p[2][2],……,p[W-1][2],p[W][2],
…………………………………………,
…………………………………………,
p[1][H],p[2][H],……,p[W-1][H],p[W][H].
Besides the planar and packed arrangements above, various other planar and packed arrangements are possible, depending on the order of the three components.
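The two arrangements can be illustrated with a short sketch. The helper names (to_planar, to_packed) are illustrative and not part of the patent:

```python
# Minimal sketch of the planar vs. packed arrangements described above.

def to_planar(pixels, W, H):
    """Arrange WxH GBR pixels as all G samples, then all B, then all R."""
    g = [pixels[y][x][0] for y in range(H) for x in range(W)]
    b = [pixels[y][x][1] for y in range(H) for x in range(W)]
    r = [pixels[y][x][2] for y in range(H) for x in range(W)]
    return g + b + r

def to_packed(pixels, W, H):
    """Arrange WxH GBR pixels as g,b,r of pixel 1, then g,b,r of pixel 2, ..."""
    out = []
    for y in range(H):
        for x in range(W):
            out.extend(pixels[y][x])   # (g, b, r) of one pixel
    return out

# Example: a 2x2 image; planar gives g g g g b b b b r r r r,
# packed gives g b r g b r g b r g b r.
img = [[(10, 20, 30), (11, 21, 31)],
       [(12, 22, 32), (13, 23, 33)]]
assert to_planar(img, 2, 2)[:4] == [10, 11, 12, 13]
assert to_packed(img, 2, 2)[:6] == [10, 20, 30, 11, 21, 31]
```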
Depending on whether the chrominance components are down-sampled, the YUV color format can be further subdivided into a number of sub-formats: the YUV 4:4:4 pixel color format, in which 1 pixel consists of 1 Y component, 1 U component, and 1 V component; the YUV 4:2:2 pixel color format, in which 2 horizontally adjacent pixels consist of 2 Y components, 1 U component, and 1 V component; and the YUV 4:2:0 pixel color format, in which 4 pixels arranged as a 2x2 block (adjacent left-right and top-bottom) consist of 4 Y components, 1 U component, and 1 V component. One component is generally represented by a number of 8 to 16 bits. The YUV 4:2:2 and YUV 4:2:0 pixel color formats are obtained by down-sampling the chrominance components of the YUV 4:4:4 pixel color format. A pixel component is also called a pixel sample, or simply a sample. A sample may be an 8-bit number, i.e. occupy one byte; it may also be a 10-bit, 12-bit, 14-bit, or 16-bit number.
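The sample counts implied by the three sub-formats can be sketched as follows (an illustrative helper, not from the patent):

```python
# Sample counts for a WxH region in each YUV sub-format described above.

def plane_sizes(W, H, fmt):
    """Return (#Y, #U, #V) samples for a WxH region in the given format."""
    if fmt == "4:4:4":    # one U and one V per pixel
        return W * H, W * H, W * H
    if fmt == "4:2:2":    # one U and one V per 2 horizontally adjacent pixels
        return W * H, W * H // 2, W * H // 2
    if fmt == "4:2:0":    # one U and one V per 2x2 block of pixels
        return W * H, W * H // 4, W * H // 4
    raise ValueError(fmt)

print(plane_sizes(16, 16, "4:2:0"))   # (256, 64, 64)
```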
When any coding or decoding block is coded or decoded, reconstruction pixels are generated and are divided into partial reconstruction pixels with different degrees generated in the coding or decoding process and full reconstruction pixels generated after the coding or decoding process is completely finished. If the fully reconstructed pixel sample has an equal value to the original input pixel sample before encoding, the encoding and decoding process that is undergone is referred to as lossless encoding and decoding. If the fully reconstructed pixel sample has an unequal value to the original input pixel sample before encoding, the encoding and decoding process that is experienced is referred to as lossy encoding and decoding. When a codec block is sequentially encoded or decoded, the resulting reconstructed pixel samples are usually stored as history data and used as reference pixel samples for encoding or decoding of a subsequent codec block. Since the only use of the reference pixel samples is as predicted pixel samples in the encoding or decoding of subsequent codec blocks, the reference pixel samples are also referred to as predicted pixel samples or as matched pixel samples. The storage space in which the reconstructed pixel history data is stored as reference pixel samples is called a reference pixel sample storage space, also called a reference pixel sample set or simply a reference region, or also called a prediction pixel sample array or called a matching pixel sample array. The reference pixel sample storage space is limited and only a portion of the historical data is stored. The historical data in the reference pixel sample storage space may also include reconstructed pixel samples of the current codec block.
With the development and popularization of a new generation of cloud computing and information processing modes and platforms, typified by the remote desktop, interconnection among multiple computers, between computer hosts and other digital devices such as smart televisions, smartphones, and tablet computers, and among various digital devices, has become a reality and a mainstream trend. This makes real-time screen transmission from the server side (the cloud) to the user side an urgent need. Because a large amount of screen video data needs to be transmitted, efficient, high-quality data compression of computer screen images is necessary.
Making full use of the characteristics of computer screen images to compress them with ultra-high efficiency is also a main goal of the latest international video compression standard HEVC.
A significant feature of computer screen images is that the same frame usually contains many similar, or even identical, pixel patterns. For example, the Chinese or other characters that frequently appear in computer screen images are composed of a few basic strokes, and many similar or identical strokes can be found within the same frame. The menus, icons, and the like that are common in computer screen images also share many similar or identical patterns. Existing screen image and video compression techniques use various matching modes, including intra prediction, block matching (also called intra motion compensation or intra block copying), micro-block matching, micro-block string matching, fine-division matching, palette matching, 1-dimensional string matching, and 2-dimensional shape-preserving string matching (2-dimensional shape-preserving matching for short), to find matches of various sizes and shapes and thereby achieve efficient coding of screen images. Among these, the block matching mode has few matching parameters and so achieves high coding efficiency in some cases, while the more general 2-dimensional shape-preserving matching mode, which includes block matching, can find quite accurate matches over a large range and so achieves high coding efficiency in other cases.
The block matching mode uses suitable blocks of pixel samples (called matching reference blocks, or reference blocks for short) within the reference pixel sample storage space to approximately or exactly match, i.e. represent, blocks of pixel samples (called matching current blocks, or current blocks for short) in the current CU, and records and transmits the relation between a matching reference block and its matching current block through the video bitstream, using the partition mode and/or the matching position (collectively called matching relation parameters). The decoding end can then obtain the matching current block from the matching reference blocks in the reference pixel sample storage space and the matching relation parameters. The matching relation parameters usually occupy only a few bits, far fewer than the matching current block itself, so a good data compression effect can be achieved.
In the block matching scheme, as shown in fig. 8, the matching current block and the matching reference block generally have the same size and 2-dimensional shape (width and height).
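As a minimal decoder-side sketch of this block-copy relationship (the function and parameter names are mine, not the patent's): given the displacement vector carried in the bitstream, the matching current block is reproduced by copying the same-sized, same-shaped matching reference block from the reference pixel sample storage space.

```python
# Decoder-side sketch of the block-copy ("block matching") operation.

def copy_matched_block(recon, x, y, w, h, dvx, dvy):
    """Paste into recon the block displaced by (dvx, dvy).

    recon: 2-D list of reconstructed samples (the reference pixel storage).
    (x, y): top-left of the matching current block; (w, h): its size.
    (dvx, dvy): displacement vector pointing to the matching reference block.
    """
    for j in range(h):
        for i in range(w):
            recon[y + j][x + i] = recon[y + dvy + j][x + dvx + i]
```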
Further subdividing the blocks of the block matching mode into micro-blocks yields the micro-block matching mode.
Finely dividing the blocks of the block matching mode yields the fine-division matching mode.
The 2-dimensional shape-preserving matching mode uses suitable pixel samples (called matching reference samples) within the reference pixel sample set to approximately or exactly match, i.e. represent, pixel samples (called matching current samples) in the current CU, and records and transmits the relation between the matching reference samples and the matching current samples through the video bitstream, using the matching position, matching length, and unmatched samples (collectively called matching relation parameters), so that the decoding end can obtain the matching current samples from the matching reference samples in the reference pixel sample set and the matching relation parameters. The matching relation parameters usually occupy only a few bits, far fewer than the matching current samples themselves, so a good data compression effect can be achieved.
In the 2-dimensional shape-preserving matching mode, as shown in fig. 8, the pixel samples in the current CU are divided into a sequence of pixel sample strings; each of these strings (called matching current strings, or current strings for short) has a corresponding matching reference string (or reference string for short) in the reference pixel sample set. Pixel samples in the current CU that have no corresponding reference pixel sample within the reference pixel sample set are called unmatched samples. The defining characteristic of 2-dimensional shape-preserving matching is that a matching current string and its matching reference string have the identical 2-dimensional shape. In vertical 2-dimensional shape-preserving matching, all matching strings have the same height, namely the height of the current CU. In horizontal 2-dimensional shape-preserving matching, all matching strings have the same width, namely the width of the current CU.
The 2-dimensional shape-preserving matching mode is a string matching mode.
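A minimal sketch of how a decoder might replay one vertically scanned, shape-preserving string copy (the helper name and exact scan convention are mine; the real syntax is defined by the codec): the current string walks down the columns of the current CU, and every sample is copied from the position offset by the displacement vector, so the reference string has the identical 2-dimensional shape.

```python
# Decoder-side sketch of one vertically scanned, 2-D shape-preserving string
# copy: all strings share the CU height, and the reference string is the
# same shape displaced by the displacement vector (dvx, dvy).

def copy_shape_preserving_string(recon, cu_y, cu_h, x, y, length, dvx, dvy):
    """Copy `length` samples starting at (x, y), scanning top-to-bottom
    within the CU's rows [cu_y, cu_y + cu_h), wrapping to the next column."""
    for _ in range(length):
        recon[y][x] = recon[y + dvy][x + dvx]
        y += 1
        if y == cu_y + cu_h:      # reached the bottom of the CU:
            y = cu_y              # wrap to the top of the next column
            x += 1
```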
Since encoding and decoding proceed CU by CU, when a current CU is encoded or decoded, regions of reconstructed pixel samples and regions of pixel samples not yet encoded or decoded are interleaved, as shown in fig. 9. A matching reference string (block) must retain exactly the same shape as its matching current string (block), so the matching reference string (block) is likely to extend beyond the region of reconstructed pixel samples, i.e. the reference pixel sample set (although it cannot lie entirely outside that region). This is shown by the 5 matching reference strings and 3 matching reference blocks in fig. 9: the leftmost matching reference string has a 4x2-pixel extension, the second-from-left matching reference string an 8x1-pixel extension, the middle matching reference string a 1x10-pixel extension, the matching reference block below the middle matching reference string a 2x8-pixel extension, the second-from-right matching reference string a 2x1-pixel extension, and the rightmost matching reference string a 3x2-pixel extension; the uppermost matching reference block has an 8x1-pixel extension beyond the upper boundary of the current image, and the rightmost matching reference block has a 1x8-pixel extension beyond the right boundary of the current image. In the conventional 2-dimensional shape-preserving matching mode and block matching mode, however, a matching reference string (block) cannot extend beyond the region of reconstructed pixel samples, so the matching reference string (block) must be truncated at the region boundary, and the matching current string (block) must be truncated accordingly. The existing 2-dimensional shape-preserving matching and block matching modes therefore require a complex out-of-bounds handling mechanism: whether the matching reference string (block) goes out of bounds must be checked at every moment, and special processing is needed whenever it does. The resulting operational complexity is very high, and the advantages of the 2-dimensional shape-preserving matching mode and the block matching mode cannot be fully exploited.
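For contrast, a sketch of the per-sample out-of-bounds check that the conventional schemes described above must perform; `in_recon_region` is a hypothetical predicate over the interleaved regions of fig. 9, and the names are illustrative:

```python
# Conventional behaviour: every reference sample must be verified to lie
# inside the reconstructed region before being copied.

def copy_with_bounds_check(recon, x, y, dvx, dvy, in_recon_region):
    rx, ry = x + dvx, y + dvy
    if not in_recon_region(rx, ry):
        return False            # out of bounds: truncate the match here
    recon[y][x] = recon[ry][rx]
    return True
```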
In the present patent application, for brevity of description, 2-dimensional shape-preserving matching, micro-block matching, and block matching are collectively referred to as 2-dimensional matching; matching strings, matching micro-blocks, and matching blocks are collectively referred to as matching string blocks; matching reference strings, matching reference micro-blocks, and matching reference blocks are collectively referred to as matching reference string blocks; matching current strings, matching current micro-blocks, and matching current blocks are collectively referred to as matching current string blocks; reference strings, reference micro-blocks, and reference blocks are collectively referred to as reference string blocks; and current strings, current micro-blocks, and current blocks are collectively referred to as current string blocks.
It should be noted that "matching" is an encoding operation, and the corresponding reconstruction and decoding operations are often "copying". Therefore, various matching methods such as a block matching method, a micro-block matching method, a fine division matching method, a string matching method, a 2-dimensional shape-preserving matching method, and the like are also referred to as a block copying method, a micro-block copying method, a fine division copying method, a string copying method, a 2-dimensional shape-preserving copying method, and the like. In both encoding and decoding, "matching" essentially uses reference pixels to predict pixels in the current codec, and thus, matching codecs are also conventionally referred to as predictive codecs.
Disclosure of Invention
The main technical feature of the present invention is as follows: when a current block (such as a current PU, current CU, current CTU, current slice, or current image) is encoded or decoded in a 2-dimensional matching manner, whenever new reconstructed reference pixel samples (also called predicted pixel samples or matched pixel samples) are generated, i.e. a new reference region (also called a predicted pixel sample array or matched pixel sample array) is generated, and hence a newly formed reference region boundary (also called a predicted pixel sample array boundary or matched pixel sample array boundary) appears, the region of reconstructed reference pixel samples is first expanded: pixel values obtained according to a predetermined rule are filled into a certain range outside the boundary (covering at least the newly formed reference region boundary, and optionally part or all of the original boundary). Thus, during 2-dimensional matching encoding or decoding, a matching reference string block is allowed to extend outside the region of reconstructed reference pixel samples, there is no need to check at every moment whether the matching reference string block goes out of bounds, and a more complete matching reference string block can be obtained, thereby improving coding performance. A secondary technical feature derived from this main feature is that the same reference region and its boundary are allowed to undergo multiple different expansions at different stages of the overall encoding or decoding process (e.g. when encoding or decoding current CUs with different sequence numbers); the values of the expanded pixel samples are not necessarily the same from one expansion to the next.
Fig. 10 shows an example of the minimum extension area required to avoid a complicated out-of-bounds handling scheme for matching reference string blocks, and thus to improve coding performance, when encoding or decoding the CU with depth 2 and sequence number 2 of the CTU with sequence number n. The current CU has depth D = 2 (16x16 pixels) and sequence number 2. The new reference region boundaries are the right and lower boundaries of the CU with depth 2 and sequence number 1 within the CTU with sequence number n. Since the matching reference string block lies at least partially within the region of reconstructed pixel samples, and its own width and height cannot exceed those of the current CU, the width of the minimum extension region at the right boundary of the region of reconstructed pixel samples is the width of the current CU minus one (in pixel samples), and the height of the minimum extension region at the lower boundary is the height of the current CU minus one (in pixel samples). Fig. 10 also illustrates that the same reference region and its boundaries may be expanded multiple times, and that the values of the expanded pixel samples need not be the same each time. For example, in fig. 10, the upper boundary of the current CTU with sequence number n is expanded a first time when encoding or decoding the CU with depth 2 and sequence number 1, and a second time when encoding or decoding the CU with depth 2 and sequence number 2, with the values of the expanded pixel samples calculated by different rules in the two expansions. As another example, for the upper boundary of the image in fig. 10, a first expansion is performed when the reference pixels are reconstructed pixels that have not undergone deblocking filtering, and a second expansion when the reference pixels are reconstructed pixels that have undergone deblocking filtering; the pixel sample values of the two expansions are obviously not necessarily the same.
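The minimum extension sizes stated above follow directly from the two constraints (a small illustrative helper; the name is mine, not the patent's): because a matching reference string block is at least partly inside the reconstructed region and is never wider or taller than the current CU, padding the right boundary by (CU width - 1) samples and the lower boundary by (CU height - 1) samples suffices.

```python
# Minimum extension sizes for a cu_w x cu_h current CU.

def min_extension(cu_w, cu_h):
    """Return (right_pad, bottom_pad) in pixel samples."""
    return cu_w - 1, cu_h - 1

print(min_extension(16, 16))   # (15, 15) for the 16x16 CU of fig. 10
```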
In the encoding method of the present invention, the most basic characteristic technical feature is as follows: whenever encoding a current block (such as a current PU, current CU, current CTU, current slice, or current picture) generates new reconstructed reference pixel samples, i.e. generates a new reference region and hence a newly formed reference region boundary, the 2-dimensional reference pixel sample set region is first extended beyond its boundary according to the depth and sequence number of the current block, or directly according to the shape and size of the current block: pixel values obtained according to a predetermined rule are filled into a certain range outside the boundary of the 2-dimensional reference pixel sample set region (also called the prediction pixel sample array boundary; covering at least the newly formed reference region boundary, and optionally part or all of the original boundary) to form an extended reference pixel sample set. 2-dimensional matching encoding (also called predictive encoding) is then performed within the extended reference pixel sample set; during encoding, the matching reference string block is allowed to go out of bounds, without checking whether it does. During encoding, the pixel values of the extended area may also be updated according to a predetermined rule.
In the decoding method of the present invention, the most basic characteristic technical feature is as follows: the depth and sequence number of the current block, or the shape and size of the current block, are obtained from information parsed from the video bitstream data or calculated from the decoding order. Whenever decoding a current block (such as a current PU, current CU, current CTU, current slice, or current image) generates new reconstructed reference pixel samples, i.e. generates a new reference region and hence a newly formed reference region boundary, the 2-dimensional reference pixel sample set region is first extended at its boundary according to the depth and sequence number of the current block, or directly according to the shape and size of the current block: pixel values obtained according to a predetermined rule are filled into a certain range outside the boundary of the 2-dimensional reference pixel sample set region (also called the prediction pixel sample array boundary; covering at least the newly formed reference region boundary, and optionally part or all of the original boundary) to form an extended reference pixel sample set. 2-dimensional matching decoding (also called predictive decoding) is then performed within the extended reference pixel sample set; during decoding, the matching reference string block is allowed to go out of bounds, without checking whether it does. During decoding, the pixel values of the extended area may also be updated according to a predetermined rule.
It goes without saying that encoding and decoding must use mutually consistent extension pixel values.
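A minimal sketch of this shared extension step, assuming a dense 2-D array representation (all names are illustrative, not the patent's): encoder and decoder call the same routine with the same predetermined `fill_rule`, so the extended samples agree on both sides.

```python
# Shared extension step: grow the reference region and fill the new samples
# with a predetermined rule that both encoder and decoder apply identically.

def extend_reference_region(region, right_pad, bottom_pad, fill_rule):
    """Grow `region` (a 2-D list of reconstructed samples) by `right_pad`
    columns and `bottom_pad` rows; out-of-boundary samples are produced by
    the predetermined rule fill_rule(region, x, y)."""
    h, w = len(region), len(region[0])
    out = [[0] * (w + right_pad) for _ in range(h + bottom_pad)]
    for y in range(h + bottom_pad):
        for x in range(w + right_pad):
            if y < h and x < w:
                out[y][x] = region[y][x]              # original sample
            else:
                out[y][x] = fill_rule(region, x, y)   # predetermined-rule fill
    return out

# One possible predetermined rule (cf. Example 4 below): a single gray value.
extended = extend_reference_region([[1, 2], [3, 4]], 1, 1,
                                   lambda region, x, y: 128)
```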
The technical features of the present invention are explained above by specific embodiments. Other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention.
According to an aspect of the present invention, there is provided an image encoding method including at least one of the following steps:
1) whenever new reference pixel samples (also called predicted pixel samples) are generated during encoding, i.e. a new reference region (also called a predicted pixel sample array) is generated and hence a newly formed reference region boundary (also called a predicted pixel sample array boundary) appears, expanding the reference pixel sample set region (also called the predicted pixel sample array): assigning, according to a predetermined rule, pixel values to a certain range of region outside the boundary of the reference pixel sample set region (such as the aforementioned minimum extension region, or the region of pixels immediately adjacent to the boundary; covering at least the newly formed reference region boundary and optionally part or all of the original boundary) to form an extended reference pixel sample set, also called an extended predicted pixel sample array;
2) continuing to encode the coding block using at least the extended reference pixel sample set, i.e. the extended predicted pixel sample array, and writing the encoding result into the video bitstream.
According to another aspect of the present invention, there is also provided an image decoding method including at least one of the following steps:
1) whenever new reference pixel samples (also called predicted pixel samples) are generated during decoding, i.e. a new reference region (also called a predicted pixel sample array) is generated and hence a newly formed reference region boundary (also called a predicted pixel sample array boundary) appears, expanding the reference pixel sample set region (also called the predicted pixel sample array): assigning, according to a predetermined rule, pixel values to a certain range of region outside the boundary of the reference pixel sample set region (such as the aforementioned minimum extension region, or the region of pixels immediately adjacent to the boundary; covering at least the newly formed reference region boundary and optionally part or all of the original boundary) to form an extended reference pixel sample set, also called an extended predicted pixel sample array;
2) parsing the video bitstream, and continuing to decode the decoding block using at least the extended reference pixel sample set, i.e. the extended predicted pixel sample array.
In the present invention, the encoding block or the decoding block is a coding region or a decoding region of an image, and includes at least one of the following: the coding unit comprises a maximum coding unit LCU, a coding tree unit CTU, a coding unit CU, a sub-area of the CU, a sub-coding unit SubCU, a coding sub-block, a prediction sub-block, a prediction unit PU, a sub-area of the PU, a sub-prediction unit SubPU and a macro-block.
Drawings
FIG. 1 shows an example of a CU partition and tree structure of a CTU
FIG. 2 shows an example of another CU partition and tree structure of a CTU
FIG. 3 shows the sequence numbers of CUs or partitions with depth D = 0 in one CTU
FIG. 4 shows the numbers of the 4 CUs or partitions with depth D = 1 in one CTU
FIG. 5 shows the numbers of the 16 CUs or partitions with depth D = 2 in one CTU
FIG. 6 shows the numbers of the 64 CUs or partitions with depth D = 3 in one CTU
FIG. 7 shows the numbers of the 256 CUs or partitions with depth D = 4 in one CTU
FIG. 8 is an example of matching a current string (block) and a matching reference string (block) for 2-dimensional matching
FIG. 9 is an example of a case where a matching reference string (block) of 2-dimensional matches extends outside the reconstructed reference pixel sample value set
FIG. 10 is a minimum expansion area required to avoid the cumbersome out of bounds handling mechanism of matching reference strings (blocks)
FIG. 11 is a flow chart of an encoding method according to an embodiment of the invention
FIG. 12 is a flowchart illustrating a decoding method according to an embodiment of the present invention
Detailed Description
An embodiment of the encoding method of the present invention, a flow chart of which is shown in fig. 11, includes at least one of the following steps:
1) analyzing and evaluating the characteristics of the coding block, preprocessing, and selecting the coding mode: analyzing and evaluating the pixel sample characteristics of the current coding block and adjacent coding blocks, including performing any necessary preprocessing on the pixel samples, and judging in advance whether the 2-dimensional matching coding mode is suitable for coding the current coding block; this step is optional, i.e. it may be skipped, proceeding directly to the next step. An example of an analysis-and-evaluation approach: calculating the number of pixels of different colors in the current coding block, based on or with reference to the coding results of adjacent coding blocks. Examples of preprocessing: sample quantization, color quantization, and color-based pixel clustering, representing the colors of the input original pixels with a palette and indices;
2) the step of expanding the reference pixel sample set: expanding the 2-dimensional reference pixel sample set region according to the depth and sequence number of the current coding block, or directly according to its shape and size, i.e. filling a certain range outside the boundary of the 2-dimensional reference pixel sample set region with pixel values obtained according to a predetermined rule, and combining the reference pixel sample set and the expanded region to form the extended reference pixel sample set. For the right boundary of the reference pixel sample set region, the minimum width of the expansion (in pixel samples) is the width of the current coding block minus one; for the lower boundary, the minimum height of the expansion (in pixel samples) is the height of the current coding block minus one. This is the minimum extension area required to avoid a cumbersome out-of-bounds handling mechanism for matching reference string blocks; the actual extension area may be larger. The extension filling is usually performed before step 3), but may also be performed again in step 3). The expanded area may fully or partially cover the area of the current coding block;
3) 2-dimensional matching coding: performing the 2-dimensional matching coding operation on the current coding block using the 2-dimensional matching coding mode and the extended reference pixel sample set; during coding, the pixel values of the extended area may be updated according to a predetermined rule. The input of the 2-dimensional matching coding is the input original pixels or the preprocessed pixels. The output of the 2-dimensional matching coding is the matching positions, optional matching lengths, optional unmatched samples, and the matching residual. A matching position is a variable representing where, within the reference pixel sample set, the matching reference sample that matches a current sample of the current coding block is located. Optionally, the 2-dimensional matching coding mode performs matching coding in units of pixel sample strings of variable length (called matching current strings, whose position can be represented by a 2-dimensional coordinate or a linear address). The matching reference samples form a matching string block within the reference pixel sample set, called the matching reference string block, whose position can likewise be represented by a 2-dimensional coordinate or a linear address; in the 2-dimensional matching coding mode, the matching position can therefore be represented by the difference between the 2-dimensional coordinates of the matching reference string block and of the matching current string block, or by the difference between their linear addresses, called the displacement vector. Optionally, since the length of the matching reference string (equal to the length of the matching current string) is variable, another variable called the matching length is needed to represent it. Optionally, an unmatched sample is an input original pixel sample for which no match is found within the reference pixel sample set according to the predetermined matching criterion. A matching current string block and its corresponding matching reference string block have the same 2-dimensional shape. Optionally, since an unmatched sample is an input original pixel sample, it can also be represented by its position in the current coding block; alternatively, an unmatched sample may be approximated by a lossy dummy sample obtained by calculation. The matching residual is the difference between the input original pixel samples and the matching reference samples. If the predetermined matching criterion of the 2-dimensional matching coding mode is absolutely exact lossless matching, the matching residual is zero, i.e. the 2-dimensional matching coding mode has no matching residual as output; if the predetermined matching criterion is approximate lossy matching, the matching residual may be non-zero. Another lossy matching scenario is to first apply sample quantization, color quantization, or color-based pixel clustering preprocessing to the input original pixel samples and then perform 2-dimensional matching coding; in this case, since the preprocessing is lossy, the matching residual (i.e. the difference between the input original pixel samples and the matching reference samples) may be non-zero even if the 2-dimensional matching coding itself is lossless. The result of 2-dimensional matching coding of the current coding block is I (I ≥ 0) matching string blocks and optionally J (J ≥ 0) unmatched pixel samples, and the output is I pairs (displacement vector, optional matching length) and optionally J unmatched pixel samples, where I and J cannot both be zero (a sketch of these syntax elements follows this list);
4) the step of all other common coding and reconstruction operations: completing all remaining coding and reconstruction operations on the current coding block, performing on the input original pixels, parameters, and variables the coding and reconstruction operations and entropy coding operations of common techniques such as intra prediction, inter prediction, block matching, palette matching, sample-value prediction interpolation, transform, quantization, inverse transform, inverse quantization, compensation corresponding to prediction residuals and matching residuals (i.e. the inverse of the residual operation), prediction and residual computation, DPCM, first-order and higher-order differences, mapping, run length, indexing, deblocking filtering, and Sample Adaptive Offset. The input of this step is the output of step 3), the input original pixels, and the reference pixels from the reference pixel sample storage space, i.e. the reference pixel sample set. The output of this step is the reconstructed pixels (including fully reconstructed pixels and partially reconstructed pixels of various degrees) and the video bitstream containing the 2-dimensional matching coding results and all other coding results. The reconstructed pixels are placed in the reference pixel sample storage space and serve as the reference pixels needed by subsequent 2-dimensional matching coding operations and by the other common coding and reconstruction operations. The video bitstream is the final output of the encoding method and contains all syntax elements needed by the corresponding decoding method for decoding and reconstruction, in particular syntax elements such as the matching positions (i.e. displacement vectors), matching lengths, and unmatched samples or their positions.
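The syntax elements produced by step 3) for one coding block can be sketched as a simple structure (an illustrative data layout of mine, not a bitstream format from the patent):

```python
# I (displacement vector, optional matching length) pairs plus J unmatched
# pixel samples, with I and J not both zero.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TwoDMatchResult:
    # I pairs: (displacement vector (dvx, dvy), optional matching length)
    matches: List[Tuple[Tuple[int, int], Optional[int]]] = field(default_factory=list)
    # J unmatched pixel samples (or, alternatively, their positions)
    unmatched: List[int] = field(default_factory=list)

    def is_valid(self) -> bool:
        return bool(self.matches) or bool(self.unmatched)  # I, J not both zero
```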
An embodiment of the decoding method of the present invention, a flow chart of which is shown in fig. 12, includes at least one of the following steps:
1) parsing and partially decoding the video bitstream data: entropy-decoding the input compressed data, i.e. the video bitstream containing the compressed data of the matching positions, matching lengths, unmatched samples (or their positions), and all other syntax elements, and analyzing the meaning of the data obtained by entropy decoding. The matching relation parameters such as the matching positions (i.e. displacement vectors), matching lengths, and unmatched samples (or their positions) obtained after parsing and partial decoding (such as transform decoding, prediction and compensation, i.e. the inverse of the residual operation, DPCM decoding, first-order and higher-order difference decoding, mapping decoding, run-length decoding, and index decoding) are output to the subsequent 2-dimensional matching decoding step; the depth and sequence number of the current decoding block obtained by parsing are output to the subsequent step of expanding the reference pixel sample set. In particular, the new reference region and its new boundary are determined from the information parsed from the video bitstream data, or from that information together with the results of analyzing and evaluating the characteristics of the current decoding block and adjacent decoding blocks. An example of an analysis-and-evaluation approach: based on or with reference to several decoding results of adjacent decoding blocks, performing a first round of partial pre-decoding of the current decoding block with the 2-dimensional matching decoding mode and other decoding modes, and evaluating the partial pre-decoding results;
2) the step of expanding the reference pixel sample set: expanding the 2-dimensional reference pixel sample set region according to the depth and sequence number of the current decoding block, or directly according to its shape and size, together with the information and results obtained in step 1), i.e. filling a certain range outside the boundary of the 2-dimensional reference pixel sample set region (including the new boundary and part or all of the original boundary) with pixel values obtained according to a predetermined rule, and combining the reference pixel sample set and the expanded region to form the extended reference pixel sample set. For the right boundary of the reference pixel sample set region, the minimum width of the expansion (in pixel samples) is the width of the current decoding block minus one; for the lower boundary, the minimum height of the expansion (in pixel samples) is the height of the current decoding block minus one. This is the minimum extension area required to avoid a cumbersome out-of-bounds handling mechanism for matching reference string blocks; the actual extension area may be larger. The extension filling is usually performed before step 3), but may also be performed again in step 3). The extended area may fully or partially cover the area of the current decoding block;
3) 2-dimensional matching decoding: performing the 2-dimensional matching decoding operation on the current decoding block using the 2-dimensional matching decoding mode and the extended reference pixel sample set; during decoding, the pixel values of the extended area may also be updated according to a predetermined rule. The input of the 2-dimensional matching decoding operation is the I (I ≥ 0) pairs (matching position, optional matching length) and the optional J (J ≥ 0) unmatched samples (or their positions) obtained by parsing and decoding the video bitstream data in step 1), where I and J cannot both be zero. The matching position represents from what position within the reference pixel sample set the matching reference sample is copied and pasted to the position of the matching current sample in the current decoding block; evidently the matching current sample is a copy of the matching reference sample, the two being numerically equal. Optionally, the 2-dimensional matching decoding mode performs 2-dimensional matching decoding in units of pixel sample strings of variable length (called matching current strings, whose position can be represented by a 2-dimensional coordinate or a linear address). The matching reference samples form a matching string block within the reference pixel sample set, called the matching reference string block, whose position can likewise be represented by a 2-dimensional coordinate or a linear address; in the 2-dimensional matching decoding mode, the matching position can therefore be represented by the difference between the 2-dimensional coordinates of the matching reference string block and of the matching current string block, or by the difference between their linear addresses, called the displacement vector. Optionally, since the length of the matching reference string (equal to the length of the matching current string) is variable, another variable called the matching length is needed to represent it. A matching current string block and its corresponding matching reference string block have the same 2-dimensional shape. Optionally, an unmatched sample is a pixel sample parsed and decoded directly from the video bitstream data and pasted to the currently decoded pixel sample position of the current decoding block; an unmatched sample is usually not present in the reference pixel sample set. Alternatively, if what is parsed and decoded from the video bitstream data is not the unmatched sample itself but its position, the position of the unmatched sample is output to the subsequent step 4) for calculating a dummy matched sample. The output of the 2-dimensional matching decoding operation is the matching current samples (numerically equal to the matching reference samples) plus, optionally, the unmatched samples (or their positions); the matching current samples and the unmatched samples (or their positions), if present, together form the complete 2-dimensional matching decoding output of the current decoding block (a sketch of this replay follows).
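A condensed sketch of how the parsed matching relation parameters might be replayed (the instruction list is a simplification of the real bitstream syntax, and all names are illustrative):

```python
# Rebuild the current block from (displacement vector, length) pairs and
# unmatched samples, visiting its sample positions in scan order.

def decode_2d_matching(recon, scan_positions, instructions):
    """`scan_positions` yields the block's sample positions in scan order;
    `instructions` is a list of ('match', (dvx, dvy), length) and
    ('unmatched', sample) entries in scan order."""
    pos = iter(scan_positions)
    for ins in instructions:
        if ins[0] == 'match':
            _, (dvx, dvy), length = ins
            for _ in range(length):
                x, y = next(pos)
                recon[y][x] = recon[y + dvy][x + dvx]   # copy = paste of reference
        else:
            _, sample = ins
            x, y = next(pos)
            recon[y][x] = sample                        # unmatched sample pasted directly
```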
The invention is applicable to encoding and decoding images or CUs in packed format, and equally applicable to images or CUs in component planar format. It is applicable to lossless 2-dimensional matching encoding and decoding, and equally applicable to lossy 2-dimensional matching encoding and decoding.
The drawings provided above are only schematic illustrations of the basic idea of the present invention, and the drawings only show the components directly related to the present invention rather than the number, shape and size of the components in actual implementation, and the type, number and proportion of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
Further implementation details and variants of the invention follow.
Example 1 expansion of reference pixel sample value set
The pixel value assigned by the expansion filling to a pixel outside the boundary of the reference pixel sample region, i.e. outside the predicted pixel sample array boundary, is a copy, in a predetermined manner, of the value of an adjacent pixel inside the boundary; for example, repetition in the horizontal direction, in the vertical direction, or in a direction at another angle.
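The horizontal and vertical repetitions named above can be combined in a single clamped lookup, usable as the `fill_rule` of the extension sketch given earlier (an illustrative helper, not from the patent):

```python
def repeat_fill(region, x, y):
    """Clamp (x, y) to the region so that samples right of the boundary repeat
    horizontally, samples below it repeat vertically, and the corner region
    repeats the corner sample."""
    h, w = len(region), len(region[0])
    return region[min(y, h - 1)][min(x, w - 1)]

region = [[10, 20], [30, 40]]
print(repeat_fill(region, 2, 0))   # 20: horizontal repetition of the top row
print(repeat_fill(region, 0, 2))   # 30: vertical repetition of the left column
print(repeat_fill(region, 2, 2))   # 40: corner sample
```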
Example 2 expansion of reference pixel sample value set
The pixel values filled outside the boundary of the reference pixel sample set region are natural extensions of a specific pattern formed by some pixels inside the boundary; for example, the natural extension of one or several straight lines inside the boundary, the extrapolation of one or several curves inside the boundary, or the natural extension of several differently colored areas inside the boundary that have distinct dividing lines.
Example 3 expansion of reference pixel sample value set
The pixel values filled outside the boundary of the reference pixel sample set region are natural extensions of the matching reference string blocks corresponding to the matching current string blocks adjacent to that boundary within the reference pixel sample set region. For example, in horizontal 2-dimensional matching, the three matching current strings in CU m+1 of fig. 8 are adjacent, from inside, to a vertical boundary; their corresponding matching reference strings may all be extended naturally to the right within the reference pixel sample set region, and these naturally extended pixels are copied and used as the filling pixel values.
Example 4 expansion of reference pixel sample value set
The pixel values filled outside the boundary of the reference pixel sample set region are pixel values of a single color; for example, a black pixel value, a gray pixel value, the color pixel value that appears most frequently in the frame preceding the current frame, or the color pixel value that appears most frequently at the boundary of the reference pixel sample set region.
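One of the rules above, filling with the value that occurs most often on the region boundary, can be sketched as follows (an illustrative helper of mine):

```python
from collections import Counter

def most_frequent_boundary_value(region):
    """Most common sample value on the region's outermost rows and columns
    (corner samples counted twice; immaterial for a sketch)."""
    border = region[0] + region[-1] \
           + [row[0] for row in region] + [row[-1] for row in region]
    return Counter(border).most_common(1)[0][0]

print(most_frequent_boundary_value([[5, 5, 9], [5, 1, 5], [5, 5, 5]]))  # 5
```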
Example 5 expansion of reference pixel sample value set
The reference pixel sample set region boundary is divided into several segments, and the pixel values filled outside each segment of the boundary are determined by a different predetermined rule for that segment.
Example 6 expansion of reference pixel sample value set
The 2-dimensional reference pixel sample set region, consisting of reconstructed pixel samples, may come into being as a step-wise transition from a region of not-yet-encoded or not-yet-decoded pixel samples. At the start of encoding or decoding the current frame, the region of the entire frame intended to store reconstructed pixel samples is in fact a region of not-yet-encoded or not-yet-decoded pixel samples; during encoding or decoding, each time a portion of reconstructed pixel samples is generated, a portion of that region is transformed into a portion of the reference pixel sample set region. Therefore, the step of expanding the reference pixel sample set need not be performed each time a current block is encoded or decoded; it may instead be performed each time a current CTU is encoded or decoded. At the start of encoding or decoding the current CTU, a certain range inside and around the current CTU (typically its right and lower parts) is filled with pixel values obtained according to a predetermined rule, so that when the blocks within the current CTU are encoded or decoded, the step of expanding the reference pixel sample set is not necessary for every block.
Example 7 expansion of reference pixel sample value set
The step of expanding the reference pixel sample set may be performed neither at the start of each current block nor at the start of each current CTU, but once at the start of encoding or decoding the current frame: the inside of the entire current frame and a certain range around it (to the right and below) are filled with pixel values obtained according to a predetermined rule.
Example 8 expansion of reference pixel sample value set
For different CTUs in the current frame, different predetermined rules may be used to obtain the pixel values that fill the extended region; likewise, different predetermined rules may be used for different blocks within the current frame or within the current CTU.
Example 9 expansion of reference pixel sample value set
The new reference region boundary is at least a reference region boundary generated in one or a combination of the following cases:
1) in the process of coding or decoding a coding block or a decoding block;
2) after one coding block or one decoding block is coded or decoded;
3) in encoding or decoding a slice;
4) after one slice is encoded or decoded;
5) in the process of encoding or decoding a group of encoding blocks or a group of decoding blocks;
6) after a group of coding blocks or a group of decoding blocks are coded or decoded;
7) in the process of encoding or decoding a set of CTUs;
8) after a group of CTUs has been encoded or decoded;
9) in the process of encoding or decoding one or more rows of CTUs;
10) after encoding or decoding of one or more rows of CTUs is completed;
11) in the process of encoding or decoding one or more columns of CTUs;
12) after one or more columns of CTUs have been encoded or decoded;
13) in the process of encoding or decoding a CTU array;
14) after encoding or decoding of one CTU array is completed;
15) in the process of encoding or decoding a frame of image;
16) after encoding or decoding of one frame of image is completed.
Example 10 expansion of reference pixel sample value set
The new reference region boundary is at least a reference region boundary generated in one of the following cases:
1) the reference pixels are partial reconstructed pixels of different degrees;
2) the reference pixel is a fully reconstructed pixel;
3) the reference pixel is a reconstructed pixel that is predicted but has not undergone other reconstruction steps;
4) the reference pixel is a reconstructed pixel that is copied but has not undergone other reconstruction steps;
5) the reference pixel is a reconstructed pixel that has not undergone a deblocking filtering and/or Sample Adaptive Offset (SAO) step;
6) the reference pixels are reconstructed pixels that have undergone a deblocking filtering and/or Sample Adaptive Offset (SAO) step.
Example 11 expansion of reference pixel sample value set
In encoding or decoding, the same region and its boundary in a frame of image are expanded at least twice. Examples of expanding at least twice:
example 1) a first expansion is performed when encoding or decoding at depth D = 1, and a second expansion is performed when encoding or decoding at depth D = 2;
example 2) a first expansion is performed when the reference pixel is a reconstructed pixel that has not undergone deblocking filtering and/or SAO, and a second expansion is performed when the reference pixel is a reconstructed pixel that has undergone deblocking filtering and/or SAO.
Example 12 expansion of reference pixel sample value set
One embodiment of the extension of the reference region and its boundaries is implemented using at least one or a combination of the following methods:
1) using an array that represents the extended region, and assigning the values of the pixel samples of the extended region to that array;
2) using a rule that determines the value of a pixel sample at a coordinate of the extended region from the coordinates of the pre-expansion region, the values of its pixel samples, and the coordinate of the extended region; this rule "extends" the reference region indirectly, i.e. without directly using an array that represents the extended region;
3) using a mapping that converts a coordinate of the extended region into a coordinate of the pre-expansion region, and then using the value of the pixel sample at that pre-expansion coordinate to determine the value of the pixel sample at the extended coordinate; this mapping likewise extends the reference region indirectly, i.e. without directly using an array that represents the extended region.
Method 3) is in fact a special case of method 2).
One example of method 2) and method 3):
the coordinates (x, y) of the reference area (the area before expansion) take values in the ranges 0 ≤ x < W and 0 ≤ y < H, and the value of the pixel sample at coordinate (x, y) is P(x, y); for the area outside the reference area boundary (the expanded area), the coordinates (x, y) satisfy x < 0 or x ≥ W or y < 0 or y ≥ H; the following bounded Clip operation

Clip(a, b, v) = a when v < a, b when v > b, and v otherwise

converts (maps) the coordinates (x, y) of the expanded area into coordinates (x0, y0) of the area before expansion, where:

x0 = Clip(0, W-1, x)
y0 = Clip(0, H-1, y)

Then, the value of the pixel sample at any coordinate, inside or outside the area before expansion, is obtained by the following calculation rule:

P(Clip(0, W-1, x), Clip(0, H-1, y)).
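The mapping can be realized without storing any extended samples at all; a minimal C sketch (the function names, 8-bit sample type, and row-major `stride` parameter are assumptions of the sketch):

```c
#include <stdint.h>

/* Bounded clip: returns a when v < a, b when v > b, otherwise v. */
static int clip(int a, int b, int v)
{
    return v < a ? a : (v > b ? b : v);
}

/* Read the pixel sample at (x, y), where (x, y) may lie anywhere in the
 * extended region; out-of-bounds coordinates map back to the nearest
 * sample of the W x H pre-expansion region, i.e. the value returned is
 * P(Clip(0, W-1, x), Clip(0, H-1, y)). */
uint8_t read_extended_sample(const uint8_t *P, int stride, int W, int H,
                             int x, int y)
{
    int x0 = clip(0, W - 1, x);
    int y0 = clip(0, H - 1, y);
    return P[y0 * stride + x0];
}
```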

Claims (10)

1. An image encoding method, characterized by comprising at least the following steps:
1) expanding the predicted pixel sample array: when a new predicted pixel sample is generated in encoding, i.e. a new predicted pixel sample array is generated such that a newly appearing and/or newly formed predicted pixel sample array boundary exists, pixel values are assigned, according to a predetermined rule, to a region within a certain range outside the newly appearing and/or newly formed boundary, and the predicted pixel sample array and the extended region together form an extended predicted pixel sample array;
2) continuing the encoding step: continuing to encode the current coding block using at least a 2-dimensional matching coding mode, also called predictive coding, and the extended predicted pixel sample array; during encoding, the extended predicted pixel sample array is updated in a predetermined manner; and the encoding result is written into the video bitstream.
2. An image decoding method, characterized by comprising at least the following steps:
1) video bitstream data parsing and entropy decoding: performing entropy decoding on an input video bitstream, and parsing the meaning of the various data obtained by the entropy decoding; determining a new predicted pixel sample array, and thus a newly appearing and/or newly formed predicted pixel sample array boundary, from the information obtained by parsing the video bitstream data, or from that information together with the result of at least partially decoding the current decoding block;
2) expanding the predicted pixel sample array: when a new predicted pixel sample is generated in decoding, i.e. a new predicted pixel sample array is generated such that a newly appearing and/or newly formed predicted pixel sample array boundary exists, the boundary is extended outward according to a predetermined rule: pixel values are assigned to a region within a certain range outside the newly appearing and/or newly formed boundary, and the predicted pixel sample array and the extended region together form an extended predicted pixel sample array;
3) continuing the decoding step: continuing to decode the current decoding block using at least a 2-dimensional matching decoding mode, also called predictive decoding, and the extended predicted pixel sample array; the extended predicted pixel sample array is also allowed to be updated in a predetermined manner during decoding.
3. The decoding method according to claim 2, wherein the predetermined rule extends the predicted pixel sample array beyond its boundary at least according to the depth and sequence number of the current block, or directly according to the shape and size of the current block.
4. The decoding method according to claim 2, wherein the newly occurring and/or newly formed predicted pixel sample array boundary is at least a predicted pixel sample array boundary generated in one or a combination of the following cases:
1) in the process of decoding a decoding block;
2) after decoding of one decoding block is completed;
3) in decoding a slice;
4) after decoding of one slice is completed;
5) in the process of decoding a group of decoding blocks;
6) after a group of decoding blocks is decoded;
7) in the process of decoding a group of CTUs;
8) after decoding a set of CTUs is completed;
9) in the process of decoding one or more rows of CTUs;
10) after decoding is completed for one or more rows of CTUs;
11) in the process of decoding one or more columns of CTUs;
12) after decoding of one or more columns of CTUs is completed;
13) in the process of decoding a CTU array;
14) after decoding of one CTU array is completed;
15) in the process of decoding a frame of image;
16) after decoding is completed for one frame of picture.
5. The decoding method according to claim 2, wherein the newly occurring and/or newly formed predicted pixel sample array boundary is at least a predicted pixel sample array boundary generated in one or a combination of the following cases:
1) the predicted pixels are partial reconstructed pixels of varying degrees;
2) the predicted pixel is a fully reconstructed pixel;
3) the predicted pixel is a reconstructed pixel which is predicted but not subjected to other reconstruction steps;
4) the predicted pixel is a reconstructed pixel that is replicated but has not undergone other reconstruction steps;
5) the predicted pixel is a reconstructed pixel that has not undergone a deblocking filtering and/or Sample Adaptive Offset (SAO) step;
6) the predicted pixels are reconstructed pixels that have undergone a deblocking filtering and/or Sample Adaptive Offset (SAO) step.
6. The decoding method according to claim 2, characterized in that:
one embodiment of the extension of the predicted pixel sample array and its boundary is implemented using at least one or a combination of the following methods:
1) using an array that represents the extended region, and assigning the values of the pixel samples of the extended region to that array;
2) using a rule that determines the value of a pixel sample at a coordinate of the extended region from the coordinates of the pre-expansion region, the values of its pixel samples, and the coordinate of the extended region, and extending the predicted pixel sample array with this rule;
3) using a mapping that converts a coordinate of the extended region into a coordinate of the pre-expansion region, and then using the value of the pixel sample at that pre-expansion coordinate to determine the value of the pixel sample at the extended coordinate.
7. The decoding method according to claim 6, wherein:
the coordinates (x, y) of the predicted pixel sample array, i.e. the area before expansion, take values in the ranges 0 ≤ x < W and 0 ≤ y < H, and the value of the pixel sample at coordinate (x, y) is P(x, y); for the area outside the boundary of the predicted pixel sample array, the coordinates (x, y) satisfy x < 0 or x ≥ W or y < 0 or y ≥ H; the following bounded Clip operation

Clip(a, b, v) = a when v < a, b when v > b, and v otherwise

converts, i.e. maps, the coordinates (x, y) of the expanded area into coordinates (x0, y0) of the area before expansion, where:

x0 = Clip(0, W-1, x)
y0 = Clip(0, H-1, y)

then the value of the pixel sample at any coordinate, inside or outside the area before expansion, is obtained by the following calculation rule:

P(Clip(0, W-1, x), Clip(0, H-1, y)).
8. the decoding method according to claim 2, characterized in that:
the pixel values assigned to the extended padding of the immediate neighbors outside the boundary of the array of predicted pixel samples are a copy of the pixel values of the immediate neighbors inside the boundary.
9. The decoding method according to claim 2, characterized in that:
different predetermined rules are allowed to be used for different decoding blocks to obtain the pixel values that fill the extended region.
10. The decoding method according to any one of claims 2 to 9, wherein:
the decoding block or the current block is a decoded region of the image, comprising at least one of the following: a largest coding unit LCU, a coding tree unit CTU, a coding unit CU, a sub-region of a CU, a sub-coding unit SubCU, a coding sub-block, a prediction sub-block, a prediction unit PU, a sub-region of a PU, a sub-prediction unit SubPU, and a macroblock.
CN201910946605.2A 2014-03-18 2015-03-18 Image coding or decoding method for expanding prediction pixel array Active CN110505488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910946605.2A CN110505488B (en) 2014-03-18 2015-03-18 Image coding or decoding method for expanding prediction pixel array

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201410099776 2014-03-18
CN201910946605.2A CN110505488B (en) 2014-03-18 2015-03-18 Image coding or decoding method for expanding prediction pixel array
CN201510118591.7A CN104935945B (en) 2014-03-18 2015-03-18 The image of extended reference pixel sample value collection encodes or coding/decoding method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201510118591.7A Division CN104935945B (en) 2014-03-18 2015-03-18 The image of extended reference pixel sample value collection encodes or coding/decoding method

Publications (2)

Publication Number Publication Date
CN110505488A CN110505488A (en) 2019-11-26
CN110505488B true CN110505488B (en) 2022-01-07

Family

ID=54122859

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910946605.2A Active CN110505488B (en) 2014-03-18 2015-03-18 Image coding or decoding method for expanding prediction pixel array
CN201510118591.7A Active CN104935945B (en) 2014-03-18 2015-03-18 The image of extended reference pixel sample value collection encodes or coding/decoding method

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201510118591.7A Active CN104935945B (en) 2014-03-18 2015-03-18 The image of extended reference pixel sample value collection encodes or coding/decoding method

Country Status (1)

Country Link
CN (2) CN110505488B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3975559A1 (en) * 2016-10-04 2022-03-30 B1 Institute of Image Technology, Inc. Image data encoding/decoding method and apparatus
IT201700024221A1 (en) * 2017-03-03 2018-09-03 Sisvel Tech S R L METHODS AND APPARATUSES FOR ENCODING AND DECODING SUPERPIXEL BORDERS
CN112055219B (en) * 2020-08-05 2021-08-31 浙江大华技术股份有限公司 String matching prediction method and device and computer readable storage medium
CN115119046B (en) * 2022-06-02 2024-04-16 绍兴市北大信息技术科创中心 Image coding and decoding method, device and system for reference pixel set

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1589027A (en) * 2004-07-29 2005-03-02 联合信源数字音视频技术(北京)有限公司 Image boundary pixel extending system and its realizing method
CN104244007A (en) * 2013-06-13 2014-12-24 上海天荷电子信息有限公司 Image compression method and device based on arbitrary shape matching
CN104378644A (en) * 2013-08-16 2015-02-25 上海天荷电子信息有限公司 Fixed-width variable-length pixel sample value string matching strengthened image compression method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060123939A (en) * 2005-05-30 2006-12-05 삼성전자주식회사 Method and apparatus for encoding and decoding video
CN101291436B (en) * 2008-06-18 2011-02-16 北京中星微电子有限公司 Video coding/decoding method and device thereof
EP2615832A1 (en) * 2012-01-13 2013-07-17 Thomson Licensing Method and device for encoding a block of an image and corresponding reconstructing method and device

Also Published As

Publication number Publication date
CN110505488A (en) 2019-11-26
CN104935945B (en) 2019-11-08
CN104935945A (en) 2015-09-23

Similar Documents

Publication Publication Date Title
CN111800640B (en) Method and device for encoding and decoding image by alternately changing direction and back-and-forth scanning string matching
JP6659586B2 (en) Image encoding / decoding method and apparatus
CN104378644B (en) Image compression method and device for fixed-width variable-length pixel sample string matching enhancement
CN105704491B (en) Image encoding method, decoding method, encoding device, and decoding device
CN105491376B (en) Image coding and decoding method and device
CN104754362B (en) Image compression method using fine-divided block matching
CN113812155B (en) Interaction between multiple inter-frame coding and decoding methods
CN110830803B (en) Image compression method combining block matching and string matching
WO2015078422A1 (en) Image encoding and decoding method and device
CN110505488B (en) Image coding or decoding method for expanding prediction pixel array
KR102532391B1 (en) Video encoding method and apparatus and video decoding method and apparatus
CN106254878B (en) Image encoding and decoding method and image processing equipment
CN104811731A (en) Multilayer sub-block matching image compression method
WO2016202189A1 (en) Image coding and decoding methods, image processing device, and computer storage medium
CN106303535B (en) Image compression method and device with reference pixels taken from different-degree reconstruction pixels
CN106303534B (en) Image compression method and device for multiple index string and pixel string fusion copy modes
CN104918050B (en) Use the image coding/decoding method for the reference pixel sample value collection that dynamic arrangement recombinates
US20200252646A1 (en) Method and Device for Image Coding and Method and Device for Image Decoding
WO2016197893A1 (en) Image encoding and decoding method, image processing device, and computer storage medium
TW202404370A (en) Decoding method, encoding method, decoder, and encoder
TW202147850A (en) Methods and systems for combined lossless and lossy coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant