CN108566551B - Image processing method and device - Google Patents


Info

Publication number
CN108566551B
CN108566551B
Authority
CN
China
Prior art keywords
boundary
block
image
pixel values
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810351046.6A
Other languages
Chinese (zh)
Other versions
CN108566551A (en
Inventor
陈柏钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shunjiu Electronic Technology Co ltd
Original Assignee
Shanghai Shunjiu Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shunjiu Electronic Technology Co ltd filed Critical Shanghai Shunjiu Electronic Technology Co ltd
Priority to CN201810351046.6A priority Critical patent/CN108566551B/en
Publication of CN108566551A publication Critical patent/CN108566551A/en
Application granted granted Critical
Publication of CN108566551B publication Critical patent/CN108566551B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the invention discloses an image processing method and device in the field of image processing, solving the problem that a globally uniform filtering mode in the prior art gives an unsatisfactory filtering effect on blocking artifacts of different strengths. The image processing method comprises: determining the image detail strength of a reference block crossing a first boundary, in the direction perpendicular to the first boundary, from the pixel values in the reference block; determining the image detail strength of a reference line in the direction perpendicular to the first boundary from the pixel values on the two sides of the first boundary within the reference line, the reference line being a line of pixel values in the reference block arranged perpendicular to the first boundary; and determining a filtering mode for the reference line from the image detail strength of the reference block and that of the reference line, so as to filter the pixel values in the reference line.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method and apparatus.
Background
Block-based transform coding is widely used in image compression coding. Because compression is performed block by block, discontinuous jumps appear at the boundaries of the image blocks in the decoded, reconstructed image, forming visible block boundaries known as the blocking effect (also called block noise). The blocking effect in the image decoded from the video stream leaves obvious defects in the reconstructed image and degrades its visual quality to the human eye.
In the prior art, blocking artifacts are removed either by adding a suitable deblocking filter at the decoding end (i.e., loop filtering) to improve image quality, or by deblock-filtering the image after decoding, that is, passing the decoder output through a filter. The filtering effect on post-decoding blocking artifacts is not ideal, however, because block noise of different strengths coexists in the decoded image while the existing methods filter in a globally uniform mode: strong block noise may be filtered insufficiently (under-filtering) and weak block noise may be filtered excessively (over-filtering), so the filtered image still shows block-noise defects.
Disclosure of Invention
The embodiments of the invention provide an image processing method and device that aim to solve the problem that a globally uniform filtering mode in the prior art yields an unsatisfactory filtering effect on blocking artifacts of different strengths.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
In a first aspect, an image processing method is provided, where a current frame image includes a plurality of decoding blocks distributed in an array, each decoding block comprising m rows × n columns of pixel values. The method includes: determining the image detail strength of a reference block crossing a first boundary, in the direction perpendicular to the first boundary, from the pixel values in the reference block, where the first boundary is the boundary between two adjacent rows or two adjacent columns of decoding blocks, the reference block is a sub-matrix of the pixel-value matrix formed by those two rows or columns of decoding blocks, the length a of the reference block along the extension direction of the first boundary satisfies 2 ≤ a ≤ n (or 2 ≤ a ≤ m), and the length b of the reference block perpendicular to the first boundary satisfies b ≥ 3; determining the image detail strength of a reference line in the direction perpendicular to the first boundary from the pixel values on the two sides of the first boundary within the reference line, where the reference line is a line of pixel values in the reference block arranged perpendicular to the first boundary; and determining a filtering mode for the reference line from the image detail strength of the reference block and that of the reference line, so as to filter the pixel values in the reference line.
In a second aspect, an image processing apparatus is provided, comprising: a first processing unit configured to determine the image detail strength of a reference block crossing a first boundary, in the direction perpendicular to the first boundary, from the pixel values in the reference block, where a current frame image includes a plurality of decoding blocks distributed in an array, each decoding block comprises m rows × n columns of pixel values, the first boundary is the boundary between two adjacent rows or two adjacent columns of decoding blocks, the reference block is a sub-matrix of the pixel-value matrix formed by those decoding blocks, the length a of the reference block along the extension direction of the first boundary satisfies 2 ≤ a ≤ n (or 2 ≤ a ≤ m), and the length b perpendicular to the first boundary satisfies b ≥ 3; a second processing unit configured to determine the image detail strength of a reference line in the direction perpendicular to the first boundary from the pixel values on the two sides of the first boundary within the reference line, the reference line being a line of pixel values in the reference block perpendicular to the first boundary; and a third processing unit configured to determine a filtering mode for the reference line from the image detail strength of the reference block and that of the reference line, so as to filter the pixel values in the reference line.
In the image processing method and device provided by the embodiments of the invention, the image detail strengths of a reference block and of a reference line crossing a boundary in the image are determined, a filtering mode for the reference line is chosen according to those strengths, and the pixel values in the reference line are filtered accordingly: regions of different image detail strength thus receive different filtering, rather than one global mode.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic diagram of dividing a decoded image according to an embodiment of the present invention;
fig. 2 is a schematic diagram of division in the horizontal direction based on fig. 1;
fig. 3 is a schematic diagram of division in the vertical direction based on fig. 1;
FIG. 4 is a diagram of a reference block provided in accordance with an embodiment of the present invention;
FIG. 5 is a block flow diagram of a method of performing image processing based on the results of reference block data analysis;
FIG. 6 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 7 is a schematic diagram showing a horizontal boundary between two adjacent rows of decoded blocks and a reference block crossing the horizontal boundary;
FIG. 8 is a schematic diagram of a reference block crossing a horizontal boundary based on FIG. 7;
FIG. 9 is a schematic diagram of a reference row in the reference block based on FIG. 8;
FIG. 10 is a flowchart of a method for processing horizontal boundary blocking artifacts according to an embodiment of the present invention;
FIG. 11 schematically illustrates a reference row crossing a horizontal boundary;
fig. 12 is a schematic diagram showing a vertical boundary between two adjacent columns of decoded blocks and a reference block crossing the vertical boundary;
FIG. 13 is a schematic diagram of a reference block crossing a vertical boundary based on FIG. 12;
FIG. 14 is a schematic diagram of a reference row in the reference block based on FIG. 13;
FIG. 15 is a schematic diagram of vertical boundary blockiness processing and intra-block data analysis provided by an embodiment of the present invention;
FIG. 16 is a flowchart of a vertical boundary blocking artifacts processing method according to an embodiment of the present invention;
FIG. 17 is a diagram illustrating an image detail level definition of a reference block according to an embodiment of the present invention;
FIG. 18 is a flowchart illustrating an overall method for removing an image blocking effect according to an embodiment of the present invention;
fig. 19 is a flowchart of a process of calculating the image-level block boundary strength according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For the convenience of clearly describing the technical solutions of the embodiments of the present invention, in the embodiments of the present invention, the words "first", "second", and the like are used to distinguish the same items or similar items with basically the same functions or actions, and those skilled in the art can understand that the words "first", "second", and the like do not limit the quantity and execution order.
To provide a better visual effect at the display end, and since it is uncertain whether the decoder has already removed block noise from the video stream, the displayed image needs to be deblocked at the display end according to the image content.
Embodiment 1: image processing method
To remove blocking artifacts in an image, an embodiment of the present invention provides an image processing method for deblocking the blocking artifacts of an image (both horizontal-boundary and vertical-boundary blocking artifacts). Embodiments of the present invention perform back-end processing (i.e., processing of the image after decoding), illustratively on a display terminal. Note that processing of the image in the encoding/decoding stage may be called front-end processing and processing after decoding back-end processing; the processing described herein is back-end processing.
Image segmentation
The decoded image is first divided. Specifically, while the input image is scanned in from left to right and top to bottom, the current frame image may be divided in the horizontal and vertical directions by the size of the decoded block, given the known decoded-block width bw and height bh and the vertical offset offset_v and horizontal offset offset_h of the decoded blocks in the image (for example, offset_v = 1 and offset_h = 1). The divided current frame image may include a plurality of decoding blocks distributed in an array, each containing m rows × n columns of pixel values; m may be called the height of the decoding block and n its width.
As shown in fig. 1, for example, assuming the current frame image contains 801 × 1601 pixel values, each decoding block contains 8 × 8 pixel values, offset_v = 1 and offset_h = 1, then after division the current frame image contains 20000 decoding blocks, with decoding-block width bw = 8 and height bh = 8. The position of each pixel value in the image can then be derived from the block width, block height and offsets. Note that the decoding-block height and width may change as the image is scaled and are not fixed at 8.
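This division can be sketched as follows; the parameter names bw, bh, offset_h, offset_v follow the text, but the helper itself is illustrative and not taken from the patent:

```python
def block_grid(width, height, bw=8, bh=8, offset_h=1, offset_v=1):
    """Divide a width x height image into the full decoded blocks that
    fit after the horizontal/vertical offsets; return the block counts
    and the left/top edge indices of every block."""
    cols = (width - offset_h) // bw    # full blocks across
    rows = (height - offset_v) // bh   # full blocks down
    left_edges = [offset_h + i * bw for i in range(cols)]
    top_edges = [offset_v + j * bh for j in range(rows)]
    return rows, cols, left_edges, top_edges
```

For instance, a 64 × 48 test image with 8 × 8 blocks and unit offsets yields 7 full blocks per row and 5 per column, with left edges at columns 1, 9, 17, ….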
Further, taking a decoding block in a frame image as an example, as shown in fig. 1, the area surrounded by the dotted line shows the upper edge, the lower edge, the left edge and the right edge of the decoding block, which respectively include the pixel values of the row at the top, the row at the bottom, the column at the left, and the column at the right of the decoding block. Further, horizontal and vertical boundaries, i.e., horizontal and vertical boundary lines (excluding pixel values), of the decoding block are also shown in fig. 1.
The horizontal and vertical divisions are described below with reference to fig. 2 and 3, respectively.
Fig. 2 shows the horizontal division: a row of decoded blocks obtained after division comprises, from left to right, Block0, Block1, Block2, …, Blockn, where blk_lft_cur is the column index of the left edge of the current decoded block, blk_lft_pre that of the previous decoded block, and blk_lft_nxt that of the next decoded block; offset_h is the horizontal offset of the decoded blocks in the image as detected by the block-boundary detection algorithm (for example, offset_h = 1), and bw is the decoded-block width (for example, bw = 8). For convenience of description, the first column of the image (counted from the left) is numbered 0.
Illustratively, referring to FIG. 2, if the current pixel lies within Block1 in the horizontal direction, the left edge of the current decoded block is blk_lft_cur and its right edge is blk_lft_nxt − 1. The left-edge index of decoding block n is blk_lft = offset_h + n × bw; for Block1, with offset_h = 1, bw = 8 and n = 1, this gives blk_lft_cur = 1 + 1 × 8 = 9, i.e., the left-edge index of Block1 is 9. Correspondingly, a range of horizontal-direction filtering (for example, filtering the three consecutive pixels on each side of the vertical boundary with a horizontal filter) may be applied on both sides of the vertical boundary of the decoded block to remove vertical-boundary blocking artifacts; the specific filtering process is detailed later.
Fig. 3 shows the vertical division: a column of decoded blocks obtained after division comprises, from top to bottom, Block0, Block1, Block2, …, Blockn, where blk_top_cur is the row index of the upper edge of the current decoded block, blk_top_pre that of the previous decoded block, and blk_top_nxt that of the next decoded block; offset_v is the vertical offset of the decoded blocks as detected by the block-boundary detection algorithm (for example, offset_v = 1), and bh is the decoded-block height (for example, bh = 8). For convenience of description, the first line of the image (counted from the top) is numbered 0.
For example, referring to fig. 3, if the current line lies within decoding Block1, the upper edge of that decoded block is blk_top_cur and its lower edge is blk_top_nxt − 1. The upper-edge index of decoding block n is blk_top = offset_v + n × bh; for example, if the 20th row is being processed (i.e., the current row is row 20 and lies in Block2), then with offset_v = 1, bh = 8 and n = 2, blk_top_cur = 1 + 2 × 8 = 17, i.e., the upper-edge index of the decoding block containing the current row is 17.
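The two edge-index formulas above can be combined in one small helper (a sketch; the variable names follow the text):

```python
def block_edges(n, bw=8, bh=8, offset_h=1, offset_v=1):
    """Left-edge column index of horizontal block n and upper-edge row
    index of vertical block n, per blk_lft = offset_h + n * bw and
    blk_top = offset_v + n * bh."""
    return offset_h + n * bw, offset_v + n * bh

blk_lft_cur, _ = block_edges(1)   # Block1 in fig. 2: left edge 9
_, blk_top_cur = block_edges(2)   # row 20 lies in Block2 (fig. 3): upper edge 17
```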
It should be noted that if the current row is the upper or lower edge of a decoded block (i.e., a row adjacent to a horizontal boundary), vertical-direction filtering (for example, filtering several pixel values on both sides of the horizontal boundary with a vertical filter) must be applied to the pixel values on both sides of the horizontal boundary to remove the horizontal-boundary blocking effect between decoded blocks; the specific filtering process is detailed below. Also note that, owing to the line-buffer resources actually available, 3 rows of data are used here as the available data resource. The number is not limited to 3; for uniformity of presentation, 3 is used in the examples below. With 3 rows of data, if the current row is row 17, the data used are the pixel values of rows 16 to 18, i.e., 3 rows in total; with 5 rows of data, the pixel values of rows 15 to 19 (5 rows in total) would be used. The embodiment of the present invention does not specifically limit this.
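The available row window around the current row can be expressed as follows (an illustrative helper, assuming the window is centered on the current row as in the examples above):

```python
def row_window(cur_row, rows_available=3):
    """Rows usable as data resources around cur_row, with the window
    centered on the current row (rows_available should be odd)."""
    half = rows_available // 2
    return list(range(cur_row - half, cur_row + half + 1))
```

With the current row at 17, a 3-row window gives rows 16-18 and a 5-row window gives rows 15-19.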
Performing reference block data analysis in advance
Fig. 4 is a schematic diagram of a reference block, which illustratively contains 3 rows × bw columns of pixel values, for example 3 × 8 pixel values. Intra-block data analysis must be performed on the reference block before the filtering process. Note that the intra-block data analysis of a reference block (reference-block data analysis for short) and the blocking-effect processing can proceed in parallel: for example, while the blocking-effect processing uses the analysis result of the current reference block, the intra-block data analysis of the next reference block is performed.
Specifically, for the horizontal boundary blocking artifact processing, the analysis result of the reference block data to be utilized may include:
avg_blk = ( Σ_{i,j} p(i, j) ) / (3 × bw), summed over all pixel values p(i, j) in the reference block

dif_up = ( Σ_j | p(0, j) − p(1, j) | ) / bw, where rows 0 and 1 are the upper two rows

dif_dn = ( Σ_j | p(1, j) − p(2, j) | ) / bw, where rows 1 and 2 are the lower two rows
where avg _ blk is the average of all pixel values within the reference block, dif _ up is the average difference in the vertical direction of the upper two rows of pixel values within the reference block, and dif _ dn is the average difference in the vertical direction of the lower two rows of pixel values within the reference block. These data will be used in the process of determining the image detail intensity level local _ bs _ h of the reference block in the horizontal boundary blocking effect processing, and the specific application will be described in detail below.
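A minimal computation of these three statistics for a 3-row reference block; since the patent's equation images are not reproduced here, the absolute-difference form of the averages is an assumption:

```python
def horizontal_stats(block):
    """block: 3 rows x bw columns of luminance values, top to bottom.
    Returns (avg_blk, dif_up, dif_dn); the absolute-difference form
    is an assumption about the unreproduced formulas."""
    bw = len(block[0])
    avg_blk = sum(sum(row) for row in block) / (3 * bw)
    dif_up = sum(abs(a - b) for a, b in zip(block[0], block[1])) / bw
    dif_dn = sum(abs(a - b) for a, b in zip(block[1], block[2])) / bw
    return avg_blk, dif_up, dif_dn
```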
For vertical boundary blockiness processing, the reference block data analysis results to be utilized may include:
avg_blk = ( Σ_{i,j} p(i, j) ) / (3 × bw), summed over all pixel values p(i, j) in the reference block

dif_hor = ( Σ_{i,j} | p(i, j+1) − p(i, j) | ) / (3 × (bw − 1))

var_hor = ( Σ_{i,j} | p(i, j+1) − 2·p(i, j) + p(i, j−1) | ) / (3 × (bw − 2))
where avg_blk is the average of all pixel values in the reference block, dif_hor is the average horizontal difference of the pixel values in each row of the reference block, and var_hor is the average, within each row, of the change between the horizontal differences of the pixel values before and after each position (a second-order horizontal difference). These data will be used in determining the image detail strength local_bs_v of the reference block in the vertical-boundary blocking-effect processing; the specific application is detailed below.
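A corresponding sketch for the vertical-boundary statistics. Here dif_hor is read as the mean absolute first difference along each row and var_hor as the mean absolute second difference; this is one plausible reading of the translated definitions, not a confirmed formula:

```python
def vertical_stats(block):
    """block: rows x bw columns of luminance values.  Returns
    (avg_blk, dif_hor, var_hor); the second-difference form of
    var_hor is an assumption."""
    rows, bw = len(block), len(block[0])
    avg_blk = sum(sum(r) for r in block) / (rows * bw)
    dif_hor = sum(abs(r[j + 1] - r[j])
                  for r in block for j in range(bw - 1)) / (rows * (bw - 1))
    var_hor = sum(abs(r[j + 1] - 2 * r[j] + r[j - 1])
                  for r in block for j in range(1, bw - 1)) / (rows * (bw - 2))
    return avg_blk, dif_hor, var_hor
```

A row that ramps linearly (0, 1, …, 7) has a constant first difference, so dif_hor is 1 while var_hor is 0: a smooth gradient is not mistaken for detail.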
It should be noted that the pixel values described herein are luminance values of pixels in the image — for example, the Y (luma) component in YUV or YCbCr; the embodiments of the present invention do not limit this.
Fig. 5 is a flow block diagram of an image-processing method based on the reference-block data analysis above. From the input image data, the data in a reference block are first analyzed in advance and the computed results are stored. If the currently scanned pixel lies in a reference block whose data analysis is complete, the stored data are retrieved to judge the degree of blocking at the block boundary and to decide which filter to apply (i.e., the horizontal-boundary and vertical-boundary blocking-effect processing are executed respectively).
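The store-then-filter flow of fig. 5 can be sketched as two passes; the function names are illustrative placeholders, and in a parallel implementation the analysis of the next block would overlap the filtering of the current one:

```python
def deblock_pipeline(blocks, analyze, apply_filter):
    """Analyze every reference block first, store the results, then
    filter each block using its stored analysis."""
    stored = [analyze(b) for b in blocks]              # pre-analysis pass
    return [apply_filter(b, s) for b, s in zip(blocks, stored)]
```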
Detailed implementation of image processing
Referring to fig. 6, an image processing method provided by an embodiment of the present invention may include:
S101: determining the image detail strength of a reference block crossing a first boundary, in the direction perpendicular to the first boundary, from the pixel values in the reference block, where the first boundary is the boundary between two adjacent rows or two adjacent columns of decoding blocks, the reference block is a sub-matrix of the pixel-value matrix formed by those decoding blocks, the length a of the reference block along the extension direction of the first boundary satisfies 2 ≤ a ≤ n (or 2 ≤ a ≤ m), and the length b of the reference block perpendicular to the first boundary satisfies b ≥ 3.
The image detail strength of the reference block characterizes how much the pixel values within it vary: the less the pixel values vary, the less image detail there is; the more they vary, the more image detail there is.
S102, determining the image detail intensity of the reference line in the direction perpendicular to the first boundary according to the pixel values on the two sides of the first boundary in the reference line, wherein the reference line is a line of pixel values in the reference block, and the line of pixel values is perpendicular to the first boundary.
S103, determining a filtering mode of the reference line according to the image detail intensity of the reference block and the image detail intensity of the reference line, so as to filter the pixel values in the reference line.
As an alternative, the reference block comprises at least one reference sub-block, each containing s rows × n columns of pixel values with 3 ≤ s < m, the edges of the reference sub-block in the column direction lying on the boundaries between two adjacent columns of decoding blocks. When the first boundary is the boundary between two adjacent rows of decoded blocks, the reference block comprises one reference sub-block; when the first boundary is the boundary between two adjacent columns of decoded blocks, the reference block comprises two consecutive reference sub-blocks arranged along the direction perpendicular to the first boundary.
Alternatively, s is an odd number, and in the case where the first boundary is a boundary between two adjacent columns of decoded blocks, the reference row is a row of pixel values located in the middle of the reference block.
For clarity and understanding, the embodiments of the present invention describe the horizontal boundary blocking process and the vertical boundary blocking process separately.
Horizontal boundary blockiness processing
In the case where the first boundary is a horizontal boundary, step S101 may include:
S101(h): determining, from the pixel values in the reference block crossing the horizontal boundary, the image detail strength local_bs_h of the reference block in the direction perpendicular to the horizontal boundary.
The horizontal boundary is the boundary between two adjacent rows of decoding blocks; the reference block is a sub-matrix of the pixel-value matrix formed by those two rows of decoding blocks; its length a along the extension direction of the horizontal boundary satisfies 2 ≤ a ≤ n, and its length b perpendicular to the horizontal boundary satisfies b ≥ 3.
As an alternative, the reference block comprises a reference sub-block comprising s rows x n columns of pixel values, where 3 ≦ s < m, and the edge of the reference sub-block in the column direction is located on the boundary between two adjacent columns of decoded blocks.
Referring to fig. 7, the horizontal boundary between two adjacent rows of decoded blocks is shown (bold solid line in fig. 7), together with a reference block crossing it (hatched area); since the reference block contains exactly one reference sub-block, the sub-block and the reference block coincide in the horizontal-boundary case. The size of the reference sub-block (i.e., the reference block) is set by s (3 ≤ s < m); s may be 3, 4, 5, 6 or 7, so the sub-block may contain 3 × n, 4 × n, 5 × n, 6 × n or 7 × n pixel values. Because the reference sub-block and the decoding block both contain n columns of pixel values, the column-direction edges of the sub-block coincide with the boundaries between two adjacent columns of decoding blocks. Illustratively, in fig. 7 and 8, m = 8, n = 8 and s = 3: each decoded block contains 8 × 8 pixel values and the reference sub-block (i.e., the reference block) contains 3 × 8 pixel values. Fig. 9 shows the reference lines in the reference block: its three rows are labeled p1 and p0 (above the horizontal boundary) and q0 (below it), and a reference line is the column of pixel values (p1, p0, q0) arranged perpendicular to the boundary — one of 8 such reference lines in the reference block. The horizontal-boundary blocking process is described below for a reference sub-block of 3 × 8 pixel values.
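Extracting the 8 reference lines from a 3 × 8 reference block straddling a horizontal boundary can be sketched as follows (labels p1, p0, q0 as in fig. 9; the helper is illustrative):

```python
def reference_lines(ref_block):
    """ref_block: 3 rows x 8 columns; rows p1 and p0 lie above the
    horizontal boundary and row q0 below it.  Each reference line is
    one column (p1, p0, q0), perpendicular to the boundary."""
    p1, p0, q0 = ref_block
    return [(p1[j], p0[j], q0[j]) for j in range(len(p0))]
```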
Referring to fig. 10, the horizontal-boundary blocking-effect processing method may include: input the image F(n); determine the image detail strength local_bs_h of the reference block (S201); meanwhile, adjust local_bs_h of the current frame F(n) according to the global blocking strength global_bs_h of the previous frame F(n−1), where global_bs_h is determined in step S204 from the statistics obtained in step S203; determine the image detail strength of the reference line from the relative magnitudes of the pixel values on the two sides of the horizontal boundary, and determine the filtering strength (i.e., whether filtering is needed) from the image detail strengths of the reference block and of the reference line (S202); filter the horizontal boundary; and output the image. These steps are detailed below.
I(h). Determining the image detail intensity degree local_bs_h of the reference block
In the case where the first boundary is a horizontal boundary, the step S101(h) of determining the degree of image detail of the reference block in the direction perpendicular to the horizontal boundary may include:
The image detail intensity degree local_bs_h of the reference block in the direction perpendicular to the horizontal boundary is determined according to the average difference value dif_blk of two consecutive rows of pixel values adjacent to the horizontal boundary in the reference sub-block, and the average value avg_blk of all pixel values in the reference sub-block.
As an alternative, this step may include: determining the image detail intensity level of the reference block corresponding to a first interval in which dif_blk is located, as the image detail intensity degree local_bs_h of the reference block, where the first interval is one of at least two intervals obtained by division at one or more nodes determined by avg_blk.
The image detail intensity level of the reference block can be divided into a plurality of levels as needed. Since only three rows of pixel values are used here, the image detail intensity level local_bs_h of the reference block is divided into two levels, 1 and 0. A value of 1 represents that the image details of the reference block are weak or few, and filtering processing may be required for such a region; a value of 0 represents that the image details of the reference block are strong or many, and filtering may be skipped for such a region so as to retain the image details. It should be noted that "few image details" and "many image details" are relative terms used for convenience of description; the degree of image detail can be judged according to the specific situation.
As an alternative, avg_blk may be used to determine a node a, which divides the axis into the following two intervals: (−∞, a) and [a, +∞). The image detail intensity level of the corresponding reference block is then determined according to the interval in which dif_blk is located. Specifically, if dif_blk is in the interval (−∞, a), the image details of the reference block are few, and the image detail intensity level local_bs_h takes 1; if dif_blk is in the interval [a, +∞), the image details of the reference block are many, and local_bs_h takes 0.
That is, local_bs_h may be determined according to the magnitude relationship between dif_blk and avg_blk. An exemplary procedure for determining local_bs_h from avg_blk and dif_blk is given below.
[Formula image in the original: pseudocode determining local_bs_h from dif_blk and the node derived from avg_blk]
Here dif_blk may be the average difference dif_up of the upper two rows of pixel values of the reference sub-block, or the average difference dif_dn of the lower two rows. Specifically, in the case where the reference sub-block includes 3 rows × bw pixel values, if the current row is the lower edge of the decoded block, let dif_blk = dif_up; if the current row is the upper edge of the decoded block, let dif_blk = dif_dn. The formulas for calculating avg_blk, dif_up and dif_dn are given below (see fig. 4):
[Formula images in the original: avg_blk is the average of all 3 × bw pixel values of the reference sub-block; dif_up and dif_dn are the average differences of its upper two rows and lower two rows of pixel values, respectively]
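The block-level statistics and the two-level grading described above can be sketched as follows. This is a minimal illustration, assuming the reference sub-block is given as 3 rows of bw pixel values each; the scale factor k used to derive the node a from avg_blk is a hypothetical choice, since the original expresses the exact rule only as a formula image.

```python
def block_stats(sub):
    """Statistics for a 3 x bw reference sub-block (rows p1, p0, q0).

    avg_blk: mean of all pixel values; dif_up / dif_dn: mean absolute
    difference of the upper / lower pair of rows, per the prose definitions."""
    bw = len(sub[0])
    avg_blk = sum(sum(row) for row in sub) / (3 * bw)
    dif_up = sum(abs(a - b) for a, b in zip(sub[0], sub[1])) / bw
    dif_dn = sum(abs(a - b) for a, b in zip(sub[1], sub[2])) / bw
    return avg_blk, dif_up, dif_dn


def local_bs_h(dif_blk, avg_blk, k=0.1):
    """Two-level grade: 1 = few image details (filtering candidate), 0 = many.

    The node a is derived from avg_blk; k is an assumed factor, not from the text."""
    a = k * avg_blk
    return 1 if dif_blk < a else 0
```

As stated above, dif_blk is taken as dif_up or dif_dn depending on whether the current row is the lower or upper edge of the decoded block.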
local_bs_h = 1 represents that the image details of the reference block are few; filtering processing may then be required for the region (whether filtering is actually performed is further judged in combination with the image detail intensity degree of the reference line). local_bs_h = 0 represents that the image details of the reference block are many, and filtering may be skipped for the region so as to retain the image details.
II(h). Determining the global blockiness strength global_bs_h
The difference values on both sides of the horizontal boundary are calculated while the image data is processed row by row. Referring to fig. 11, the differences between the pixel values on both sides of the horizontal boundary (i.e., p0 and p1, q0 and q1) are accumulated, and a block boundary strength value global_bs_h based on the full image is calculated; it is mainly used to influence the block-level boundary strength local_bs_h in the next frame image. The generation process of global_bs_h is described below.
The sum of all first differences pq0_static_ph, the sum of second differences p01_static_ph, and the sum of third differences q01_static_ph in the current frame image are counted, where the first difference abs(p0-q0) is the inter-block difference between two consecutive pixel values located on the two sides of the horizontal boundary and both adjacent to it, the second difference abs(p0-p1) is the intra-block difference between two consecutive pixel values located on one side of the horizontal boundary and adjacent to it, and the third difference abs(q0-q1) is the intra-block difference between two consecutive pixel values located on the other side of the horizontal boundary and adjacent to it. Here abs() denotes the absolute value.
The blocking effect strength global _ bs _ h of the current frame image is determined according to the first difference sum pq0_ static _ ph, the second difference sum p01_ static _ ph, and the third difference sum q01_ static _ ph.
As an optional solution, the step of determining the blocking effect strength of the current frame image may specifically include: determining the blocking strength of the current frame image according to (a) the magnitude relation of the first difference sum pq0_static_ph with respect to the maximum max(p01_static_ph, q01_static_ph) of the second and third difference sums, and (b) the magnitude relation of a fourth difference dif_ph with respect to a fifth threshold. The fourth difference dif_ph is the difference between the first difference sum pq0_static_ph and the intra-block average difference avg_static_ph, which is the average of the second and third difference sums; the fifth threshold is determined by avg_static_ph. Here max() denotes taking the maximum value.
The following exemplary process for determining the global blockiness strength global _ bs _ h is given.
As shown in fig. 11, p0 and p1 are two pixels of the reference sub-block on the upper side of the horizontal boundary, and q0 and q1 are two pixels of the reference sub-block on the lower side.
If the current line is the lower edge of the upper decoded block, the pixel difference in the upper reference sub-block and the inter-block difference of the upper and lower two reference sub-blocks are counted.
pq0_static_ph=pq0_static_ph+abs(p0-q0);
p01_static_ph=p01_static_ph+abs(p0-p1);
If the current row is the upper edge of the lower side decoding block, the pixel difference values within the lower side reference sub-block are counted.
q01_static_ph=q01_static_ph+abs(q0-q1);
Here, pq0_ static _ ph, p01_ static _ ph, and q01_ static _ ph are reinitialized to 0 at the start of each frame image.
At the end of image scanning, the sums of the difference values of the whole frame image with respect to the block boundaries have been obtained through statistics, and their magnitude relations are then judged to determine the global block boundary strength parameter global_bs_h of the current frame image.
Average of the intra-block differences of the two blocks: avg_static_ph = (p01_static_ph + q01_static_ph)/2
Difference between the inter-block difference and the intra-block average difference: dif_ph = pq0_static_ph - avg_static_ph
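The accumulation and the derived measures can be sketched as below — a simplified per-boundary helper, under the assumption that the four rows adjacent to one horizontal boundary are available as lists. The decision thresholds themselves appear only as formula images in the original and are not reproduced here.

```python
def accumulate_boundary_stats(p1_row, p0_row, q0_row, q1_row):
    """Sums of per-pixel differences across one horizontal boundary.

    p0_row / q0_row are the rows immediately above / below the boundary;
    p1_row / q1_row are the next rows further inside each block."""
    pq0 = sum(abs(a - b) for a, b in zip(p0_row, q0_row))  # inter-block
    p01 = sum(abs(a - b) for a, b in zip(p0_row, p1_row))  # intra-block, upper
    q01 = sum(abs(a - b) for a, b in zip(q0_row, q1_row))  # intra-block, lower
    return pq0, p01, q01


def boundary_measures(pq0_static_ph, p01_static_ph, q01_static_ph):
    """avg_static_ph and dif_ph exactly as defined in the text above."""
    avg_static_ph = (p01_static_ph + q01_static_ph) / 2
    dif_ph = pq0_static_ph - avg_static_ph
    return avg_static_ph, dif_ph
```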
[Formula images in the original: decision rules comparing pq0_static_ph with max(p01_static_ph, q01_static_ph) and dif_ph with the threshold derived from avg_static_ph, yielding pic_bs_tmp / pic_bs_h ∈ {0, 1, 2}]
Then the pic_bs_tmp values of the previous (bnr_pics − 1) frames are accumulated as sum_pic_bs_h:
[Formula image in the original: computation of sum_pic_bs_h and of the current-frame blocking effect strength pic_bs_h]
Here bnr_global_bs_h is a global variable representing the global block edge effect strength value of the previous frame, and pic_bs_h is the blocking effect strength value of the current frame image; the values 2, 1 and 0 of pic_bs_h represent strong, medium and weak blocking effect, respectively. After the image scanning is finished, the value of bnr_global_bs_h is updated by letting bnr_global_bs_h = pic_bs_h, for reference in the judgment of the next frame image.
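The frame-to-frame bookkeeping — keeping pic_bs values of the previous (bnr_pics − 1) frames and carrying bnr_global_bs_h forward — might be organized as in this sketch. The class name and the fixed-length history are assumptions, since the original shows the computation only as a formula image.

```python
from collections import deque

class GlobalBsTracker:
    """Carries the previous frame's blockiness grade into the next frame."""
    def __init__(self, bnr_pics=4):
        # pic_bs values of the previous (bnr_pics - 1) frames
        self.hist = deque(maxlen=bnr_pics - 1)
        self.bnr_global_bs_h = 0  # global grade seen by the current frame

    def end_of_frame(self, pic_bs_h):
        """Called when image scanning finishes; pic_bs_h is 2/1/0 (strong/medium/weak)."""
        sum_pic_bs_h = sum(self.hist)      # statistic over the previous frames
        self.hist.append(pic_bs_h)
        self.bnr_global_bs_h = pic_bs_h    # reference for the next frame's judgment
        return sum_pic_bs_h
```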
III(h). Adjusting local_bs_h by global_bs_h
Before determining the filtering mode filter _ mode _ h of the reference line according to the image detail intensity level local _ bs _ h of the reference block and the image detail intensity level of the reference line, the image processing method further comprises: and adjusting the image detail intensity degree local _ bs _ h of the reference block of the current frame image according to the blockiness intensity global _ bs _ h of the previous frame image.
As an alternative, the local _ bs _ h of the current frame image is adjusted according to the global _ bs _ h of the previous frame image. The process of adjusting local _ bs _ h according to global _ bs _ h is schematically given below:
if(global_bs_h==0)
local_bs_h=0;
global_bs_h = 0 indicates that the blocking effect strength of the previous frame image is weak or that there is no blocking effect; the image detail intensity level local_bs_h of the reference block of the current frame image is then degraded to 0 (indicating many image details), so that filtering may be skipped for the region in order to retain the image details.
IV(h). Determining the image detail intensity degree of the reference line
S102(h). The image detail intensity degree of the reference line in the direction perpendicular to the horizontal boundary is determined according to the pixel values on the two sides of the horizontal boundary in the reference line, where the reference line is a line of pixel values arranged perpendicular to the horizontal boundary in the reference block.
The step of determining the degree of detail of the image of the reference line may specifically include:
In the case where the horizontal boundary is a boundary between two adjacent rows of decoded blocks, the magnitude relation between the first inter-block difference and a first threshold, together with the magnitude relation between the first intra-block difference and a second threshold, is determined as the image detail intensity degree of the reference line. The first inter-block difference is the difference between the two pixel values in the reference line that are located on the two sides of the horizontal boundary and are both adjacent to it; the first intra-block difference is the difference between two consecutive pixel values in the reference line that are located on one side of the horizontal boundary and adjacent to it; and the first and second thresholds are two different values determined by avg_blk and dif_blk.
Fig. 11 schematically shows reference lines crossing a horizontal boundary. In the left picture, the current row is the lower edge of the upper decoded block and the corresponding reference line contains the pixel values p1, p0 and q0; in the right picture, the current row is the upper edge of the lower decoded block and the corresponding reference line contains the pixel values p0, q0 and q1. The image detail intensity degree of the reference line is determined according to the magnitude relations between the pixel values of the reference line; a specific determination process is given below.
First, a first threshold value alpha and a second threshold value beta are determined according to avg _ blk and dif _ blk:
alpha=max(0,(avg_blk*0.31-dif_blk))*8/bh;
beta=max(0,(avg_blk*0.08-dif_blk))*8/bh;
Next, the differences between the pixel values of the reference line are compared with the above thresholds to determine the image detail intensity degree of the reference line. Specifically, referring to fig. 11, in the case where the current row is the lower edge of the decoded block, if |p0-q0| < alpha && |p0-p1| < beta, the image details of the reference line are few; in the case where the current row is the upper edge of the decoded block, if |p0-q0| < alpha && |q0-q1| < beta, the image details of the reference line are likewise few, and filtering processing may be required for such a region. In all other cases the image details of the reference line are many, and filtering may be skipped so as to retain the image details.
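The line-level test can be sketched as follows. The constants 0.31 and 0.08 follow the text, but the exact parenthesisation of the threshold formulas is assumed (the printed expressions are missing closing parentheses), as are the function and parameter names.

```python
def h_line_has_few_details(p1, p0, q0, q1, avg_blk, dif_blk, bh, lower_edge):
    """True if the reference line across a horizontal boundary shows few details.

    lower_edge: True when the current row is the lower edge of the upper block
    (line p1, p0, q0); False when it is the upper edge of the lower block
    (line p0, q0, q1)."""
    alpha = max(0, avg_blk * 0.31 - dif_blk) * 8 / bh
    beta = max(0, avg_blk * 0.08 - dif_blk) * 8 / bh
    if lower_edge:
        return abs(p0 - q0) < alpha and abs(p0 - p1) < beta
    else:
        return abs(p0 - q0) < alpha and abs(q0 - q1) < beta
```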
V(h). Determining the filtering mode filter_mode_h
S103(h). The filtering mode filter_mode_h of the reference line is determined according to the image detail intensity degree local_bs_h of the reference block and the image detail intensity degree of the reference line. Since the horizontal boundary blocking effect processing here uses only 3 rows of pixel values, the vertical filter selection filter_mode_h has only two values: 0 (no filtering needed) and 1 (filtering needed). That is, the 3 rows of pixel values spanning the boundary in the reference sub-block are combined to decide whether filtering is required. A specific determination process is given below.
[Formula image in the original: pseudocode determining filter_mode_h from local_bs_h and the reference-line detail judgment]
If the current row is the upper or lower edge of a decoded block, and the image details of the reference block are few (local_bs_h = 1) and the image details of the reference line are few, then filter_mode_h = 1 and the image data needs to be filtered. If the image details of the reference block are many (local_bs_h = 0) or the image details of the reference line are many, then filter_mode_h = 0 and no filtering is performed on the image data.
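Combining the block-level and line-level judgments, the two-valued switch reduces to a one-line sketch:

```python
def filter_mode_h(local_bs_h, line_has_few_details):
    """Vertical-filter switch across a horizontal boundary: 1 = filter, 0 = skip."""
    return 1 if (local_bs_h == 1 and line_has_few_details) else 0
```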
VI(h). Performing filtering
According to the determined filtering mode filter_mode_h of the reference line, the pixel values in the reference line are filtered in the vertical direction to remove the horizontal boundary blocking effect.
Vertical boundary blockiness processing
In the case where the first boundary is a vertical boundary, the step S101 may include:
S101(v). The image detail intensity degree local_bs_v of the reference block in the direction perpendicular to the vertical boundary is determined according to the pixel values in the reference block crossing the vertical boundary.
The vertical boundary is the boundary between two adjacent columns of decoded blocks, and the reference block is a sub-matrix of the pixel value matrix formed by the two adjacent columns of decoded blocks. The length a of the reference block in the extending direction of the vertical boundary satisfies 2 ≤ a ≤ m, and the length b of the reference block in the direction perpendicular to the vertical boundary satisfies b ≥ 3.
As an alternative, the reference block comprises two reference sub-blocks, each containing s rows × n columns of pixel values, where 3 ≤ s < m, and the edge of each reference sub-block in the column direction is located on the boundary between two adjacent columns of decoded blocks.
Referring to fig. 12, a vertical boundary between two adjacent columns of decoded blocks is schematically shown (the bold solid line in fig. 12), together with a reference block crossing the vertical boundary (the hatched area); here the reference block includes two reference sub-blocks. The size of each reference sub-block is determined by the value of s (3 ≤ s < m); s may be 3, 4, 5, 6 or 7, so that a reference sub-block may contain 3 × n, 4 × n, 5 × n, 6 × n or 7 × n pixel values. Further, since the reference sub-block and the decoded block each contain n columns of pixel values, the edge of the reference sub-block in the column direction coincides with the boundary between two adjacent columns of decoded blocks. Illustratively, referring to fig. 12 and 13, m = 8, n = 8 and s = 3, so each decoded block contains 8 × 8 pixel values and the reference block contains two reference sub-blocks, each containing 3 × 8 pixel values. Fig. 14 shows a schematic diagram of a reference line in the reference block; as shown in fig. 14, the reference line consists of the pixel values p2, p1, p0, q0, q1 and q2, which form a row of pixel values arranged perpendicular to the vertical boundary. As can be seen from fig. 14, this reference line is the middle row of pixel values in the reference block. The vertical boundary blocking effect processing is described below taking as an example reference sub-blocks that include 3 × 8 pixel values.
FIG. 15 is a schematic diagram of vertical boundary blocking effect processing and intra-block data analysis. While the vertical boundary between the left reference sub-block and the current reference sub-block is processed, intra-block data analysis is performed concurrently for the right reference sub-block. Referring to fig. 16, the vertical boundary blocking effect processing method may include: inputting an image F(n); judging the block-level boundary strength according to the pre-calculated dif_hor and avg_blk of the left reference sub-block and the current reference sub-block to obtain local_bs_v (S301); adjusting local_bs_v of the current frame image F(n) according to the global blocking effect strength global_bs_v of the previous frame image F(n−1), where global_bs_v is determined in step S304 based on the statistic values obtained in step S303; determining the image detail intensity degree of the reference line according to the magnitude relation between the pixel values on the two sides of the vertical boundary, and determining the filtering strength according to the image detail intensity degree local_bs_v of the reference block and that of the reference line (S302); performing filtering processing on the vertical boundary; and outputting the image. These steps are described in detail below.
I(v). Determining the image detail intensity degree local_bs_v of the reference block
In the case where the first boundary is a vertical boundary, the step S101(v) of determining the degree of image detail of the reference block in the direction perpendicular to the vertical boundary may specifically include:
the degree of the detail of the image of the reference block in the direction perpendicular to the vertical boundary is determined based on the average difference value dif _ hor in the horizontal direction of the pixel values in each of the reference sub-blocks arranged in the direction perpendicular to the vertical boundary and the average value avg _ blk of all the pixel values in each of the reference sub-blocks.
Referring to fig. 14, the reference block includes two consecutive reference sub-blocks, which are a first reference sub-block and a second reference sub-block located at both sides of a vertical boundary.
The step of determining the image detail intensity degree of the reference block in the direction perpendicular to the vertical boundary may specifically include: determining the image detail intensity level of the reference block corresponding to a second interval in which dif_hor of the first reference sub-block is located and a third interval in which dif_hor of the second reference sub-block is located, as the image detail intensity degree of the reference block. Here the second interval is one of at least two intervals obtained by division at one or more nodes determined by avg_blk of the first reference sub-block, and the third interval is one of at least two intervals obtained by division at one or more nodes determined by avg_blk of the second reference sub-block.
The image detail intensity levels of the reference block can be divided into a plurality of levels as needed. Referring to fig. 17, five levels (0, 1, 2, 3, 4) are schematically shown: from 0 to 4 the image details of the reference blocks on both sides of the vertical boundary become fewer (weaker), so 0 indicates that the image details on both sides of the vertical boundary are many and 4 indicates that they are few. Accordingly, filtering processing may be required for a region with few image details (a weak-detail region), while filtering may be skipped for a region with many image details (a strong-detail region) so as to retain the image details.
As an alternative, in a similar manner to the horizontal boundary blocking effect processing, the avg_blk of the left reference sub-block may be used to determine a node b1, which divides the axis into two intervals, (−∞, b1) and [b1, +∞); likewise, the avg_blk of the current reference sub-block may be used to determine a node b2, dividing the axis into (−∞, b2) and [b2, +∞). The image detail intensity level of the reference block is then determined according to the interval in which dif_hor_lft of the left reference sub-block is located and the interval in which dif_hor_cur of the current reference sub-block is located.
Of course, the avg_blk of the left reference sub-block may also be used to determine two nodes b3 and b4, dividing the axis into three intervals, (−∞, b3), [b3, b4] and [b4, +∞); similarly, the avg_blk of the current reference sub-block may be used to determine two nodes b5 and b6, dividing the axis into (−∞, b5), [b5, b6] and [b6, +∞). The image detail intensity level of the reference block is again determined according to the interval in which dif_hor_lft is located and the interval in which dif_hor_cur is located.
The following exemplary procedure for determining local _ bs _ v is given.
Calculation process of local _ bs _ v
In the following variable names, dif_hor_lft represents the horizontal-direction difference value in the left reference sub-block, and dif_hor_cur represents the horizontal-direction difference value in the current reference sub-block.
[Formula images in the original: pseudocode mapping the intervals of dif_hor_lft and dif_hor_cur to the image detail intensity level local_bs_v of the reference block]
The formulas for calculating avg_blk and dif_hor (including dif_hor_lft and dif_hor_cur) in the case of a reference sub-block containing 3 rows × bw pixel values are given below (see fig. 4):
[Formula images in the original: avg_blk is the average of all 3 × bw pixel values of the reference sub-block; dif_hor is the average absolute difference between horizontally adjacent pixel values within the sub-block]
As described above, local_bs_v = 4 indicates that the image details of the reference block are few, and local_bs_v = 3, 2 and 1 indicate progressively more image details; accordingly, filtering processing may be required for regions with few image details. local_bs_v = 0 indicates that the image details of the reference block are many, and filtering may be skipped for such regions so as to retain the image details.
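A sketch of the block-level grading for the vertical case follows. The per-sub-block statistics follow the prose definitions, while the three-interval split (two nodes per side, scale factors k1, k2) and the summing of the two sides into a 0..4 grade are assumptions — the original gives the exact rule only as formula images.

```python
def sub_block_stats_v(sub):
    """avg_blk and the horizontal-direction mean absolute difference dif_hor
    for a 3 x bw reference sub-block, per the prose definitions."""
    bw = len(sub[0])
    avg_blk = sum(sum(r) for r in sub) / (3 * bw)
    dif_hor = sum(abs(r[i] - r[i + 1]) for r in sub
                  for i in range(bw - 1)) / (3 * (bw - 1))
    return avg_blk, dif_hor


def side_level(dif_hor, avg_blk, k1=0.05, k2=0.15):
    """0..2 per side: two nodes derived from avg_blk split dif_hor into three
    intervals (k1, k2 are assumed factors, not taken from the text)."""
    if dif_hor < k1 * avg_blk:
        return 2   # very few details on this side
    if dif_hor < k2 * avg_blk:
        return 1
    return 0


def local_bs_v(avg_lft, dif_lft, avg_cur, dif_cur):
    """Hypothetical 0..4 grade combining the left and current sub-blocks."""
    return side_level(dif_lft, avg_lft) + side_level(dif_cur, avg_cur)
```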
II(v). Determining the global blockiness strength global_bs_v
As in the process of determining the global blockiness strength global_bs_h, the differences on both sides of the vertical boundary are calculated while the image data is processed row by row. Referring to fig. 14, the differences between the pixel values on both sides of the vertical boundary (i.e., p0 and p1, q0 and q1) are accumulated, and a block boundary strength value global_bs_v based on the full image is calculated; it is mainly used to influence the block-level boundary strength local_bs_v in the next frame image. The generation process of global_bs_v is described below.
The first difference sum pq0_static_pv, the second difference sum p01_static_pv and the third difference sum q01_static_pv in the current frame image are counted, where the first difference abs(p0-q0) is the inter-block difference between two consecutive pixel values located on the two sides of the vertical boundary and both adjacent to it, the second difference abs(p0-p1) is the intra-block difference between two consecutive pixel values located on one side of the vertical boundary and adjacent to it, and the third difference abs(q0-q1) is the intra-block difference between two consecutive pixel values located on the other side of the vertical boundary and adjacent to it. Here abs() denotes the absolute value.
The blocking effect strength global_bs_v of the current frame image is determined according to the first difference sum pq0_static_pv, the second difference sum p01_static_pv and the third difference sum q01_static_pv.
Optionally, the step of determining the blocking effect strength of the current frame image may specifically include: determining the blocking strength of the current frame image according to (a) the magnitude relation of the first difference sum pq0_static_pv with respect to the maximum max(p01_static_pv, q01_static_pv) of the second and third difference sums, and (b) the magnitude relation of a fourth difference dif_pv with respect to a fifth threshold. The fourth difference dif_pv is the difference between the first difference sum pq0_static_pv and the intra-block average difference avg_static_pv, which is the average of the second and third difference sums; the fifth threshold is determined by avg_static_pv. Here max() denotes taking the maximum value.
The following exemplary process for determining the global blockiness strength global _ bs _ v is given.
For example, as shown in fig. 14, p0 and p1 are two pixels of the reference sub-block on the left side of the vertical boundary, and q0 and q1 are two pixels of the reference sub-block on the right side.
pq0_static_pv=pq0_static_pv+abs(p0-q0);
p01_static_pv=p01_static_pv+abs(p0-p1);
q01_static_pv=q01_static_pv+abs(q0-q1);
Here, pq0_ static _ pv, p01_ static _ pv and q01_ static _ pv will be reinitialized to 0 at the start of each frame image.
At the end of image scanning, the sum of the difference values of the whole frame image with respect to the block boundary is obtained through statistics, and then the magnitude relation of the difference values is judged to determine the global parameter global _ bs _ v of the current frame image with respect to the block boundary strength.
Average of the intra-block differences of the two blocks: avg_static_pv = (p01_static_pv + q01_static_pv)/2
Difference between the inter-block difference and the intra-block average difference: dif_pv = pq0_static_pv - avg_static_pv
[Formula images in the original: decision rules yielding pic_bs_tmp / pic_bs_v from pq0_static_pv, max(p01_static_pv, q01_static_pv), dif_pv and the threshold derived from avg_static_pv]
Then the pic_bs_tmp values of the previous (bnr_pics − 1) frames are accumulated as sum_pic_bs_v:
[Formula image in the original: computation of sum_pic_bs_v and of the current-frame blocking effect strength pic_bs_v]
Here bnr_global_bs_v is a global variable representing the global block edge effect strength value of the previous frame, and pic_bs_v is the blocking effect strength value of the current frame image; the values 2, 1 and 0 of pic_bs_v represent strong, medium and weak blocking effect, respectively. After the image scanning is finished, the value of bnr_global_bs_v is updated by letting bnr_global_bs_v = pic_bs_v, for reference when judging the blocking effect strength of the next frame image.
III(v). Adjusting local_bs_v by global_bs_v
The image detail degree local _ bs _ v of the reference block of the current frame image is adjusted according to the global blockiness intensity value bnr _ global _ bs _ v (i.e., global _ bs _ v) calculated above, and schematically, a specific adjustment process is given below.
[Formula image in the original: pseudocode adjusting local_bs_v according to bnr_global_bs_v]
Specifically, if the blocking effect of the previous frame is weak, local_bs_v is set to 0; if the blocking effect is strong, local_bs_v is used directly; otherwise, the value of local_bs_v is degraded by 1.
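This adjustment maps directly onto the 2/1/0 grading of the previous frame; a sketch follows, where the mapping of "weak / strong / otherwise" onto global_bs_v = 0 / 2 / 1 follows the value conventions stated in section II(v).

```python
def adjust_local_bs_v(local_bs_v, global_bs_v):
    """global_bs_v of the previous frame: 2 = strong, 1 = medium, 0 = weak."""
    if global_bs_v == 0:
        return 0                      # weak blockiness: suppress filtering
    if global_bs_v == 2:
        return local_bs_v             # strong blockiness: use the grade as-is
    return max(0, local_bs_v - 1)     # otherwise: degrade by one level
```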
IV(v). Determining the image detail intensity degree of the reference line
S102(v). The image detail intensity degree of the reference line in the direction perpendicular to the vertical boundary is determined according to the pixel values on the two sides of the vertical boundary in the reference line, where the reference line is a line of pixel values arranged perpendicular to the vertical boundary in the reference block.
The step of determining the image detail intensity degree of the reference line may specifically include: in the case where the first boundary is the boundary between two adjacent columns of decoded blocks, determining the magnitude relation between the second inter-block difference and a third threshold, together with the magnitude relation between the second intra-block difference and a fourth threshold, as the image detail intensity degree of the reference line. The second inter-block difference is the difference between the two pixel values in the reference line that are located on the two sides of the first boundary and are both adjacent to it; the second intra-block difference is the difference between one pixel value in the reference line located on one side of the first boundary and adjacent to it and at least one pixel value located on the same side and arranged consecutively with it; and the third and fourth thresholds are two different values determined by avg_blk and dif_hor.
Fig. 14 schematically shows a reference line crossing a vertical boundary; the reference line comprises the pixel values p2, p1, p0, q0, q1 and q2. The image detail intensity degree of the reference line is determined according to the magnitude relations between these pixel values. It should be noted that the reference line is described here as including the above 6 pixel values by way of example; the reference line may also include other numbers of pixel values, which is not specifically limited in the embodiment of the present invention. A specific determination process is given below.
First, illustratively, based on the block average value (avg _ blk _ lft, avg _ blk _ cur) and the average difference value (var _ hor _ lft, var _ hor _ cur) of the current reference sub-block and the left reference sub-block, the third threshold value alpha and the fourth threshold value beta are calculated by the following calculation formulas:
alpha=max(0,(avg_blk_lft+avg_blk_cur)*0.16-(var_hor_lft+var_hor_cur)/2)*8/bw;
beta=max(0,(avg_blk_lft+avg_blk_cur)*0.04-(var_hor_lft+var_hor_cur)/2)*8/bw;
wherein var_hor_lft = dif_hor_lft/(bw-1), and var_hor_cur is obtained analogously. It should be noted that bw here is the actual size of the detected coding block in the image; after the image has been enlarged, bw may be greater than 8.
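The threshold computation above can be sketched in Python as a hypothetical helper; the grouping of the `/2` term in the beta expression is assumed to parallel the alpha expression, and the function name is illustrative:

```python
def deblock_thresholds(avg_blk_lft, avg_blk_cur, var_hor_lft, var_hor_cur, bw):
    """Compute the third threshold (alpha) and fourth threshold (beta)
    from the sub-block averages and horizontal average differences.
    bw is the actual size of the detected coding block in the image."""
    avg_sum = avg_blk_lft + avg_blk_cur
    var_sum = var_hor_lft + var_hor_cur
    alpha = max(0, avg_sum * 0.16 - var_sum / 2) * 8 / bw
    beta = max(0, avg_sum * 0.04 - var_sum / 2) * 8 / bw  # grouping assumed
    return alpha, beta
```

For bright, smooth sub-blocks (large averages, small differences) both thresholds grow, permitting filtering; textured sub-blocks drive them toward 0.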
Next, the differences between the pixel values of the reference row are compared with the above thresholds to determine the degree of image detail of the reference row. Specifically, if |p0-q0| < (alpha/2)+2 && |p1-p0| < beta && |q1-q0| < beta && |p2-p0| < beta && |q2-q0| < beta, the reference row contains few image details; likewise, if |p0-q0| < alpha && |p1-p0| < beta && |q1-q0| < beta && |p2-p0| < beta && |q2-q0| < beta, the reference row contains few image details, and accordingly filtering may be required for such low-detail regions. In all other cases the reference row contains many image details, and filtering may be skipped for such regions so as to preserve the detail.
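The row-level test just described can be summarized in a small Python sketch. Treating the two clauses as distinguishing stronger and weaker filtering candidates (returning 2 and 1 respectively) is an assumption; the text labels both simply as "few details":

```python
def row_detail_level(p2, p1, p0, q0, q1, q2, alpha, beta):
    """Classify a reference row crossing a vertical boundary.
    Returns 2 if the stricter cross-boundary condition holds (fewest
    details), 1 if only the looser condition holds, and 0 otherwise
    (many details, so filtering is skipped to preserve them)."""
    inner_flat = (abs(p1 - p0) < beta and abs(q1 - q0) < beta and
                  abs(p2 - p0) < beta and abs(q2 - q0) < beta)
    if not inner_flat:
        return 0
    if abs(p0 - q0) < alpha / 2 + 2:
        return 2
    if abs(p0 - q0) < alpha:
        return 1
    return 0
```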
V (v), determining the filtering mode filter_mode_v
S103(v): when the boundary between the current reference sub-block and the left reference sub-block is not a real object boundary (i.e., flag_real_edg is equal to 0), the filtering mode filter_mode_v is determined from the degree of image detail of the reference block, namely local_bs_v, together with the degree of image detail of the reference row.
Illustratively, the horizontal filtering mode (filter_mode_v) may be defined with 4 levels: 3 indicates that the strongest filtering is needed, 2 indicates medium-strength filtering, 1 indicates weak filtering, and 0 indicates no filtering. The specific filter to be used may be defined according to the situation; the embodiments of the present invention are not particularly limited in this respect.
Schematically, the process of determining the filtering mode is given below.
[Figure BDA0001633293210000231: pseudocode for determining filter_mode_v]
It should be noted that, if an object boundary in the image content lies between the current reference sub-block and the left reference sub-block, that boundary does not need to be filtered, and filter_mode_v is set to 0. The determination process for flag_real_edg is given schematically below.
flag_real_edg judgment process
[Figure BDA0001633293210000241: pseudocode for determining flag_real_edg]
Here thd_real_edg is the threshold for deciding whether the boundary is an object boundary in the image. flag_real_edg = 1 indicates that the boundary is an object boundary in the image; flag_real_edg = 0 indicates that it is not. That is, if the difference between the average values of the current reference sub-block and the left reference sub-block, or the difference between the two adjacent pixels across the boundary, is larger than the threshold, the boundary is determined to be an object boundary in the image.
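The flag_real_edg rule stated above — declare a real object boundary when either the sub-block averages or the two boundary-straddling pixels differ by more than thd_real_edg — can be sketched as follows (function name and argument order are illustrative):

```python
def is_real_edge(avg_blk_lft, avg_blk_cur, p0, q0, thd_real_edg):
    """flag_real_edg: the boundary is treated as a real object boundary
    (and left unfiltered, filter_mode_v = 0) when either the sub-block
    averages or the two pixels adjacent to the boundary (p0 on the left
    side, q0 on the right) differ by more than the threshold."""
    return (abs(avg_blk_lft - avg_blk_cur) > thd_real_edg or
            abs(p0 - q0) > thd_real_edg)
```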
VI (v), performing filtering
According to the determined filtering mode filter_mode_v of the reference row, the pixel values in the reference row are filtered in the horizontal direction to remove vertical-boundary blocking artifacts.
General flow chart of image processing method
Fig. 18 shows an overall flowchart of an image processing method of removing an image block effect. Referring to fig. 18, an image processing method according to an embodiment of the present invention may include:
S401, initializing the width and height of a decoding block and an offset based on a register.
S402, performing intra-block data analysis on the right reference sub-block (i.e., the reference sub-block subsequent to the current reference sub-block) in advance to obtain data values that can be used when determining the degree of image detail of the reference block.
S403, starting the line-by-line scanning of the vertical boundaries of each block.
S404a, it is determined whether the boundary is a vertical boundary, and if the boundary is a vertical boundary, S405a is executed.
S405a, adjusting the block-level image detail strength BS based on the image-level blocking effect strength BS.
S406a, determining a filtering mode based on the block-level image detail degree BS and the image detail degree BS of the reference line.
S407a, a filtering process on the vertical boundary is performed.
S408a, obtaining the horizontal filtering result.
Accordingly, the processing steps for a horizontal boundary are similar to the vertical-boundary processing steps S404a-S408a and are not described again here.
S409, mixing the horizontal filtering result with the vertical filtering result.
S410, judging whether the frame is finished; if so, executing S411, otherwise executing S401.
S411, updating the image-level statistical data. Then S412 is performed.
S412, obtaining the image-level blocking effect strength BS based on the image statistical data of the previous frame. Then S405a or S405b is performed, and the block-level image detail strength BS may be adjusted based on the image-level blocking effect strength BS.
Image-level block boundary strength calculation process
Fig. 19 shows a flowchart of the calculation process of the image-level block boundary strength. Referring to fig. 19, the process of calculating the image-level block boundary strength may include:
S501, for horizontal boundaries, resetting pq0_statistics_ph, p01_statistics_ph and q01_statistics_ph to 0; for vertical boundaries, resetting pq0_statistics_pv, p01_statistics_pv and q01_statistics_pv to 0.
For the horizontal boundary:
S502a, scanning each horizontal boundary in the current reference sub-block.
S503a, judging whether flag _ real _ edg is equal to 0, if yes, executing S504a, otherwise executing S505.
S504a, accumulating pq0_statistics_ph, p01_statistics_ph and q01_statistics_ph. Then, S505 is executed.
For a vertical boundary:
S502b, scanning each vertical boundary between the left reference sub-block and the current reference sub-block.
S503b, judging whether flag _ real _ edg is equal to 0, if yes, executing S504b, otherwise executing S505.
S504b, accumulating pq0_statistics_pv, p01_statistics_pv and q01_statistics_pv. Then, S505 is executed.
S505, judging whether the frame is finished or not, and if the frame is finished, executing S506; otherwise, S502a and S502b are executed.
S506, analyzing the image-level blocking effect strength BS for the next frame image.
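The accumulation loop of S502-S504 can be sketched in Python. The tuple layout and dictionary keys below are illustrative; only boundaries with flag_real_edg == 0 contribute, as in S503, and the three sums correspond to the cross-boundary and in-block differences defined later for the blocking-strength statistics:

```python
def accumulate_boundary_stats(rows, stats):
    """Accumulate the three per-frame sums for vertical boundaries
    (pq0_statistics_pv, p01_statistics_pv, q01_statistics_pv).
    Each element of rows is a tuple (p1, p0, q0, q1, flag_real_edg),
    where p0/q0 straddle the boundary; boundaries flagged as real
    object edges (flag_real_edg == 1) are skipped."""
    for p1, p0, q0, q1, flag_real_edg in rows:
        if flag_real_edg == 0:
            stats["pq0"] += abs(p0 - q0)  # cross-boundary difference
            stats["p01"] += abs(p1 - p0)  # in-block difference, left side
            stats["q01"] += abs(q1 - q0)  # in-block difference, right side
    return stats
```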
The image processing method provided by the embodiments of the present invention determines the degree of image detail of the reference block and of the reference row crossing a boundary in the image, determines the filtering mode of the reference row from those two degrees, and then filters the pixel values in the reference row; that is, different degrees of image detail lead to different filtering modes.
It should be noted that, for simplicity, the above method embodiments are described as a series of action combinations, but those skilled in the art will understand that the present invention is not limited by the described order of actions, that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present invention.
Example two: image processing apparatus
Accordingly, an embodiment of the present invention provides an image processing apparatus, which may include: the device comprises a first processing unit, a second processing unit and a third processing unit.
The first processing unit may be configured to determine, according to pixel values in a reference block crossing a first boundary, a degree of image detail of the reference block in a direction perpendicular to the first boundary, where the current frame image includes a plurality of decoding blocks distributed in an array, each decoding block includes m rows × n columns of pixel values, the first boundary is a boundary between two adjacent rows or two columns of decoding blocks, the reference block is a sub-matrix of a pixel value matrix constituting the two adjacent rows or two columns of decoding blocks, a length a of the reference block in an extending direction of the first boundary satisfies 2 ≦ a ≦ n or 2 ≦ a ≦ m, and a length b of the reference block in the direction perpendicular to the first boundary satisfies b ≧ 3.
The second processing unit may be configured to determine a degree of image detail of the reference line in a direction perpendicular to the first boundary according to pixel values of the reference line located on both sides of the first boundary, where the reference line is a line of pixel values of the reference block that is arranged perpendicular to the first boundary.
The third processing unit may be configured to determine a filtering mode of the reference line according to the degree of image detail strength of the reference block and the degree of image detail strength of the reference line, so as to filter the pixel values in the reference line.
As an alternative, the reference block may include at least one reference sub-block, each reference sub-block including s rows × n columns of pixel values, where 3 ≤ s < m, and the edge of the reference sub-block in the column direction lies on a boundary between two adjacent columns of the decoded blocks. When the first boundary is the boundary between two adjacent rows of decoded blocks, the reference block comprises one reference sub-block; when the first boundary is a boundary between two adjacent columns of decoded blocks, the reference block includes two consecutive reference sub-blocks arranged along a direction perpendicular to the first boundary.
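As a rough illustration of this geometry for a vertical boundary, the sketch below extracts the two reference sub-blocks on either side of a boundary column from a frame stored as a 2-D list; taking each sub-block bw columns wide is a simplification for illustration (the text defines sub-blocks as s rows × n columns), and the function name is hypothetical:

```python
def reference_block_vertical(frame, row0, col_boundary, s, bw):
    """Assemble the reference block for a vertical boundary: two
    consecutive s-row sub-blocks, the left one ending at the boundary
    column and the current (right) one starting there, each bw columns
    wide. frame is a 2-D list of pixel values."""
    left = [row[col_boundary - bw:col_boundary] for row in frame[row0:row0 + s]]
    right = [row[col_boundary:col_boundary + bw] for row in frame[row0:row0 + s]]
    return left, right
```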
Alternatively, s may be an odd number, and in the case where the first boundary is a boundary between two adjacent columns of decoded blocks, the reference row is a row of pixel values located in the middle of the reference block.
As an alternative, for a horizontal boundary, the first processing unit may be configured to: determine the degree of image detail of the reference block in the direction perpendicular to the first boundary according to the average difference dif_blk, in the vertical direction, of the two consecutive rows of pixel values on the side of the reference sub-block adjacent to the first boundary, and the average avg_blk of all pixel values in the reference sub-block.
As an alternative, for a horizontal boundary, the first processing unit may be further configured to: determine, as the image detail strength level of the reference block, the level corresponding to the first interval in which dif_blk lies, wherein the first interval is one of at least two intervals obtained by division at at least one node determined by avg_blk.
As an alternative, for a horizontal boundary, the second processing unit may be configured to: when the first boundary is the boundary between two adjacent rows of decoding blocks, determine, as the degree of image detail of the reference row, the magnitude relationship between the first inter-block difference and a first threshold together with the magnitude relationship between the first intra-block difference and a second threshold; the first inter-block difference is the difference between the two pixel values in the reference row that are located on opposite sides of the first boundary and are both adjacent to it, the first intra-block difference is the difference between two consecutive pixel values in the reference row that are located on one side of the first boundary and adjacent to it, and the first threshold and the second threshold are two different values determined by avg_blk and dif_blk.
As an alternative, for a vertical boundary, the first processing unit may be configured to: determine the degree of image detail of the reference block in the direction perpendicular to the first boundary based on the average difference dif_hor, in the horizontal direction, of the pixel values in each reference sub-block arranged perpendicular to the first boundary, and the average avg_blk of all pixel values in each reference sub-block.
As an alternative, for a vertical boundary, the two consecutive reference sub-blocks include a first reference sub-block and a second reference sub-block located on the two sides of the first boundary. The first processing unit may be further configured to: determine, as the image detail strength level of the reference block, the level corresponding to the second interval in which dif_hor of the first reference sub-block lies and the third interval in which dif_hor of the second reference sub-block lies, wherein the second interval is one of at least two intervals, the third interval is one of at least two intervals, the at least two intervals corresponding to the second interval are obtained by division at at least one node determined by avg_blk of the first reference sub-block, and the at least two intervals corresponding to the third interval are obtained by division at at least one node determined by avg_blk of the second reference sub-block.
As an alternative, for a vertical boundary, the second processing unit may be configured to: when the first boundary is the boundary between two adjacent columns of decoding blocks, determine, as the degree of image detail of the reference row, the magnitude relationship between the second inter-block difference and a third threshold together with the magnitude relationship between the second intra-block difference and a fourth threshold; the second inter-block difference is the difference between the two pixel values in the reference row that are located on opposite sides of the first boundary and are both adjacent to it, the second intra-block difference is the difference between a pixel value in the reference row that is located on one side of the first boundary and adjacent to it and at least one pixel value on the same side arranged consecutively with that pixel value, and the third threshold and the fourth threshold are two different values determined by avg_blk and dif_hor.
As an optional solution, the image processing apparatus further includes a fourth processing unit, and the fourth processing unit may be configured to: count the sum of all first differences, the sum of all second differences and the sum of all third differences in the current frame image, wherein the first differences are the differences between two consecutive pixel values respectively located on the two sides of a first boundary and adjacent to the first boundary, the second differences are the differences between two consecutive pixel values located on one side of the first boundary and adjacent to the first boundary, and the third differences are the differences between two consecutive pixel values located on the other side of the first boundary and adjacent to the first boundary; and determine the blocking effect strength of the current frame image according to the sum of the first differences, the sum of the second differences and the sum of the third differences.
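The exact mapping from the three sums to the blocking effect strength is not given in the text; as a purely hypothetical illustration, blockiness can be expressed as the cross-boundary sum relative to the two in-block sums, since block artifacts show up as boundary differences that are large compared with the differences inside the blocks:

```python
def frame_blockiness(pq0_sum, p01_sum, q01_sum, eps=1.0):
    """Hypothetical image-level blocking-effect measure: the sum of
    cross-boundary differences (pq0) divided by the sums of in-block
    differences (p01, q01) on either side. eps avoids division by
    zero for perfectly flat frames. The actual BS mapping used by the
    method is not specified in the text."""
    return pq0_sum / (p01_sum + q01_sum + eps)
```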
It should be noted that descriptions of steps related to the above method embodiments may be cited in each functional module in the corresponding product embodiment, and are not described herein again. The image processing device provided by the embodiment of the invention can achieve the same effect as the image processing method.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed control device may be implemented in other manners. For example, the above-described device (or system) embodiments are merely illustrative, and for example, the division of the units (or modules) is only one logical function division, and there may be other divisions when the actual implementation is performed, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units (or modules) described as separate parts may or may not be physically separate. For example, the functional units in the various embodiments of the present invention may be integrated in one physical unit or distributed among different physical units; two or more units may be integrated in one physical unit, and one functional unit may also be implemented by two or more physical units in cooperation. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. An image processing method, wherein a current frame image comprises a plurality of decoding blocks distributed in an array, each decoding block comprises m rows by n columns of pixel values, the image processing method comprises:
determining the degree of image detail intensity of a reference block along the direction perpendicular to a first boundary according to pixel values in the reference block crossing the first boundary, wherein the first boundary is the boundary between two adjacent rows or two columns of decoding blocks, the reference block is a sub-matrix of the pixel value matrix forming the two adjacent rows or two columns of decoding blocks, the length a of the reference block along the extending direction of the first boundary satisfies 2 ≤ a ≤ n or 2 ≤ a ≤ m, and the length b of the reference block along the direction perpendicular to the first boundary satisfies b ≥ 3;
determining the degree of image detail intensity of a reference line in a direction perpendicular to the first boundary according to pixel values of the reference line on two sides of the first boundary, wherein the reference line is a line of pixel values of the reference block which is arranged perpendicular to the first boundary;
determining a filtering mode of the reference line according to the image detail intensity degree of the reference block and the image detail intensity degree of the reference line so as to filter the pixel values in the reference line;
the method further comprises the following steps: counting the sum of all first differences, the sum of all second differences and the sum of all third differences in the current frame image, wherein the first differences are the differences between two continuous pixel values which are respectively positioned at two sides of a first boundary and are adjacent to the first boundary, the second differences are the differences between two continuous pixel values which are positioned at one side of the first boundary and are adjacent to the first boundary, and the third differences are the differences between two continuous pixel values which are positioned at the other side of the first boundary and are adjacent to the first boundary; and determining the blocking effect strength of the current frame image according to the sum of the first difference values, the sum of the second difference values and the sum of the third difference values.
2. The image processing method according to claim 1,
the reference block comprises at least one reference sub-block, each reference sub-block comprises s rows × n columns of pixel values, wherein 3 ≤ s < m, and the edge of the reference sub-block in the column direction is positioned on the boundary between two adjacent columns of the decoding blocks;
the reference block comprises one of the reference sub-blocks if the first boundary is a boundary between two adjacent rows of the decoded blocks; in a case where the first boundary is a boundary between two adjacent columns of the decoded blocks, the reference block includes two consecutive reference sub-blocks arranged along a direction perpendicular to the first boundary.
3. The image processing method according to claim 2,
s is an odd number, and in the case that the first boundary is a boundary between two adjacent columns of the decoding blocks, the reference row is a row of pixel values located in the middle of the reference block.
4. The image processing method according to claim 2,
in the case that the first boundary is a boundary between two adjacent rows of the decoded blocks, the determining, according to pixel values in a reference block crossing the first boundary, the degree of image detail of the reference block in a direction perpendicular to the first boundary comprises:
and determining the degree of the detail of the image of the reference block along the direction perpendicular to the first boundary according to the average difference value dif_blk, in the vertical direction, of two consecutive rows of pixel values of the reference sub-block on the side adjacent to the first boundary, and the average value avg_blk of all pixel values in the reference sub-block.
5. The method according to claim 4, wherein said determining the degree of image detail of the reference block in the direction perpendicular to the first boundary according to the average difference value dif_blk in the vertical direction of two consecutive rows of pixel values on the side adjacent to the first boundary in the reference sub-block and the average value avg_blk of all pixel values in the reference sub-block comprises:
determining an image detail strength level of the reference block corresponding to a first interval in which dif_blk is located, as the image detail strength level of the reference block, wherein the first interval is one of at least two intervals, and the at least two intervals are obtained by division at at least one node determined by avg_blk.
6. The image processing method according to claim 2,
in the case that the first boundary is a boundary between two adjacent columns of the decoded blocks, the determining, according to pixel values in a reference block crossing the first boundary, the degree of image detail of the reference block in a direction perpendicular to the first boundary comprises:
and determining the degree of the detail of the image of the reference block in the direction perpendicular to the first boundary according to the average difference value dif_hor, in the horizontal direction, of the pixel values in each reference sub-block arranged in the direction perpendicular to the first boundary, and the average value avg_blk of all the pixel values in each reference sub-block.
7. The image processing method according to claim 6, wherein two consecutive reference sub-blocks include a first reference sub-block and a second reference sub-block located on both sides of the first boundary;
the determining, according to the average difference value dif_hor of the pixel values in each of the reference sub-blocks arranged perpendicular to the first boundary in the horizontal direction and the average value avg_blk of all the pixel values in each of the reference sub-blocks, the degree of the detail of the image of the reference block in the direction perpendicular to the first boundary comprises:
determining, as the image detail strength level of the reference block, a second interval in which dif_hor of the first reference sub-block is located and a third interval in which dif_hor of the second reference sub-block is located, wherein the second interval is one of at least two intervals, the third interval is one of at least two intervals, the at least two intervals corresponding to the second interval are obtained by division at at least one node determined by avg_blk of the first reference sub-block, and the at least two intervals corresponding to the third interval are obtained by division at at least one node determined by avg_blk of the second reference sub-block.
8. The method according to claim 4 or 5, wherein said determining the degree of detail of the image of the reference line in a direction perpendicular to the first boundary according to the pixel values of the reference line on both sides of the first boundary comprises:
under the condition that the first boundary is a boundary between two adjacent rows of the decoding blocks, determining the magnitude relation between the first inter-block difference and a first threshold and the magnitude relation between the first intra-block difference and a second threshold as the image detail strength of the reference row; the first inter-block difference is a difference between two pixel values respectively located on two sides of the first boundary in the reference line and both adjacent to the first boundary, the first intra-block difference is a difference between two consecutive pixel values located on one side of the first boundary in the reference line and adjacent to the first boundary, and the first threshold and the second threshold are two different values determined by avg_blk and dif_blk.
9. The method according to claim 6 or 7, wherein said determining the degree of detail of the image of the reference line in a direction perpendicular to the first boundary according to the pixel values of the reference line on both sides of the first boundary comprises:
determining the magnitude relation between the second inter-block difference and a third threshold and the magnitude relation between the second intra-block difference and a fourth threshold under the condition that the first boundary is the boundary between two adjacent columns of the decoding blocks, as the image detail intensity degree of the reference row; the second inter-block difference is a difference between two pixel values respectively located on two sides of the first boundary in the reference row and both adjacent to the first boundary, the second intra-block difference is a difference between one pixel value located on one side of the first boundary in the reference row and adjacent to the first boundary and at least one pixel value located on the same side of the first boundary in the reference row and continuously arranged with that pixel value, and the third threshold and the fourth threshold are two different values determined by avg_blk and dif_hor.
10. The image processing method according to claim 1, wherein before said determining the filtering mode of the reference line according to the degree of the image detail intensity of the reference block and the degree of the image detail intensity of the reference line, the image processing method further comprises:
and adjusting the image detail intensity of the reference block of the current frame image according to the blocking effect intensity of the previous frame image.
11. An image processing apparatus characterized by comprising:
the first processing unit is used for determining the image detail intensity degree of a reference block in the direction perpendicular to a first boundary according to pixel values in the reference block crossing the first boundary, wherein a current frame image comprises a plurality of decoding blocks distributed in an array, each decoding block comprises m rows by n columns of pixel values, the first boundary is the boundary between two adjacent rows or two columns of decoding blocks, the reference block is a sub-matrix forming the pixel value matrix of the two adjacent rows or two columns of decoding blocks, the length a of the reference block in the extending direction of the first boundary satisfies 2 ≤ a ≤ n or 2 ≤ a ≤ m, and the length b of the reference block in the direction perpendicular to the first boundary satisfies b ≥ 3;
the second processing unit is used for determining the degree of image detail intensity of a reference line in the direction perpendicular to the first boundary according to pixel values on two sides of the first boundary in the reference line, wherein the reference line is a line of pixel values in the reference block, and the line of pixel values is perpendicular to the first boundary;
a third processing unit, configured to determine a filtering mode of the reference line according to the image detail strength of the reference block and the image detail strength of the reference line, so as to filter pixel values in the reference line;
a fourth processing unit, configured to count a sum of first differences, a sum of second differences, and a sum of third differences in the current frame image, where the first differences are differences between two consecutive pixel values that are located on two sides of a first boundary and adjacent to the first boundary, respectively, the second differences are differences between two consecutive pixel values that are located on one side of the first boundary and adjacent to the first boundary, and the third differences are differences between two consecutive pixel values that are located on the other side of the first boundary and adjacent to the first boundary; and determining the blocking effect strength of the current frame image according to the sum of the first difference values, the sum of the second difference values and the sum of the third difference values.
CN201810351046.6A 2018-04-18 2018-04-18 Image processing method and device Active CN108566551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810351046.6A CN108566551B (en) 2018-04-18 2018-04-18 Image processing method and device

Publications (2)

Publication Number Publication Date
CN108566551A CN108566551A (en) 2018-09-21
CN108566551B true CN108566551B (en) 2020-11-27

Family

ID=63535758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810351046.6A Active CN108566551B (en) 2018-04-18 2018-04-18 Image processing method and device

Country Status (1)

Country Link
CN (1) CN108566551B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798658A (en) * 2019-11-08 2020-10-20 Fang Qin Traffic lane passing efficiency detection platform

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101421935A (en) * 2004-09-20 2009-04-29 DivX, Inc. Video deblocking filter
CN101742292A (en) * 2008-11-14 2010-06-16 Vimicro Corporation (Beijing) Image content information-based loop filtering method and filter
CN102611831A (en) * 2012-01-12 2012-07-25 Lu Xuming Method for reducing compressed image encoding noise

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8452117B2 (en) * 2009-02-10 2013-05-28 Silicon Image, Inc. Block noise detection and filtering
CN103220488B (en) * 2013-04-18 2016-09-07 Peking University Video frame rate conversion apparatus and method

Similar Documents

Publication Publication Date Title
CN102150426B Reducing digital image noise
CN103947208B Method and device for reducing deblocking filtering
US8295367B2 (en) Method and apparatus for video signal processing
US20070280552A1 (en) Method and device for measuring MPEG noise strength of compressed digital image
US20090285308A1 (en) Deblocking algorithm for coded video
KR100754154B1 (en) Method and device for identifying block artifacts in digital video pictures
JP2011097556A (en) Deblocking apparatus and method for video compression
KR100827106B1 (en) Apparatus and method for discriminating filter condition region in deblocking filter
US6999630B1 (en) Method of processing, and corresponding filtering device
CN108566551B (en) Image processing method and device
CN1767656B (en) Coding distortion removal method, dynamic image encoding method, dynamic image decoding method, and apparatus
CN110249630B (en) Deblocking filter apparatus, method and storage medium
US8811766B2 (en) Perceptual block masking estimation system
US20140056363A1 (en) Method and system for deblock filtering coded macroblocks
CN100371954C (en) Video signal post-processing method
JP2006060841A (en) Image data noise elimination method and apparatus thereof
US7983505B2 (en) Performing deblocking on pixel data
US7844124B2 (en) Method of estimating a quantization parameter
CN103634609A Method and system for deblocking filtering of coded macroblocks
US9326007B2 (en) Motion compensated de-blocking
CN118042163A (en) Image processing method and device, electronic equipment and storage medium
JP2022522140A (en) Deblocking using subpel motion vector thresholds
KR20090106669A Method and device for eliminating mosquito noise from decoded compressed image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant