CN113556545B - Image processing method and image processing circuit
- Publication number
- CN113556545B CN113556545B CN202010328270.0A CN202010328270A CN113556545B CN 113556545 B CN113556545 B CN 113556545B CN 202010328270 A CN202010328270 A CN 202010328270A CN 113556545 B CN113556545 B CN 113556545B
- Authority
- CN
- China
- Legal status: Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
Abstract
The present application relates to an image processing method and an image processing circuit. The image processing method comprises the following steps: receiving specific image component data of an input image frame on a specific color coding channel, the specific image component data being composed of a plurality of image component values of the same characteristic; spatially classifying a plurality of pixel units of the input image frame into a plurality of blocks according to a specific block size; obtaining, from the specific image component data, the image component values corresponding to the pixel units in each block and calculating an image component average value for each block; generating a plurality of image component values of an interpolated image frame according to the image component average values of the blocks; and blending the image component values of the interpolated image frame with the original component values of the input image frame to produce an output image frame.
Description
Technical Field
The present application relates to an image processing mechanism, and more particularly, to an image processing method and an image processing circuit.
Background
In general, a false contour arises when an input image is compression-encoded: quantization converts a smooth color gradation of the original image into a stepped change, producing a contour-like artifact perceptible to the human eye.
To remove false contours, the conventional technique detects them in the input image and applies low-pass filtering (smoothing) to the detected false-contour regions. This requires the circuit to hold all pixel data within the filter window at the same time, which consumes a large amount of storage in hardware: many flip-flops, static random access memory (SRAM), and line buffers. Because hardware resources on the circuit are limited, the coverage of the low-pass filter is sacrificed to reduce cost, so the smoothing effect is often insufficient, and a severe false contour cannot be smoothed completely.
Disclosure of Invention
It is therefore an objective of the present application to provide an image processing apparatus, circuit and method for solving the above-mentioned problems of the prior art.
According to an embodiment of the present application, an image processing method is disclosed. The image processing method comprises the following steps: receiving specific image component data of an input image frame on a specific color coding channel, wherein the specific image component data consists of a plurality of image component values with the same characteristics; spatially classifying a plurality of pixel units of the input image frame into a plurality of blocks according to a specific block size; for each block, obtaining from the specific image component data the image component values corresponding to the pixel units in the block, and calculating an image component average value of the block; generating a plurality of image component values of an interpolated image frame according to the image component average values of the blocks; and mixing the image component values of the interpolated image frame with the original component values of the input image frame to generate a plurality of corrected image component values of an output image frame.
According to an embodiment of the present application, an image processing circuit is disclosed. The image processing circuit comprises a receiving circuit and a processing circuit. The receiving circuit is used for receiving specific image component data of an input image picture in a specific color coding channel, wherein the specific image component data consists of a plurality of image component values with the same characteristics. The processing circuit is coupled to the receiving circuit and is configured to: spatially classifying a plurality of pixel units of the input image frame into a plurality of blocks according to a specific block size; for each block, obtaining a plurality of image component values corresponding to a plurality of pixel units in each block from the specific image component data, and calculating an image component average value of each block; generating a plurality of image component values of an interpolated image frame according to the plurality of image component average values of the plurality of blocks; and mixing the plurality of image component values of the interpolated image frame with the plurality of original component values of the input image frame to generate a plurality of corrected image component values of an output image frame.
Drawings
Fig. 1 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
FIG. 2 is a schematic diagram of an embodiment of a mixing weight curve for the operation of the mixing circuit shown in FIG. 1.
Fig. 3 is a schematic diagram of an image processing apparatus according to another embodiment of the present application.
Fig. 4 is a flowchart of an image processing method according to an embodiment of the present application.
Detailed Description
The present application is directed to a block-based large-scale image smoothing (reduction) method for obtaining smoother results. Statistics of an input image frame are gathered in units of fixed blocks, the input image is reduced to an image feature value per block, and at each pixel position the feature values of one or more neighboring blocks are interpolated to form the smoothed result. In hardware, this greatly reduces the amount of data that must be fetched for computation and avoids large-scale filtering, so hardware resources are used effectively. The image feature value of each block may be, for example, an average or weighted average of the image values of the pixel units in the block, and may also include, but is not limited to: the block luminance/chrominance average, the intra-block luminance/chrominance maximum, the intra-block luminance/chrominance minimum, a luminance distribution histogram, and the like. From these statistics, a calculating circuit derives a set of luminance and chrominance feature values as the representative value of each block. Spatial interpolation of the block feature values then yields the smoothed pixel value of each pixel unit, and finally the input image frame is blended with the smoothed image to obtain an output image frame with the false contour removed. The block-based large-scale smoothing method of the present application therefore does not require all pixel data within a filter window to be available simultaneously, and so avoids the use of large numbers of flip-flops, SRAM, and line buffers in hardware.
The following description uses the mean or weighted average to calculate the image feature value of each block; however, other statistical methods may be used as described above, and calculating an average is not a limitation of the present application.
Fig. 1 is a schematic diagram of an image processing apparatus 100 according to an embodiment of the present application. The image processing apparatus 100 may be used in, for example (but not limited to), a television chip to process, for example (but not limited to), compressed image data, smoothing, reducing, or eliminating false contours perceived by the human eye in an image frame and/or reducing sudden flicker between image frames. The image data includes a plurality of image frames, each comprising the image values of a plurality of pixel units. Each image value is represented in a specific color encoding format (also called a color space or color model), for example the YUV format, where Y carries luminance information (i.e., gray-level values), U carries chrominance information, and V carries density information, or the RGB format, where R, G, and B carry red, green, and blue color information respectively; other color encoding formats such as YCbCr and YPbPr may also be adopted in the present application.
In the embodiment of fig. 1, the image processing apparatus 100 processes image frames in the YUV color encoding format, handling the luminance, chrominance, and density information of each image frame separately and independently: the three components are first separated onto a Y coding channel, a U coding channel, and a V coding channel. The image processing apparatus 100 includes a plurality of image processing circuits 105, disposed on the Y, U, and V coding channels respectively, to process the luminance, chrominance, and density information of the image value of each pixel unit in each input image frame. As shown in fig. 1, three image processing circuits 105 are employed so that the three components are processed separately and independently to mitigate, eliminate, or smooth false contours in each input image frame. It should be noted that, in this embodiment, to best alleviate false contours, the image processing apparatus 100 activates all three image processing circuits 105 to smooth false contours in the luminance, chrominance, and density information of each image frame; in other embodiments, the image processing apparatus 100 may activate only one or two image processing circuits 105 to process a subset of the image components (i.e. the luminance, chrominance, and density information), which still partially alleviates the false contour. For example, the image processing apparatus 100 may activate only the image processing circuit 105 on the Y coding channel to smooth false contours in the luminance information while leaving the color information unprocessed, which has the benefit of reducing the computational load; however, this is not a limitation. When an image processing circuit 105 is not activated, it outputs the received image component information directly without processing.
The following description takes the image processing circuit 105 that processes luminance information as an example, but it applies equally to the processing of the other image components (chrominance information, density information). The image processing circuit 105 includes a receiving circuit 110 and a processing circuit 115. For luminance, the receiving circuit 110 receives input image data on a coding channel (e.g., the Y channel) of a specific color coding format (e.g., the YUV format), the input image data including the luminance values of the pixel units in one or more image frames. The processing circuit 115, coupled to the receiving circuit 110, spatially divides or classifies an image frame into a plurality of blocks according to a specific block size, each block having, for example, N×N pixel units. For example, for a 4K image frame and a block size of 40×40 pixel units, the frame can be divided into 96×54 blocks; however, this is not a limitation of the present disclosure.
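As a concrete illustration of the block classification described above, the following Python sketch divides a component plane into fixed-size blocks. The function name, the NumPy representation, and the assumption that the frame dimensions divide evenly by the block size are illustrative choices, not part of the patent.

```python
import numpy as np

def split_into_blocks(component, block_size=40):
    """Spatially classify a component plane (e.g. the Y plane) into
    non-overlapping block_size x block_size blocks of pixel units.
    Returns an array of shape (rows, cols, block_size, block_size).
    Illustrative sketch only: assumes the frame dimensions are exact
    multiples of the block size, as in the 3840x2160 / 40x40 example."""
    h, w = component.shape
    rows, cols = h // block_size, w // block_size
    return (component[:rows * block_size, :cols * block_size]
            .reshape(rows, block_size, cols, block_size)
            .swapaxes(1, 2))
```

For a 3840×2160 luminance plane and 40×40 blocks this yields a 54×96 grid of blocks, matching the 96×54 (width × height) block count in the text.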
In implementation, the processing circuit 115 includes a statistics circuit 1151, an interpolation circuit 1152, and a mixing circuit 1153. For each block, the statistics circuit 1151 sums and averages the N×N luminance values of the N×N pixel units in the block to generate a luminance average value for the block (or, in another embodiment, a weighted luminance average), thereby generating, for example (but not limited to), 96×54 luminance average values. After the luminance averages are generated, the interpolation circuit 1152 generates an interpolated luminance value for each pixel unit of each block according to the luminance averages of the blocks, and the mixing circuit 1153 mixes each interpolated luminance value with the corresponding original luminance value to generate a corrected luminance value, yielding the luminance values of the final corrected frame. Since each interpolated luminance value is determined from block averages, the interpolated image is a blurred result; blending it appropriately with the original luminance values can erase the false contour as much as possible while preserving the sharpness of the rest of the image without excessive distortion.
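The statistics step above can be sketched in plain Python as follows: sum and average each block's component values to obtain one representative value per block. The function name and the list-of-rows representation are illustrative, and the sketch again assumes the frame divides evenly into blocks.

```python
def block_averages(component, block_size=40):
    """Statistics step (statistics circuit 1151): sum and average the
    block_size x block_size component values (e.g. luminance) of each
    block, producing one average per block.  component is a list of
    rows of pixel values; illustrative sketch only."""
    h, w = len(component), len(component[0])
    rows, cols = h // block_size, w // block_size
    n = block_size * block_size
    avgs = [[0.0] * cols for _ in range(rows)]
    for bi in range(rows):
        for bj in range(cols):
            s = 0
            for y in range(bi * block_size, (bi + 1) * block_size):
                for x in range(bj * block_size, (bj + 1) * block_size):
                    s += component[y][x]
            avgs[bi][bj] = s / n     # per-block representative value
    return avgs
```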
In implementation, to generate the interpolated luminance value of a specific pixel unit in a specific block, the interpolation circuit 1152 of the processing circuit 115 refers to a plurality of blocks neighboring the specific block vertically and horizontally, for example M×M blocks, where M is, for example (but not limited to), 5. The interpolation circuit 1152 calculates a plurality of interpolation weight values, e.g., M×M interpolation weight values, according to the distances between the specific pixel unit and the spatial positions of the specific block and its neighboring blocks, where a closer spatial position receives a larger interpolation weight value. The interpolation circuit 1152 then multiplies the M×M interpolation weight values by the M×M luminance average values of the specific block and its neighboring blocks and sums the products to calculate the interpolated luminance value of the specific pixel unit. After performing this operation for every pixel unit of every block in a frame, the processing circuit 115 can generate the luminance values of a corrected frame.
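A minimal sketch of this distance-weighted interpolation is given below. The patent specifies only that nearer block positions receive larger weights; the inverse-distance weight `1/(1+d)`, the clamping of neighbor indices at the frame border, and all names here are illustrative assumptions.

```python
import math

def interpolate_pixel(block_avgs, px, py, block_size=40, m=5):
    """Interpolation step (interpolation circuit 1152): combine the
    averages of the m x m blocks around pixel (px, py), weighting
    each block by inverse distance from the pixel to the block
    center.  block_avgs is a list of rows of per-block averages."""
    rows, cols = len(block_avgs), len(block_avgs[0])
    bi, bj = py // block_size, px // block_size   # block containing pixel
    half = m // 2
    num = den = 0.0
    for i in range(bi - half, bi + half + 1):
        for j in range(bj - half, bj + half + 1):
            ci = min(max(i, 0), rows - 1)         # clamp at frame borders
            cj = min(max(j, 0), cols - 1)
            cy = (i + 0.5) * block_size           # block center position
            cx = (j + 0.5) * block_size
            d = math.hypot(px + 0.5 - cx, py + 0.5 - cy)
            w = 1.0 / (1.0 + d)                   # closer -> larger weight
            num += w * block_avgs[ci][cj]
            den += w
    return num / den
```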
For the mixing operation, please refer to fig. 2, a schematic diagram of an embodiment of the mixing weight curve used by the mixing circuit 1153. For a specific pixel unit of a specific block, the mixing circuit 1153 subtracts the original luminance value of the pixel unit from its interpolated luminance value to generate a luminance difference value, and then determines a blending weight value by referring to the blending weight curve shown in fig. 2, whose horizontal axis is the difference value of an image component (e.g., the luminance difference value) and whose vertical axis is the blending weight value (e.g., ranging from 0 to 1). When the luminance difference value is larger, a smaller blending weight value is determined, so the corrected luminance value is contributed mostly by the original luminance value and only slightly by the interpolated luminance value; conversely, when the luminance difference value is smaller, a larger blending weight value is determined, so the corrected luminance value is contributed mostly by the interpolated luminance value. For example, the corrected luminance value Y_decont can be obtained by the following formula:

Y_decont = α1 × Y_int + (1 − α1) × Y_raw

where α1 is the blending weight value, Y_int is the interpolated luminance value, and Y_raw is the luminance value of the original image. Equivalently, when the luminance difference is smaller, the processing circuit 115 performs a stronger false-contour smoothing operation, and when the luminance difference is larger, it performs a weaker smoothing operation to avoid erasing real image edges in the image frame.
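The blending step can be sketched as below. The exact shape of the weight curve in fig. 2 is not specified numerically in the text, so the piecewise-linear curve and the thresholds `t1`/`t2` here are illustrative assumptions; only the monotonic behaviour (larger difference → smaller weight) comes from the patent.

```python
def blend_component(raw, interp, t1=8.0, t2=32.0):
    """Blending step (mixing circuit 1153) for one pixel of one
    component (Y shown; U and V are handled the same way with their
    own weights alpha2, alpha3).  Weight is 1 for small differences,
    falls linearly to 0 beyond t2 -- an assumed curve shape."""
    diff = abs(interp - raw)
    if diff <= t1:
        alpha = 1.0                       # strong smoothing on flat areas
    elif diff >= t2:
        alpha = 0.0                       # preserve real edges untouched
    else:
        alpha = (t2 - diff) / (t2 - t1)   # linear ramp in between
    return alpha * interp + (1.0 - alpha) * raw
```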
In addition, in one embodiment, when generating the luminance average value of the specific block, the statistics circuit 1151 of the processing circuit 115 may also refer to the luminance average value of the block at the same spatial position in the previous image frame, in order to avoid an excessive luminance difference between two temporally adjacent frames. For example, the statistics circuit 1151 may determine a first threshold TH1 (e.g., an upper limit) by referring to the luminance average value of the co-located block in the previous frame; when the calculated luminance average value of the specific block is greater than TH1, the statistics circuit 1151 directly takes TH1 as the final luminance average value of the specific block. Likewise, it may determine a second threshold TH2 (e.g., a lower limit) from the same co-located average; when the calculated luminance average value is less than TH2, it directly takes TH2 as the final luminance average value. That is, when determining the luminance average value of a specific block of the current frame, the processing circuit can correct it with reference to the luminance of the previous frame, so that the luminance difference between consecutive frames is not too large, avoiding sudden flicker in the images.
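The temporal clamp above can be sketched as follows. Deriving TH1 and TH2 as the previous frame's co-located block average plus or minus a fixed margin is an illustrative assumption; the patent says only that the thresholds are determined with reference to that previous average.

```python
def clamp_block_average(curr_avg, prev_avg, margin=16.0):
    """Temporal stabilisation (statistics circuit 1151): limit the
    current block average to [prev_avg - margin, prev_avg + margin],
    i.e. TH1 = prev_avg + margin (upper limit) and
    TH2 = prev_avg - margin (lower limit).  The fixed margin is an
    assumed way of deriving the thresholds."""
    th1 = prev_avg + margin   # first threshold (upper limit)
    th2 = prev_avg - margin   # second threshold (lower limit)
    return min(max(curr_avg, th2), th1)
```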
In addition, for processing the chrominance information of the U component, the receiving circuit 110 of a second image processing circuit 105 receives input image data of the chrominance information on the U coding channel of the YUV format, the input image data including the chrominance values of the pixel units in a plurality of image frames. The processing circuit 115, coupled to the receiving circuit 110, spatially divides or classifies an image frame into a plurality of blocks according to a specific block size, each block having, for example, N×N pixel units. For example, for a 4K image frame and a block size of 40×40 pixel units, the frame can be divided into 96×54 blocks; however, this is not a limitation of the present disclosure.
In implementation, for each block, the statistics circuit 1151 sums and averages the N×N chrominance values of the N×N pixel units in the block to generate a chrominance average value for the block (or, in another embodiment, a weighted chrominance average), thereby generating, for example (but not limited to), 96×54 chrominance average values. After the chrominance averages are generated, the interpolation circuit 1152 generates an interpolated chrominance value for each pixel unit of each block according to the chrominance averages of the blocks, and the mixing circuit 1153 mixes each interpolated chrominance value with the corresponding original chrominance value to generate a corrected chrominance value, yielding the chrominance values of the final corrected frame. Since each interpolated chrominance value is determined from block averages, the interpolated image is a blurred result; blending it appropriately with the original chrominance values can erase the false contour as much as possible while preserving the sharpness of the rest of the image without excessive distortion.
In implementation, to generate the interpolated chrominance value of a specific pixel unit in a specific block, the interpolation circuit 1152 refers to a plurality of blocks neighboring the specific block vertically and horizontally, for example M×M blocks, where M is, for example (but not limited to), 5. The interpolation circuit 1152 calculates M×M interpolation weight values according to the distances between the specific pixel unit and the spatial positions of the specific block and its neighboring blocks, a closer spatial position receiving a larger weight, then multiplies the M×M interpolation weight values by the M×M chrominance average values and sums the products to calculate the interpolated chrominance value of the specific pixel unit. After performing this operation for every pixel unit of every block in a frame, the processing circuit 115 can generate the chrominance values of a corrected frame.
Similarly, for a specific pixel unit of a specific block, the mixing circuit 1153 subtracts the original chrominance value of the pixel unit from its interpolated chrominance value to generate a chrominance difference value, and then determines a blending weight value by referring to the blending weight curve shown in fig. 2. When the chrominance difference value is larger, a smaller blending weight value is determined, so the final corrected chrominance value is contributed mostly by the original chrominance value and only slightly by the interpolated chrominance value; conversely, when the chrominance difference value is smaller, a larger blending weight value is determined, so the corrected chrominance value is contributed mostly by the interpolated chrominance value. For example, the corrected chrominance value U_decont can be expressed by the following formula:

U_decont = α2 × U_int + (1 − α2) × U_raw

where α2 is the blending weight value, U_int is the interpolated chrominance value, and U_raw is the chrominance value of the original image. Equivalently, when the chrominance difference is small, the processing circuit 115 performs a stronger false-contour smoothing operation, and when the chrominance difference is large, it performs a weaker smoothing operation to avoid erasing real image edges in the image frame.
In addition, in one embodiment, when generating the chrominance average value of the specific block, the statistics circuit 1151 of the processing circuit 115 may also refer to the chrominance average value of the block at the same spatial position in the previous image frame, in order to avoid an excessive chrominance difference between two temporally adjacent frames. For example, the statistics circuit 1151 may determine a third threshold TH3 (e.g., an upper limit) by referring to the chrominance average value of the co-located block in the previous frame; when the calculated chrominance average value of the specific block is greater than TH3, the statistics circuit 1151 directly takes TH3 as the final chrominance average value of the specific block. Likewise, it may determine a fourth threshold TH4 (e.g., a lower limit) from the same co-located average; when the calculated chrominance average value is less than TH4, it directly takes TH4 as the final chrominance average value. That is, when determining the chrominance average value of a specific block of the current frame, the processing circuit can correct it with reference to the chrominance of the previous frame, so that the chrominance difference between consecutive frames is not too large, avoiding sudden flicker in the images.
In addition, for processing the density information of the V component, the receiving circuit 110 of a third image processing circuit 105 receives input image data of the density information on the V coding channel of the YUV format, the input image data including the density values of the pixel units in a plurality of image frames. The processing circuit 115, coupled to the receiving circuit 110, spatially divides or classifies an image frame into a plurality of blocks according to a specific block size, each block having, for example, N×N pixel units. For example, for a 4K image frame and a block size of 40×40 pixel units, the frame can be divided into 96×54 blocks; however, this is not a limitation of the present disclosure.
In implementation, for each block, the statistical circuit 1151 of the processing circuit 115 sums and averages the N×N density values of the N×N pixel units in the block to generate a density average value of the block (or a weighted average of the density values in another embodiment), thereby generating, for example, 96×54 density average values (but not limited thereto). After generating the density average values, the interpolation circuit 1152 of the processing circuit 115 generates an interpolated density value for each pixel unit in each block according to the density average values of the blocks, and the mixing circuit 1153 mixes the interpolated density value with the density value of the original image to generate a corrected density value, thereby obtaining the density values of the final corrected image. Since the interpolated density value is determined from the block average values, it is a blurred version of the image; by properly mixing the interpolated density value with the density value of the original image, the final corrected density value can erase the false contour as much as possible while keeping the other parts of the image sufficiently sharp and without excessive distortion.
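The per-block averaging step can be sketched as follows; it assumes the frame dimensions are multiples of the block size, as in the 3840×2160 example with 40×40 blocks:

```python
import numpy as np

def block_means(channel, block=40):
    """Partition one image component (e.g. the V channel) into
    block x block tiles and return the mean of each tile.
    For a 3840x2160 frame with block=40 this yields a 54x96
    grid of density average values."""
    h, w = channel.shape
    bh, bw = h // block, w // block
    # reshape so each tile occupies axes 1 and 3, then average them
    tiles = channel[:bh * block, :bw * block].reshape(bh, block, bw, block)
    return tiles.mean(axis=(1, 3))
```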
In implementation, to generate an interpolated density value for a specific pixel unit of a specific block, the interpolation circuit 1152 of the processing circuit 115 refers to a plurality of blocks vertically and horizontally adjacent to the specific block, for example M×M blocks, where M is, for example, 5 (but not limited thereto). The interpolation circuit 1152 calculates a plurality of interpolation weight values, for example M×M interpolation weight values, according to the distances between the specific pixel unit and the spatial positions of the neighboring blocks, where a closer spatial position corresponds to a larger interpolation weight value. The interpolation circuit 1152 then multiplies the M×M interpolation weight values by the M×M density average values of the M×M neighboring blocks (including the density average value of the specific block itself) to calculate the interpolated density value of the specific pixel unit. The mixing circuit 1153 subsequently mixes the density value of the original image of the specific pixel unit with the interpolated density value according to a blending weight value to generate the final corrected density value of the specific pixel unit. Therefore, after performing the above processing for each pixel unit of each block in a frame, the processing circuit 115 can generate the corrected density values of the frame.
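The distance-based interpolation can be sketched as below. The inverse-distance weight function is an assumption — the patent specifies only that a closer spatial position yields a larger weight — and the weights are normalized so a constant region interpolates to itself:

```python
import numpy as np

def interpolate_value(means, px, py, block=40, M=5):
    """Interpolated density value for pixel (px, py) from the means
    of the M x M blocks surrounding the pixel's own block.
    `means` is the 2-D grid of per-block density averages."""
    bh, bw = means.shape
    by, bx = py // block, px // block  # block containing the pixel
    r = M // 2
    wsum, vsum = 0.0, 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            ny, nx = by + dy, bx + dx
            if not (0 <= ny < bh and 0 <= nx < bw):
                continue  # skip neighbors outside the frame
            cy = ny * block + block / 2.0  # neighbor block centre
            cx = nx * block + block / 2.0
            d = np.hypot(px - cx, py - cy)
            w = 1.0 / (1.0 + d)  # assumed weight: closer -> larger
            wsum += w
            vsum += w * means[ny, nx]
    return vsum / wsum
```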
Similarly, for a specific pixel unit of a specific block, the mixing circuit 1153 subtracts the density value of the original image of the specific pixel unit from the interpolated density value of the specific pixel unit to generate a density difference value, and then determines a blending weight value with reference to the blending weight curve shown in FIG. 2. The larger the density difference value, the smaller the blending weight value, so the smaller the proportion of the final corrected density value that is determined or contributed by the interpolated density value, i.e., the larger the proportion determined or contributed by the density value of the original image; conversely, the smaller the density difference value, the larger the blending weight value, so the larger the proportion of the final corrected density value contributed by the interpolated density value and the smaller the proportion contributed by the density value of the original image. For example, the corrected density value V_decont can be expressed by the following equation:
V_decont = α3 × V_int + (1 − α3) × V_raw
where α3 is the blending weight value, V_int is the interpolated density value, and V_raw is the density value of the original image. That is, equivalently, when the density difference is small the processing circuit 115 performs a strong false-contour smoothing operation, and when the density difference is large the processing circuit 115 performs a weak false-contour smoothing operation, so as to avoid erasing real image edges in the image frame.
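The blend can be sketched as below. The linear falloff of the blending weight with the density difference is a stand-in for the blending weight curve of FIG. 2, which is not reproduced here; the `slope` parameter is an assumption:

```python
def corrected_value(v_raw, v_int, slope=0.05):
    """V_decont = alpha3 * V_int + (1 - alpha3) * V_raw, where the
    blending weight alpha3 shrinks as |V_int - V_raw| grows, so
    smoothing is strong on flat regions and weak on real edges."""
    diff = abs(v_int - v_raw)
    alpha3 = max(0.0, 1.0 - slope * diff)  # assumed linear weight curve
    return alpha3 * v_int + (1.0 - alpha3) * v_raw
```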
In addition, in order to generate the density average value of the specific block, in one embodiment the statistical circuit 1151 of the processing circuit 115 may also refer to the density average value of the co-located block in the previous image frame. For example, to avoid an excessive difference in density information between two temporally adjacent image frames, the statistical circuit 1151 may determine a fifth threshold TH5 (e.g., an upper limit value) by referring to the density average value of the co-located block in the previous image frame; when the calculated density average value of the specific block is greater than the fifth threshold TH5, the statistical circuit 1151 directly takes the fifth threshold TH5 as the final density average value of the specific block. Likewise, the statistical circuit 1151 may determine a sixth threshold TH6 (e.g., a lower limit value) by referring to the density average value of the co-located block in the previous image frame; when the calculated density average value of the specific block is less than the sixth threshold TH6, the statistical circuit 1151 takes the sixth threshold TH6 as the final density average value of the specific block. That is, when determining the density average value of a specific block of the current image frame, the processing circuit can refer to the density of the previous image frame and make a proper correction, so that the density difference between consecutive image frames is not too large and sudden image flickering is avoided.
It should be noted that the upper limit values may be the same as or different from one another, and likewise the lower limit values may be the same or different; they may be dynamically adjusted depending on the design requirements of the user.
Finally, when the image information of the different image components has been obtained, the image processing apparatus 100 combines the image information of the different image components to generate a corrected image frame, so that the false contours generated by image compression in the original image frame are erased as much as possible while the other parts of the image still retain sufficient sharpness.
In addition, it should be noted that the present disclosure is also applicable to RGB color coding formats. Please refer to FIG. 3, which is a schematic diagram of an image processing apparatus 300 according to another embodiment of the present application. The image processing apparatus 300 is used, for example, as a television chip apparatus for processing compressed image data, where the image data includes a plurality of image frames, each image frame includes a plurality of pixel units, and the image value of each pixel unit is represented in an RGB coding format. The three image processing circuits 105 included in the image processing apparatus 300, respectively disposed on an R coding channel, a G coding channel, and a B coding channel, can independently process the image data of the different components (or color components) of the RGB color coding format, so that the false contours generated by image compression in the original image frame are erased as much as possible while the other parts of the image still maintain sufficient sharpness.
Moreover, to make the technical spirit of the present application easier for the reader to grasp, FIG. 4 is a flowchart of a method for processing an image frame according to an embodiment of the present application. Provided substantially the same result is achieved, the steps in the flowchart shown in FIG. 4 need not be performed in the exact sequence shown and need not be performed contiguously; that is, other steps may be inserted between them. The flow steps are described below:
step 405: starting;
step 410: receiving a specific image component data of an input image picture on a specific color coding channel, the specific image component data comprising a plurality of image component values of the same characteristic in one or more image pictures;
step 415: spatially classifying a plurality of pixel units of the input image frame into a plurality of blocks according to a specific block size;
step 420: for each block, obtaining a plurality of image component values corresponding to a plurality of pixel units in each block from the specific image component data, and calculating an image component average value of each block;
step 425: generating a plurality of image component values of an interpolated image frame according to the plurality of image component average values of the plurality of blocks;
step 430: mixing the plurality of image component values of the interpolated image frame with the plurality of original component values of the input image frame to generate a plurality of corrected image component values of an output image frame; and
step 435: end.
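The flow of steps 410 through 430 can be sketched for one color-coded channel as below. For brevity the interpolated frame is built by replicating each block mean over its block rather than by the distance-weighted interpolation described earlier, and the blending parameters are illustrative assumptions; the frame dimensions are assumed to be multiples of the block size:

```python
import numpy as np

def decontour_channel(channel, block=40):
    """Sketch of steps 410-430 for one image component channel."""
    h, w = channel.shape
    bh, bw = h // block, w // block
    # step 420: image component average value of each block
    means = channel[:bh * block, :bw * block] \
        .reshape(bh, block, bw, block).mean(axis=(1, 3))
    # step 425: interpolated image frame (nearest-block upsampling here)
    interp = np.repeat(np.repeat(means, block, axis=0), block, axis=1)
    # step 430: blend; weight shrinks as the difference grows,
    # preserving real edges (assumed linear weight curve)
    diff = np.abs(interp - channel)
    alpha = np.clip(1.0 - diff / 20.0, 0.0, 1.0)
    return alpha * interp + (1.0 - alpha) * channel
```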
The foregoing description is only of the preferred embodiments of the application, and all changes and modifications that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
[ symbolic description ]
100,300 image processing apparatus
105 image processing circuit
110 receiving circuit
115 processing circuit
1151 statistical circuit
1152 interpolation circuit
1153 hybrid circuit
Claims (10)
1. An image processing method, comprising:
receiving specific image component data of an input image picture on a specific color coding channel, wherein the specific image component data consists of a plurality of image component values with the same characteristics;
spatially classifying a plurality of pixel units of the input image frame into a plurality of blocks according to a specific block size;
for each block, obtaining a plurality of image component values corresponding to a plurality of pixel units in each block from the specific image component data, and calculating an image component average value of each block;
generating a plurality of image component values of an interpolated image frame according to the plurality of image component average values of the plurality of blocks; and
mixing the plurality of image component values of the interpolated image frame with the plurality of original component values of the input image frame to generate a plurality of corrected image component values of an output image frame.
2. The image processing method of claim 1, wherein the step of generating a plurality of image component values for the interpolated image frame comprises:
for a particular pixel unit of a particular block in the input image frame:
determining a plurality of interpolation weight values according to the space position distances between the specific pixel unit of the specific block and a plurality of adjacent blocks of the specific block;
generating an interpolated image component value of the specific pixel unit of the specific block according to an image component average value of the specific block, a plurality of image component average values of the plurality of neighboring blocks, and the plurality of interpolation weight values;
determining a blending weight value according to an original image component value of the specific pixel unit of the specific block and the interpolated image component value of the specific pixel unit of the specific block; and
and mixing the original image component value and the interpolated image component value according to the mixing weight value to generate a corrected image component value of the specific pixel unit.
3. The image processing method of claim 2, wherein determining the blending weight value comprises:
calculating an image component difference value between the original image component value and the interpolated image component value; and
and determining the mixing weight value according to the calculated image component difference value.
4. An image processing method according to claim 3, wherein the blending weight value is smaller as the image component difference value is larger, so that a larger proportion of the corrected image component value is contributed by the original image component value; and when the image component difference value is smaller, the blending weight value is larger, so that the larger proportion of the corrected image component value is contributed by the interpolated image component value.
5. The method of image processing according to claim 1, wherein the specific color-coded channel is one of a luminance, a chrominance, and a density corresponding to a YUV color-coding system.
6. The image processing method of claim 1, wherein the particular color-coded channel is one of the particular colors corresponding to an RGB color-coded space.
7. The method of claim 1, wherein calculating the image component average value of each block of the input image frame is further performed by referring to an image component average value of a corresponding block in the same spatial position within a previous input image frame and a threshold value.
8. The image processing method according to claim 7, further comprising:
for each block, when the threshold value lies between the image component average value of the corresponding block in the same spatial position within the previous input image frame and the image component average value of the block in the input image frame, correcting the image component average value of the block in the input image frame to the threshold value.
9. The method of claim 8, wherein the threshold is determined by an average of the image components corresponding to the corresponding block at the same spatial location within the previous input image frame.
10. An image processing circuit comprising:
a receiving circuit for receiving a specific image component data of an input image frame in a specific color coding channel, the specific image component data being composed of a plurality of image component values of the same characteristics; and
a processing circuit, coupled to the receiving circuit, for:
spatially classifying a plurality of pixel units of the input image frame into a plurality of blocks according to a specific block size;
for each block, obtaining a plurality of image component values corresponding to a plurality of pixel units in each block from the specific image component data, and calculating an image component average value of each block;
generating a plurality of image component values of an interpolated image frame according to a plurality of image component average values of the plurality of blocks; and
mixing the plurality of image component values of the interpolated image frame with the plurality of original component values of the input image frame to generate a plurality of corrected image component values of an output image frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010328270.0A CN113556545B (en) | 2020-04-23 | 2020-04-23 | Image processing method and image processing circuit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113556545A CN113556545A (en) | 2021-10-26 |
CN113556545B true CN113556545B (en) | 2023-12-08 |
Family
ID=78101102
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010328270.0A Active CN113556545B (en) | 2020-04-23 | 2020-04-23 | Image processing method and image processing circuit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113556545B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113706665B (en) * | 2021-10-28 | 2023-01-31 | 北京美摄网络科技有限公司 | Image processing method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004023316A (en) * | 2002-06-14 | 2004-01-22 | Ricoh Co Ltd | Image processor |
JP2005130244A (en) * | 2003-10-24 | 2005-05-19 | Ricoh Co Ltd | Image processing apparatus and image processing method |
CN101778297A (en) * | 2009-01-08 | 2010-07-14 | 华晶科技股份有限公司 | Interference elimination method of image sequence |
CN102117610A (en) * | 2009-12-31 | 2011-07-06 | 瑞轩科技股份有限公司 | Method for adjusting brightness of picture |
CN102147916A (en) * | 2010-02-10 | 2011-08-10 | 索尼公司 | Image processing device, image processing method and program |
CN104104968A (en) * | 2013-04-02 | 2014-10-15 | 联咏科技股份有限公司 | Image processing circuit and annular fake image elimination method |
TW201519160A (en) * | 2013-09-10 | 2015-05-16 | Apple Inc | Image tone adjustment using local tone curve computation |
CN107211141A (en) * | 2015-01-30 | 2017-09-26 | 汤姆逊许可公司 | The method and apparatus that inverse tone mapping (ITM) is carried out to image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9911179B2 (en) * | 2014-07-18 | 2018-03-06 | Dolby Laboratories Licensing Corporation | Image decontouring in high dynamic range video processing |
KR20170087278A (en) * | 2016-01-20 | 2017-07-28 | 한국전자통신연구원 | Method and Apparatus for False Contour Detection and Removal for Video Coding |
JP6912869B2 (en) * | 2016-06-15 | 2021-08-04 | オリンパス株式会社 | Image processing device, image processing program, image processing method |
Non-Patent Citations (1)
Title |
---|
Geometrical transformation-based ghost artifacts removing for high dynamic range image; Jaehyun Im, Sangsik Jang, Seungwon Lee, and Joonki Paik; 2011 18th IEEE International Conference on Image Processing; pp. 357-360 *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||