CN109257605B - Image processing method, device and system - Google Patents


Info

Publication number
CN109257605B
Authority
CN
China
Prior art keywords
image block
reconstructed image
current
filter
adjacent
Prior art date
Legal status
Active
Application number
CN201710571640.1A
Other languages
Chinese (zh)
Other versions
CN109257605A (en)
Inventor
高山
张红
杨海涛
刘杉
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710571640.1A
Priority to PCT/CN2018/085537 (WO2019011046A1)
Priority to TW107116752A (TWI681672B)
Publication of CN109257605A
Application granted
Publication of CN109257605B

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards

Abstract

The application provides an image processing method, device, and system, including the following steps: generating a reconstruction signal of a current image block to be encoded, and reconstructing the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block; and if the coding mode of the current reconstructed image block is a downsampling coding mode, selecting, from at least two candidate filters, a first filter for upsampling the current reconstructed image block, and upsampling the current reconstructed image block through the first filter. Compared with the prior art, in which the same filter is used for all reconstructed image blocks in an entire image, a corresponding filter is selected for each reconstructed image block, that is, the filter is selected in a targeted manner, and the selected filter is used to upsample the reconstructed image block, so that a reconstructed image block with a better display effect can be obtained.

Description

Image processing method, device and system
Technical Field
The present application relates to image processing technologies, and in particular, to an image processing method, device and system.
Background
Digital video is video recorded in digital form. Fig. 1 is a schematic diagram of a digital video provided by the present application; as shown in fig. 1, the digital video is composed of multiple frames of digital images. Fig. 2 is a schematic diagram of a digital image provided in the present application; as shown in fig. 2, the image is composed of 12 × 16 picture elements, each of which is referred to as a pixel, and 12 × 16 represents the image resolution. For example, the image resolution of 2K video is 1920 × 1080 and the image resolution of 4K video is 3840 × 2160. The original video usually contains a large amount of data that is unsuitable for storage and transmission, so the original data needs to be compressed by an efficient video compression coding technique.
Specifically, fig. 3 is a schematic encoding diagram of the encoding end provided in the present application. As shown in fig. 3, the encoding process of the encoding end is as follows. After receiving the video, the encoding end divides each frame of image constituting the video into a plurality of image blocks to be encoded. For a current image block to be encoded, the current image block to be encoded is first predicted through a reference reconstructed image block (the reference reconstructed image block provides the reference pixels required by the current image block to be encoded, and the reference pixels are used to predict the current image block to be encoded) to obtain a prediction signal of the current image block to be encoded; the prediction signal is subtracted from the original signal of the current image block to be encoded to obtain a residual signal. After prediction, the amplitude of the residual signal is much smaller than that of the original signal. The residual signal is then transformed and quantized. After transformation and quantization, transform quantization coefficients are obtained, and the quantized coefficients and other indication information used in encoding are encoded by an entropy coding technique to obtain a code stream.
Furthermore, the encoding end needs to reconstruct the current image block to be encoded, so as to provide reference pixels for encoding subsequent image blocks to be encoded. Specifically, after obtaining the transform quantization coefficients of the current image block to be encoded, the encoding end performs inverse quantization and inverse transformation on the transform quantization coefficients to obtain a reconstructed residual signal, adds the reconstructed residual signal to the prediction signal corresponding to the current image block to be encoded to obtain a reconstructed signal of the current image block to be encoded, and obtains a reconstructed image block according to the reconstructed signal. The reconstructed image block may be used to predict subsequent image blocks to be encoded. Optionally, the residual signal is transformed to obtain transform coefficients, and quantizing the transform coefficients causes information loss that is irreversible. That is, the inverse-quantized transform coefficients are distorted, so the reconstructed signal is inconsistent with the original signal, and the compression method is lossy compression. Therefore, for lossy compression, after a reconstructed image block is obtained, it needs to be filtered to remove some of the distortions introduced by the lossy compression, such as blocking effects and ringing effects. To remove blocking artifacts, the deblocking (DBK) filters in the H.264 and H.265 standards may be used. To remove ringing effects, the SAO filter in H.265, the ALF filter in the next-generation standard, and the like may be used. There are also lossless compression methods, in which the residual signal is transformed into transform coefficients by a lossless transform operation and entropy coding is performed on the transform coefficients without quantization. For lossless compression, the filtering operation is generally not performed. Further, after every image block of the current image has been reconstructed, a reconstructed image is obtained, and the reconstructed image may be used to predict subsequent frames.
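To make the reconstruction loop described above concrete, the following is a minimal sketch, under assumed simplifications (the transform is replaced by an identity operation and the quantizer by uniform rounding), of how an encoder reconstructs a block so that it can serve as a prediction reference; it is not the implementation of this application.

```python
import numpy as np

def encode_and_reconstruct_block(original, prediction, qstep=8.0):
    """Sketch of the encoder-side reconstruction loop (illustrative only).

    `original` and `prediction` are 2-D arrays for the current block; a real
    codec would apply a DCT-like transform where the identity is used here.
    """
    residual = original.astype(np.float64) - prediction           # prediction residual
    coeffs = residual                                              # stand-in for the transform
    quantized = np.round(coeffs / qstep)                           # quantization (the lossy step)
    # The encoder now mirrors what the decoder will do:
    dequantized = quantized * qstep                                # inverse quantization
    recon_residual = dequantized                                   # stand-in for the inverse transform
    reconstructed = np.clip(prediction + recon_residual, 0, 255)   # reconstructed image block
    return quantized, reconstructed   # quantized coefficients go on to entropy coding
```

Later blocks are predicted from the reconstructed block, not from the original, which is exactly why the encoder repeats the decoder's inverse steps.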
Fig. 4 is a schematic decoding diagram of the decoding end provided by the present application. As shown in fig. 4, after the decoding end acquires the code stream, it first performs entropy decoding on the code stream to obtain the transform quantization coefficients of the current image block to be reconstructed, and then performs inverse quantization and inverse transformation on the transform quantization coefficients to obtain a reconstructed residual signal of the current image block to be reconstructed. The current image block to be reconstructed is predicted through its reference reconstructed image block to obtain a prediction signal of the current image block to be reconstructed; the prediction signal and the reconstructed residual signal are added to obtain a reconstructed signal of the current image block to be reconstructed, and the current reconstructed image block corresponding to the current image block to be reconstructed is obtained according to the reconstructed signal. The current reconstructed image block may be used to predict subsequent image blocks to be reconstructed. Similar to the encoding side described above, the current reconstructed image block optionally needs to be filtered at the decoding end. Further, after every image block of the current image has been reconstructed, a reconstructed image is obtained, and the reconstructed image may be used to predict subsequent frames.
To reduce the complexity of encoding and decoding, the encoding end downsamples each frame of image. Fig. 5 is a schematic encoding diagram of the encoding end provided by the present application. As shown in fig. 5, the encoding end downsamples the whole image and then encodes each image block to be encoded in the downsampled image to obtain a code stream. The resolution of the reconstructed image block corresponding to each image block to be encoded is the downsampling resolution. Correspondingly, when the decoding end parses the code stream, the resolution of each image block to be reconstructed is the downsampling resolution, and the resolution of the corresponding reconstructed image block is also the downsampling resolution; the decoding end therefore needs to upsample the reconstructed image block to obtain a reconstructed image block with the original resolution.
In the prior art, the encoding end or the decoding end upsamples every reconstructed image block in the whole image with the same filter. However, the characteristics of the reconstructed image blocks may differ: some reconstructed image blocks may be relatively flat, while others may contain more detail. As a result, in the prior art some reconstructed image blocks are blurred after upsampling, which leads to a poor display effect.
Disclosure of Invention
The present application provides an image processing method, device, and system, to solve the problem that some reconstructed image blocks have a poor display effect after upsampling.
In a first aspect, the present application provides an image processing method, including: generating a reconstruction signal of a current image block to be encoded, and reconstructing the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block; and if the coding mode of the current reconstructed image block is a downsampling coding mode, selecting, from at least two candidate filters, a first filter for upsampling the current reconstructed image block, and upsampling the current reconstructed image block through the first filter.
The beneficial effect of this application is as follows: compared with the prior art, in which the same filter is used for the reconstructed image blocks in the whole image, a corresponding filter is selected for each reconstructed image block, that is, the filter is selected in a targeted manner, and the selected filter is used to upsample the reconstructed image block, so that a reconstructed image block with a better display effect can be obtained.
Optionally, selecting a first filter for upsampling the current reconstructed image block from the at least two candidate filters specifically includes: selecting the first filter from the at least two candidate filters according to a texture feature of the current reconstructed image block.
Optionally, selecting a first filter from the at least two candidate filters according to the texture features of the current reconstructed image block includes: and selecting a first filter according to a preset mapping relation and the texture features of the current reconstructed image block, wherein the preset mapping relation is the mapping relation between the preset texture features comprising the texture features of the current reconstructed image block and at least two candidate filters comprising the first filter.
The first filter is selected for the current reconstructed image block according to the texture features of the current reconstructed image block, and the selected filter is used for performing up-sampling processing on the reconstructed image block, so that the reconstructed image block with better display effect can be obtained.
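As a rough illustration of the texture-based selection described above, the sketch below classifies a reconstructed block with a simple gradient-based texture measure and looks the result up in a preset mapping. The feature, the threshold, and the filter names and kernels are illustrative assumptions, not values taken from this application.

```python
import numpy as np

# Hypothetical candidate filters: name -> separable upsampling kernel (assumed).
CANDIDATE_FILTERS = {
    "smooth": np.array([1.0, 3.0, 3.0, 1.0]) / 8.0,
    "sharp": np.array([-1.0, 5.0, 5.0, -1.0]) / 8.0,
}

# Preset mapping between texture classes and candidate filters (illustrative).
TEXTURE_TO_FILTER = {"flat": "smooth", "detailed": "sharp"}

def texture_class(block, threshold=10.0):
    """Classify the block by mean absolute gradient (an assumed texture feature)."""
    gx = np.abs(np.diff(block.astype(np.float64), axis=1)).mean()
    gy = np.abs(np.diff(block.astype(np.float64), axis=0)).mean()
    return "detailed" if (gx + gy) > threshold else "flat"

def select_first_filter(recon_block):
    """Select the first filter through the preset texture-to-filter mapping."""
    return CANDIDATE_FILTERS[TEXTURE_TO_FILTER[texture_class(recon_block)]]
```

In practice the preset mapping only has to be known to both ends, or the chosen filter can be signalled explicitly in the code stream, as described later.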
Optionally, selecting a first filter for upsampling the current reconstructed image block from the at least two candidate filters specifically includes: determining the similarity between the current reconstructed image block and each of at least two adjacent reconstructed image blocks of the current reconstructed image block, where the at least two adjacent reconstructed image blocks correspond to at least two second filters used when they were upsampled, and the at least two candidate filters include the at least two second filters; and selecting, from the at least two second filters, the second filter corresponding to the adjacent reconstructed image block with the highest similarity to the current reconstructed image block as the first filter.
The first filter is selected according to the adjacent reconstructed image block of the current reconstructed image block, and the selected filter is used for performing up-sampling processing on the reconstructed image block, so that the reconstructed image block with better display effect can be obtained.
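The neighbour-based alternative can be sketched as follows. The similarity measure used here (a comparison of coarse block statistics) is only an assumption; the application does not prescribe a particular measure.

```python
import numpy as np

def select_filter_from_neighbors(current_block, neighbors):
    """Pick the filter used by the most similar adjacent reconstructed block.

    `neighbors` is a list of (adjacent_recon_block, filter_used_for_it) pairs;
    the filter of the most similar neighbour becomes the first filter.
    """
    def similarity(a, b):
        a = a.astype(np.float64)
        b = b.astype(np.float64)
        # Coarse statistics, so blocks of different sizes can still be compared.
        return -abs(a.mean() - b.mean()) - abs(a.std() - b.std())

    _, best_filter = max(neighbors, key=lambda nb: similarity(current_block, nb[0]))
    return best_filter
```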
Optionally, selecting a first filter for upsampling the current reconstructed image block from the at least two candidate filters includes: upsampling the current reconstructed image block separately through each of the at least two candidate filters to obtain the upsampled image blocks respectively corresponding to the at least two candidate filters; calculating, for each candidate filter, the error between its upsampled image block and the original image block corresponding to the current reconstructed image block; and using the candidate filter corresponding to the minimum error as the first filter. This method allows the first filter to be selected more accurately.
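At the encoding end, the minimum-error variant amounts to trying every candidate and keeping the one whose upsampled output is closest to the original block. The sketch below uses a sum-of-squared-error criterion and a simple 2x upsampling helper; both are assumptions made only for illustration.

```python
import numpy as np

def upsample_2x(block, filt):
    """Assumed helper: pixel repetition to 2x size, then a separable smoothing
    pass with the 1-D kernel `filt` along rows and columns."""
    up = np.repeat(np.repeat(block.astype(np.float64), 2, axis=0), 2, axis=1)
    up = np.apply_along_axis(lambda r: np.convolve(r, filt, mode="same"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, filt, mode="same"), 0, up)

def select_filter_by_min_error(recon_block, original_block, candidate_filters):
    """Return the candidate whose upsampled block best matches the original."""
    def sse(filt):
        diff = upsample_2x(recon_block, filt) - original_block.astype(np.float64)
        return float((diff ** 2).sum())
    return min(candidate_filters, key=sse)
```

Because the decoder has no access to the original block, identification information of a filter chosen this way has to be written into the code stream, as described below.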
Optionally, the upsampling the current reconstructed image block by the first filter includes: performing primary up-sampling processing on a current reconstructed image block through a first filter according to pixels of a part of adjacent reconstructed image blocks which are required by the current reconstructed image block and are currently reconstructed when the up-sampling processing is performed on the current reconstructed image block; correspondingly, the method further comprises the following steps: and if the reconstruction of the other part of the adjacent reconstructed image blocks which are not reconstructed currently in the required adjacent reconstructed image blocks is completed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through a third filter, wherein the part of the boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, the upsampling processing on the current reconstructed image block by the first filter includes: performing primary up-sampling processing on the current reconstructed image block through a first filter according to pixels of a part of adjacent reconstructed image blocks which are required by the current reconstructed image block and are currently reconstructed when the up-sampling processing is performed on the current reconstructed image block; correspondingly, the method further comprises the following steps: if all image blocks of a current image where the current reconstructed image block is located are completely reconstructed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of adjacent reconstructed image blocks in the needed adjacent reconstructed image blocks through a third filter, wherein the other part of adjacent reconstructed image blocks are image blocks which are not completely reconstructed when the current reconstructed image block is subjected to first up-sampling processing; part of the boundary of the current reconstructed image block is contiguous with another part of the neighboring reconstructed image block.
The two alternative methods can avoid the problem that the boundary of the current reconstructed image block is discontinuous.
Optionally, the third filter is the first filter.
Optionally, before performing, through a third filter, secondary upsampling on a partial boundary of the current reconstructed image block according to the other part of the required adjacent reconstructed image blocks, the method further includes: judging, according to the other part of the adjacent reconstructed image blocks and the partial boundary of the current reconstructed image block, whether to perform secondary upsampling on the partial boundary; and if it is determined that secondary upsampling is to be performed on the partial boundary of the current reconstructed image block, performing, through the third filter, secondary upsampling on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks.
When the secondary up-sampling processing is not performed on the current reconstructed image block, the overhead of a decoding end can be reduced, and when the secondary up-sampling processing is performed on the current reconstructed image block, the problem of discontinuous boundary of the current reconstructed image block can be solved.
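The two-pass boundary handling can be pictured as follows: the block is first upsampled with whatever neighbouring blocks are already available, and once the remaining neighbours have been reconstructed, only the strip of pixels that borders them is filtered again. The 2x ratio, the strip width, and the cross-boundary average standing in for the third filter are all illustrative assumptions.

```python
import numpy as np

def refilter_right_boundary(upsampled_block, right_neighbor_upsampled, strip=2):
    """Secondary pass over the right-boundary strip of an upsampled block, run
    only after its right neighbour has also been reconstructed and upsampled.

    A cross-boundary average is used here as a stand-in for the third filter."""
    out = upsampled_block.astype(np.float64).copy()
    own_cols = out[:, -strip:]                                   # strip next to the neighbour
    neighbor_cols = right_neighbor_upsampled[:, :strip].astype(np.float64)
    out[:, -strip:] = 0.5 * (own_cols + neighbor_cols)           # smooth across the seam
    return out
```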
Optionally, the method further comprises: generating a code stream, wherein the code stream comprises: identification information of the first filter.
Optionally, the code stream further includes first indication information, where the first indication information is used to indicate how to select, from the at least two candidate filters, a filter used when performing upsampling processing on the current reconstructed image block.
Optionally, a code stream is generated, where the code stream includes: and second indication information, wherein the second indication information is used for indicating whether a decoding end needs to perform secondary up-sampling processing on the current reconstructed image block.
The image processing method at the decoding end will be described below, and the effect thereof is similar to the corresponding effect at the encoding end, and will not be described further below.
In a second aspect, the present application provides an image processing method, including: analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed; generating a reconstruction signal of the current image block to be reconstructed according to the coding information of the current image block to be reconstructed, and reconstructing the current image block to be reconstructed according to the reconstruction signal to obtain the current reconstruction image block; if the encoding mode of the current image block to be reconstructed is a down-sampling encoding mode, selecting a first filter for performing up-sampling processing on the current reconstructed image block according to first indication information acquired from a code stream, wherein the first indication information is used for indicating how to select a filter used for performing up-sampling processing on the current reconstructed image block from at least two candidate filters; and performing upsampling processing on the current reconstructed image block through a first filter.
Optionally, the first indication information is used to indicate that the filter used when upsampling the current reconstructed image block is selected from the at least two candidate filters according to a texture feature of the current reconstructed image block; and selecting a first filter for upsampling the current reconstructed image block according to the first indication information acquired from the code stream includes: determining the texture feature of the current reconstructed image block according to the first indication information; and selecting the first filter according to a preset mapping relation and the texture feature of the current reconstructed image block, where the preset mapping relation is a mapping relation between preset texture features including the texture feature of the current reconstructed image block and the at least two candidate filters including the first filter.
Optionally, the first indication information is used to indicate that a filter used when the up-sampling processing is performed on the current reconstructed image block is selected from at least two candidate filters according to an adjacent reconstructed image block of the current reconstructed image block; the method for selecting a first filter for performing upsampling processing on a current reconstructed image block according to first indication information acquired from a code stream comprises the following steps: determining the similarity between each adjacent reconstructed image block and the current reconstructed image block in the at least two adjacent reconstructed image blocks, wherein the at least two adjacent reconstructed image blocks correspond to at least two second filters when being subjected to upsampling processing, and the at least two candidate filters comprise at least two second filters; and selecting a second filter corresponding to the adjacent reconstructed image block with the highest similarity from at least two second filters as the first filter.
Optionally, the upsampling processing on the current reconstructed image block by the first filter includes: performing primary up-sampling processing on the current reconstructed image block through a first filter according to pixels of a part of adjacent reconstructed image blocks which are required by the current reconstructed image block and are currently reconstructed when the up-sampling processing is performed on the current reconstructed image block; correspondingly, the method further comprises the following steps: if the reconstruction of the other part of the adjacent reconstructed image blocks which are not reconstructed currently in the required adjacent reconstructed image blocks is completed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through a third filter, wherein the part of the boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, the upsampling processing on the current reconstructed image block by the first filter includes: performing primary up-sampling processing on the current reconstructed image block through a first filter according to pixels of a part of adjacent reconstructed image blocks which are required by the current reconstructed image block and are currently reconstructed when the up-sampling processing is performed on the current reconstructed image block; correspondingly, the method further comprises the following steps: if all image blocks of a current image where the current reconstructed image block is located are completely reconstructed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of adjacent reconstructed image blocks in the needed adjacent reconstructed image blocks through a third filter, wherein the other part of adjacent reconstructed image blocks are image blocks which are not completely reconstructed when the current reconstructed image block is subjected to first up-sampling processing; part of the boundary of the current reconstructed image block is contiguous with another part of the neighboring reconstructed image block.
Optionally, the third filter is the first filter.
Optionally, before performing, through a third filter, secondary upsampling on a partial boundary of the current reconstructed image block according to the other part of the required adjacent reconstructed image blocks, the method further includes: judging, according to the other part of the adjacent reconstructed image blocks and the partial boundary of the current reconstructed image block, whether to perform secondary upsampling on the partial boundary; and if it is determined that secondary upsampling is to be performed on the partial boundary of the current reconstructed image block, performing, through the third filter, secondary upsampling on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks.
Optionally, the code stream further includes: second indication information; correspondingly, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of the required adjacent reconstructed image blocks through a third filter, including: and if the second indication information indicates that the secondary up-sampling processing needs to be performed on the current reconstructed image block, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of the adjacent reconstructed image block in the required adjacent reconstructed image block through a third filter.
In a third aspect, the present application provides an image processing method, including: analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed; generating a reconstruction signal of the current image block to be reconstructed according to the coding information, and reconstructing the current image block to be reconstructed according to the reconstruction signal to obtain the current reconstructed image block; and if the coding mode of the current image block to be reconstructed is a down-sampling coding mode, acquiring identification information of the first filter from the code stream, and performing up-sampling processing on the current reconstructed image block through the first filter identified by the identification information.
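In the third-aspect decoder flow, the code stream directly carries identification information of the first filter. A minimal sketch of that parse-and-upsample path is shown below; the syntax field names, the filter table, and the helper callables are all assumptions made for illustration.

```python
import numpy as np

# Assumed table of candidate filters shared by encoder and decoder.
FILTER_TABLE = {
    0: np.array([1.0, 2.0, 1.0]) / 4.0,
    1: np.array([1.0, 6.0, 1.0]) / 8.0,
}

def decode_block(block_syntax, reconstruct, upsample):
    """`block_syntax` is a dict produced by entropy decoding; `reconstruct` and
    `upsample` are codec-provided callables with assumed signatures."""
    recon = reconstruct(block_syntax["coding_info"])             # current reconstructed block
    if block_syntax["coding_mode"] == "downsampling":
        first_filter = FILTER_TABLE[block_syntax["filter_id"]]   # identification information
        recon = upsample(recon, first_filter)                    # upsample through the first filter
    return recon
```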
The following describes an image processing apparatus and system, which implement principles and technical effects similar to those described above, and are not described herein again.
In a fourth aspect, the present application provides an image processing apparatus, including: a generating module, configured to generate a reconstruction signal of a current image block to be encoded and reconstruct the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block; a selection module, configured to select, from at least two candidate filters, a first filter for upsampling the current reconstructed image block if the coding mode of the current reconstructed image block is a downsampling coding mode; and a processing module, configured to upsample the current reconstructed image block through the first filter.
In a fifth aspect, the present application provides an image processing apparatus, including: a parsing module, configured to parse a code stream to acquire the coding information of a current image block to be reconstructed and the coding mode of the current image block to be reconstructed; a generating module, configured to generate a reconstruction signal of the current image block to be reconstructed according to the coding information and reconstruct the current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block; a selection module, configured to select, if the coding mode of the current image block to be reconstructed is a downsampling coding mode, a first filter for upsampling the current reconstructed image block according to first indication information acquired from the code stream, where the first indication information is used to indicate how to select, from at least two candidate filters, the filter used when upsampling the current reconstructed image block; and a processing module, configured to upsample the current reconstructed image block through the first filter.
In a sixth aspect, the present application provides an image processing apparatus comprising: the analysis module is used for analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed; the generating module is used for generating a reconstruction signal of the current image block to be reconstructed according to the coding information and reconstructing the current image block to be reconstructed according to the reconstruction signal to obtain the current reconstructed image block; the analysis module is also used for acquiring the identification information of the first filter from the code stream if the coding mode of the current image block to be reconstructed is a down-sampling coding mode; and the processing module is used for performing upsampling processing on the current reconstructed image block through the first filter identified by the identification information.
In a seventh aspect, the present application provides an image processing system, comprising: an image processing apparatus as described in the fourth aspect and alternatives thereof, and an image processing apparatus as described in the fifth aspect and alternatives thereof.
In an eighth aspect, the present application provides an image processing system comprising: an image processing apparatus as described in the fourth aspect and alternatives thereof, and an image processing apparatus as described in the sixth aspect and alternatives thereof.
In a ninth aspect, the present application provides an image processing apparatus comprising an encoder configured to:
generating a reconstruction signal of a current image block to be encoded, and reconstructing the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block; and if the coding mode of the current reconstructed image block is a downsampling coding mode, selecting a first filter for performing upsampling processing on the current reconstructed image block from the at least two candidate filters, and performing upsampling processing on the current reconstructed image block through the first filter.
In a tenth aspect, the present application provides an image processing apparatus comprising a decoder configured to:
analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed; generating a reconstruction signal of the current image block to be reconstructed according to the coding information of the current image block to be reconstructed; reconstructing a current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block; if the encoding mode of the current image block to be reconstructed is a down-sampling encoding mode, selecting a first filter for performing up-sampling processing on the current reconstructed image block according to first indication information acquired from a code stream, wherein the first indication information is used for indicating how to select a filter used for performing up-sampling processing on the current reconstructed image block from at least two candidate filters; and performing upsampling processing on the current reconstructed image block through a first filter.
In an eleventh aspect, the present application provides an image processing apparatus comprising a decoder configured to:
analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed; generating a reconstruction signal of the current image block to be reconstructed according to the coding information, and reconstructing the current image block to be reconstructed according to the reconstruction signal to obtain the current reconstructed image block; and if the coding mode of the current image block to be reconstructed is a down-sampling coding mode, acquiring identification information of the first filter from the code stream, and performing up-sampling processing on the current reconstructed image block through the first filter identified by the identification information.
In a twelfth aspect, the present application provides a computer storage medium for storing computer software instructions for an image processing apparatus according to the fourth or ninth aspect, which contains a program designed to execute the fourth or ninth aspect.
In a thirteenth aspect, the present application provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the functions performed by the image processing apparatus of the fourth or ninth aspect.
In a fourteenth aspect, the present application provides a computer storage medium for storing computer software instructions for an image processing apparatus according to the fifth or tenth aspect, which contains a program designed to execute the fifth or tenth aspect.
In a fifteenth aspect, the present application provides a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the functions performed by the image processing apparatus of the fifth or tenth aspect.
In a sixteenth aspect, the present application provides a computer storage medium for storing computer software instructions for the image processing apparatus according to the sixth or eleventh aspect, which contains a program designed to execute the sixth or eleventh aspect.
In a seventeenth aspect, the present application provides a computer program product comprising instructions that, when executed by a computer, cause the computer to perform the functions performed by the image processing apparatus of the sixth or eleventh aspect.
Compared with the prior art, in which the same filter is used for the reconstructed image blocks in the whole image, a corresponding filter is selected for each reconstructed image block, that is, the filter is selected in a targeted manner, and the reconstructed image block is upsampled through the selected filter, so that a reconstructed image block with a better display effect can be obtained.
Drawings
FIG. 1 is a schematic diagram of a digital video provided herein;
FIG. 2 is a schematic representation of a digital image provided herein;
fig. 3 is a schematic encoding diagram of an encoding end provided in the present application;
fig. 4 is a decoding diagram of a decoding end provided in the present application;
fig. 5 is a schematic encoding diagram of an encoding end provided in the present application;
FIG. 6 is a schematic diagram of an image being encoded according to an embodiment of the present application;
FIG. 7 is a diagram illustrating a reference pixel template according to an embodiment of the present application;
FIGS. 8A and 8B are schematic diagrams of the Planar mode according to an embodiment of the present application;
FIG. 9 is a diagram illustrating specific directions of 33 angle prediction modes according to an embodiment of the present application;
FIG. 10 is a schematic diagram of down-sampling an image according to an embodiment of the present application;
FIG. 11 is a schematic diagram of image upsampling provided in an embodiment of the present application;
FIG. 12 is a schematic diagram of image upsampling provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of an upsampled image provided in an embodiment of the present application;
fig. 14 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 15 is a diagram illustrating a 4-neighborhood pixel according to an embodiment of the present application;
FIG. 16 is a diagram illustrating an 8-neighborhood pixel according to an embodiment of the present application;
FIG. 17 is a schematic diagram of image upsampling provided in an embodiment of the present application;
FIG. 18 is a schematic diagram of image upsampling provided in another embodiment of the present application;
FIG. 19 is a schematic diagram of image upsampling provided in an embodiment of the present application;
FIG. 20 is a schematic diagram of image upsampling provided in another embodiment of the present application;
fig. 21 is a schematic diagram of a current reconstructed image block and an adjacent reconstructed image block according to an embodiment of the present application;
fig. 22 is a schematic diagram of a current reconstructed image block according to an embodiment of the present application;
FIG. 23 is a schematic diagram of a right boundary and a border of the right boundary provided by an embodiment of the present application;
fig. 24 is a flowchart of an image processing method according to another embodiment of the present application;
FIG. 25 is a flowchart of an image processing method according to yet another embodiment of the present application;
fig. 26 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 27 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present application;
fig. 28 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present application;
FIG. 29 is a block diagram of an image processing system according to the present application;
fig. 30 is a schematic structural diagram of an image processing system according to the present application.
Detailed Description
Hereinafter, some terms of art in the present application will be explained to facilitate understanding by those skilled in the art.
Digital video is video recorded in digital form. Digital video consists of frames of digital images. The original video usually includes a large amount of data, which is not suitable for storage and transmission, and the original data needs to be compressed by using an efficient video compression coding technique.
Video compression techniques achieve compression by eliminating video redundancy. Video redundancy mainly includes the following items: spatial redundancy, temporal redundancy, visual redundancy and information entropy redundancy.
Spatial redundancy: it is the most dominant data redundancy that exists for still images. It means that in an image, the amplitudes of adjacent pixels are relatively similar, and this spatial coherence is called spatial correlation or spatial redundancy. The spatial redundancy is mainly eliminated by an intra-frame prediction method, wherein the intra-frame prediction method is to use the correlation of a video spatial domain to predict the pixels of the current reconstructed image block by using the pixels of the reference reconstructed image block so as to achieve the purpose of removing the video spatial redundancy.
Temporal redundancy: it is a redundancy often present in a video sequence. Adjacent images of a video usually contain the same or similar background and moving objects, with only slightly different spatial positions of the moving objects; this high correlation of data between adjacent images is called temporal redundancy. Temporal redundancy is mainly eliminated by inter prediction techniques, which predict the current pixels using pixels of temporally adjacent images.
Visual redundancy: the human visual system is not sensitive to small changes in image detail, so even if such slightly changed information is lost, the human eye cannot perceive the loss. When recording raw video data, it is generally assumed that the visual system is equally sensitive to all content, which yields more data than ideally needs to be encoded; this excess is referred to as visual redundancy. Visual redundancy is mainly eliminated by transform and quantization techniques. The transform technique converts the image signal into the frequency domain for processing, and performs data expression and bit redistribution according to the contribution of different frequency signals to visual quality, which corrects the unreasonable expression produced by uniform sampling in the spatial domain. Meanwhile, the requirement of removing visual redundancy is taken into account in the bit-redistribution process: through quantization, excessively fine expression of high-frequency components is discarded, thereby achieving effective compression.
Information entropy redundancy: according to information theory, to represent a pixel of image data, only the number of bits corresponding to its information entropy needs to be allocated. However, the information entropy of each pixel is difficult to obtain when an image is acquired, so the same number of bits is generally used to represent every pixel, and redundancy is therefore inevitable. Information entropy redundancy is mainly eliminated by entropy coding, which allocates different numbers of bits to data with different information entropies based on the statistical distribution of the coefficients.
The current mainstream video compression coding architecture is a hybrid coding architecture, and for the redundancy, different technologies are adopted to eliminate the redundancy, and the technologies are combined together to form the hybrid architecture of video coding. As shown in fig. 3, after the encoding end receives the video, each frame of image constituting the video is divided into image blocks to be encoded. For a current image block to be coded, firstly, predicting the current image block to be coded by referring to a reconstructed image block to obtain a prediction signal of the current image block to be coded; and subtracting the prediction signal from the original signal of the current image block to be coded to obtain a residual signal. After prediction, the amplitude of the residual signal is much smaller than the original signal. The residual signal is subjected to transform and quantization operations. And after transformation and quantization, obtaining a transformation quantization coefficient, and coding the quantization coefficient and other indication information in coding by an entropy coding technology to obtain a code stream. Furthermore, the encoding end needs to reconstruct the current image block to be encoded, so as to provide reference pixels for encoding the subsequent image block to be encoded. Specifically, after obtaining the transform quantization coefficient of the current image block to be encoded, the encoding end needs to perform inverse quantization and inverse transform on the transform quantization coefficient of the current image block to be encoded to obtain a reconstructed residual signal, add the reconstructed residual signal to the prediction signal corresponding to the current image block to be encoded to obtain a reconstructed signal of the current image block to be encoded, and obtain a reconstructed image block according to the reconstructed signal.
As shown in fig. 4, after the decoding end obtains the code stream, it first performs entropy decoding on the code stream to obtain a transform quantization coefficient of the current image block to be reconstructed, and then performs inverse quantization and inverse transform on the transform quantization coefficient to obtain a reconstructed residual signal of the current image block to be reconstructed. And predicting the current image block to be reconstructed by referring to the reconstructed image block to obtain a prediction signal of the current image block to be reconstructed, adding the prediction signal and the reconstructed residual signal to obtain a reconstructed signal of the current image block to be reconstructed, and then obtaining the current reconstructed image block corresponding to the current image block to be reconstructed according to the reconstructed signal.
In order to reduce the complexity of encoding and decoding, the encoding end performs downsampling on each frame of image, as shown in fig. 5, the encoding end performs downsampling on the whole image, and then encodes each image block to be encoded in the downsampled image to obtain a code stream. And the resolution of the reconstruction image block corresponding to each image block to be coded is the down-sampling resolution. Correspondingly, the decoding end analyzes the code stream, the resolution of each image block to be reconstructed is a down-sampling resolution, the resolution of the corresponding reconstructed image block is also the down-sampling resolution, and the decoding end needs to perform up-sampling processing on the reconstructed image block to obtain the reconstructed image block with the original resolution.
At both the encoding end and the decoding end, the current image block (the current image block to be encoded or the current image block to be reconstructed) is predicted with reference to a reference reconstructed image block to obtain a prediction signal of the current image block. In the present application, the prediction mode (mainly an intra prediction method) of the current image block may use the prior art, specifically as follows:
for example: fig. 6 is a schematic diagram of an image being encoded according to an embodiment of the present application, and as shown in fig. 6, the image includes a plurality of image blocks, where an encoding order of the image is: from top to bottom and from left to right. In fig. 6, the image blocks C, B, D, E and a represent reconstructed image blocks that have completed reconstruction, the image block F is the current image block to be encoded, and the other areas in the image are non-encoded image areas.
The specific process of the intra prediction method is described in the H.265 standard. H.265 supports dividing the current image block to be encoded into smaller sub-image blocks for the prediction operation. The division structure of the sub-image blocks is a quadtree: an image block can be divided into four sub-image blocks, and each sub-image block can be further divided into four sub-image blocks. As shown in fig. 6, the current image block to be encoded is assumed to be divided into 7 sub-image blocks for the prediction operation; it may also be divided into more sub-image blocks. For each sub-image block, the prediction operation is first performed to obtain a prediction signal, then a residual signal of the sub-image block is obtained from the prediction signal, and the residual signal is further transformed, quantized, and entropy-encoded. For the prediction operation, 35 intra prediction methods are available for each sub-image block, including the Planar mode, the DC mode, and 33 angular prediction modes. All prediction modes use the same reference pixel template (consisting of multiple reference pixels). Fig. 7 is a schematic diagram of a reference pixel template provided in an embodiment of the present application. As shown in fig. 7, the pixels P1,1, P2,1, …, PN,1, …, P1,N, P2,N, …, PN,N constitute the sub-image block to be encoded, for example, sub-image block 1 in fig. 6. As shown in fig. 7, besides the sub-image block to be encoded, the other reference pixels R0,0, R1,0, …, R2N+1,0, …, R0,2N form the reference pixel template. Assuming that the sub-image block to be encoded is sub-image block 1 in fig. 6, part of the reference pixels are the pixels in the last row of the reference reconstructed image block B, and another part of the reference pixels are the pixels in the rightmost column of the reference reconstructed image block A. For other standards, part of the reference pixels may be pixels of several lower rows of the reference reconstructed image block B, and another part may be pixels of several right columns of the reference reconstructed image block A. That is, the present application does not limit the reference pixel template.
Planar mode
The Planar mode is suitable for areas in which pixel values change slowly. Figs. 8A and 8B are schematic diagrams of the Planar mode provided in an embodiment of the present application. As shown in the figures, two linear filters in the horizontal and vertical directions are used to obtain two predicted values (the horizontal and vertical interpolation results), and the average of the two predicted values is used as the prediction signal for the pixel (x, y).
DC mode
The DC mode is suitable for large flat areas. The prediction signal of the current sub-image block to be encoded is obtained by averaging the reference pixels on the left side and the upper side of the current sub-image block. As shown in FIG. 7, the prediction signal of each pixel in the sub-image block to be encoded is the average value of R0,1, …, R0,N, R1,0, …, RN,0.
Angular mode
H.265/HEVC specifies 33 angular prediction modes to better accommodate different directional textures in video content. Fig. 9 shows the specific directions of the 33 angular prediction modes provided in an embodiment of the present application. As shown in fig. 9, the 33 angular prediction modes are divided into horizontal modes (2 to 17) and vertical modes (18 to 34), where V0 (mode 26) and H0 (mode 10) represent the vertical and horizontal directions, respectively; the prediction directions of the remaining angular modes can be regarded as angular offsets from the vertical or horizontal direction. The angular prediction process is illustrated here using the vertical direction V0 (mode 26): the current sub-image block to be encoded is predicted using the row of reference pixels adjacent to it, and the prediction signal of each pixel in the sub-image block equals the pixel value of the reference pixel in the same column, i.e. Px,y = Rx,0. For the other angular prediction modes, there is an angular offset from the horizontal or vertical direction, from which the position of the reference pixel can be calculated. The calculated position may fall between two adjacent reference pixels; in that case, a reference pixel is interpolated between the two reference pixels according to the calculated position. The prediction signal is then generated from the obtained reference pixels.
It should be noted that the intra prediction method is also applicable to the decoding end, and is not described herein again.
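A compact sketch of the three prediction families just described (DC, pure vertical as an example angular mode, and a simplified Planar average of the horizontal and vertical interpolations) for an N x N sub-block is given below; it follows the spirit of the reference template in Fig. 7 but omits the rounding and boundary-smoothing details of the actual H.265 specification.

```python
import numpy as np

def intra_predict(mode, top, left, N):
    """top: reference row of length N+1 (top[N] is the top-right sample);
    left: reference column of length N+1 (left[N] is the bottom-left sample)."""
    pred = np.zeros((N, N), dtype=np.float64)        # pred[y, x]
    if mode == "dc":                                 # average of left and top references
        pred[:, :] = (top[:N].sum() + left[:N].sum()) / (2 * N)
    elif mode == "vertical":                         # V0: copy the reference above each column
        pred[:, :] = top[:N][np.newaxis, :]
    elif mode == "planar":                           # average of horizontal/vertical interpolation
        for y in range(N):
            for x in range(N):
                horiz = (N - 1 - x) * left[y] + (x + 1) * top[N]
                vert = (N - 1 - y) * top[x] + (y + 1) * left[N]
                pred[y, x] = (horiz + vert) / (2 * N)
    return pred
```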
The application also relates to image down-sampling and image up-sampling processes.
The image downsampling process involves three aspects of information: 1. the downsampling ratio; 2. the downsampling position; 3. the filter used for downsampling.
The down-sampling ratio is a ratio between the original image and the down-sampled image, and can be described in the horizontal direction and the vertical direction, respectively. For example, the image signal may be down-sampled 2:1 in the horizontal direction and 4:1 in the vertical direction; or not sampling in the horizontal direction and sampling in the vertical direction at a ratio of 2: 1; or 2:1 down-sampling in both horizontal and vertical directions, etc.
The down-sampling position refers to the position relationship between the down-sampling point and the original sampling point, for example, the position of the down-sampling point may be the same as the position of a part of the original sampling point, or the down-sampling point may fall between several original sampling points.
The downsampling filter may be a 3-lobe Lanczos filter, a bilinear filter, a bicubic filter, a Gaussian filter, or the like.
The downsampling process is described below by taking an image block with a resolution of 16 × 16 as an example (an actual image would be much larger, e.g. 1920 × 1080). Fig. 10 is a schematic diagram of image downsampling provided in an embodiment of the present application. Assume that the sampling ratios in the horizontal direction and the vertical direction are both 2:1, that in the horizontal direction the downsampling point falls at the position of the left one of two original sampling points, and that in the vertical direction the downsampling point falls at the position of the upper one of two original sampling points. As shown in fig. 10, the circles represent the positions of the downsampling points, and the downsampling filter is as follows:
[The coefficients of the downsampling filter are given in the original document as an equation image and are not reproduced here; the description below indicates a simple separable low-pass kernel covering the sample point and its eight neighbors.]
the filter is a simple low pass filter that can be viewed as a two-dimensional filter or as two one-dimensional filters. If it is considered as a two-dimensional filter, down-sampling in the horizontal and vertical directions can be done simultaneously in one filtering operation. As shown in fig. 10, when downsampling a downsampled point a, the pixel value of the downsampled point a is calculated from the above filter using 8 neighboring original sample points (circles surrounded by triangles). If the filter is used as two one-dimensional filters, down-sampling in the horizontal or vertical direction needs to be completed first, and then down-sampling in the vertical or horizontal direction needs to be performed on the result of the completed down-sampling in the horizontal or vertical direction. As shown in fig. 10, when down-sampling the down-sampling point a, horizontal down-sampling is performed by using one original sampling point on the left and right sides of the down-sampling point a, then vertical down-sampling is performed on the down-sampled result by using one original sampling point on the top and bottom sides of the down-sampling point a, and the pixel value of the down-sampling point a is calculated according to the filter. And (3) performing downsampling processing on the whole 16 × 16 image block by using the same method, wherein the final downsampling result is shown in fig. 10, the positions of the downsampling points are shown as circles formed by frames, and the pixel values of the downsampling points are numerical values after the filter operation. As shown in fig. 10, the resolution of the downsampled image block is 8 × 8.
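The separable 2:1 downsampling described above can be sketched as follows. Because the patent's filter coefficients are given only as an equation image, the [1, 2, 1]/4 kernel used here is an assumed simple low-pass filter that is merely consistent with the description (one original sampling point on each side per direction); it is not necessarily the filter of fig. 10.

def downsample_2to1(img):
    # separable 2:1 downsampling: horizontal pass, then vertical pass on its result;
    # the kept sample is the left/upper one of each pair, border samples are repeated
    def lowpass_half(line):
        out = []
        for i in range(0, len(line), 2):
            l = line[max(i - 1, 0)]
            r = line[min(i + 1, len(line) - 1)]
            out.append((l + 2 * line[i] + r + 2) // 4)   # assumed [1, 2, 1] / 4 kernel
        return out

    rows = [lowpass_half(row) for row in img]               # e.g. 16 x 16 -> 16 x 8
    cols = [lowpass_half(list(col)) for col in zip(*rows)]  # 16 x 8 -> 8 x 8 (transposed)
    return [list(row) for row in zip(*cols)]                # transpose back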
Generally, an encoding end or a decoding end needs to perform upsampling processing on a downsampled image in order to obtain an image with the original resolution. The upsampling process involves three aspects of information: 1. the upsampling ratio; 2. the upsampling position; 3. the filter used for upsampling.
The up-sampling ratio refers to a ratio of an image before up-sampling to an image after up-sampling, and may be described in a horizontal direction and a vertical direction, respectively. For example, the image signal subjected to upsampling can be subjected to 1:2 upsampling in the horizontal direction and 1:4 upsampling in the vertical direction; or the horizontal direction does not carry out up-sampling, and the vertical direction carries out 1:2 up-sampling; or 1:2 upsampling in both horizontal and vertical directions, etc.
For example, fig. 11 is a schematic diagram of image upsampling provided in an embodiment of the present application. As shown in fig. 11, in the first row, with a horizontal 1:2 upsampling ratio, the position of the upsampled sampling point may be on the right side of the sampling point before upsampling, where x represents the position of the upsampled sampling point and a circle represents the position of the sampling point before upsampling. In the second row, with a horizontal 1:2 upsampling ratio, the position of the upsampled sampling point may be on the left side of the sampling point before upsampling, where x again represents the position of the upsampled sampling point and a circle represents the position of the sampling point before upsampling. It should be noted that the position of the upsampled sampling point should correspond to the position chosen during downsampling, for example: if, during downsampling, the position of the downsampling point was chosen as the position of the original sampling point on the left, then, during upsampling, the upsampled sampling point is placed on the right side of the sampling point before upsampling (the downsampling point).
The upsampling filter may be a DCTIF filter, a bilinear interpolation filter, a sinc filter, or the like. The upsampling process is described below by taking an image block with a resolution of 8 × 8 (i.e., the image block obtained after the above downsampling) as an example. Assume that the upsampling ratios in the horizontal direction and the vertical direction are both 1:2, that the upsampled sampling point in the horizontal direction is located on the right side of the sampling point before upsampling, and that the upsampled sampling point in the vertical direction is located below the sampling point before upsampling. Upsampling in both the horizontal and vertical directions with a DCTIF filter is taken as an example here. The DCTIF filter is (-1, 4, -11, 40, 40, -11, 4, -1). Assuming that horizontal upsampling is currently performed and that, in fig. 11, sampling point B3 needs to be inserted, the pixel value of B3 is determined using the following formula:
B3=(-A0+4*A1-11*A2+40*A3+40*A4-11*A5+4*A6-A7)>>6
For interpolated samples at other locations, such as B7, the four pixels to the right of B7 are needed but are not available; in practice, A7 is typically repeated 4 times to calculate the pixel value of B7. Upsampling in the vertical direction is similar to upsampling in the horizontal direction and is not described in detail here. It is also possible to perform upsampling in the vertical direction first and then in the horizontal direction. Fig. 12 is a schematic diagram of image upsampling according to an embodiment of the present application; as shown in fig. 12, x represents a sampling point after upsampling, and a circle represents a sampling point before upsampling.
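A minimal sketch of the horizontal 1:2 upsampling with the 8-tap DCTIF (-1, 4, -11, 40, 40, -11, 4, -1), including the edge handling in which the last available sample (e.g. A7) is repeated; the >> 6 normalization follows the B3 formula above, and clipping is omitted as in that formula.

DCTIF = (-1, 4, -11, 40, 40, -11, 4, -1)

def upsample_row_2x(row):
    # insert one interpolated sample to the right of every existing sample;
    # samples outside the row are replaced by the nearest edge sample
    # (e.g. A7 repeated when computing B7)
    def sample(i):
        return row[min(max(i, 0), len(row) - 1)]

    out = []
    for i in range(len(row)):
        acc = sum(c * sample(i - 3 + k) for k, c in enumerate(DCTIF))
        out.append(row[i])      # original sample (position before upsampling)
        out.append(acc >> 6)    # interpolated sample, as in the B3 formula
    return out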
The up-sampling filter can also be a 6-tap Wiener filter, and the process of up-sampling the image by the filter is as follows: fig. 13 is a schematic diagram of an upsampled image according to an embodiment of the present application, and as shown in fig. 13, the non-labeled boxes in the diagram indicate integer pixels, aa, bb, cc, dd, ee, ff, gg, hh and b, h, s, m, j are 1/2 position pixels, and others are 1/4 position pixels. The calculation process is to use the six-tap filter of (1, -5, 20, 20, -5, 1) to perform 1/2 position pixel interpolation, and then calculate 1/4 position pixel interpolation by the method of adjacent pixel interpolation, so as to obtain the final up-sampling image. There are three types of pixels: horizontal pixels, vertical pixels, and diagonal pixels. The method comprises the following specific steps:
Horizontal 1/2 position pixel: for example, b = (E - 5F + 20G + 20H - 5I + J); b = Clip1((b + 16) >> 5). The role of Clip1 is to limit the result to the range 0-255; a right shift of 5 is equivalent to a division by 32, and adding 16 rounds the result.
Vertical 1/2 position pixel: for example, h = (A - 5C + 20G + 20M - 5R + T); h = Clip1((h + 16) >> 5).
Diagonal 1/2 position pixel: for example, j = (cc - 5dd + 20h + 20m - 5ee + ff) = (aa - 5bb + 20b + 20s - 5gg + hh); j = Clip1((j + 16) >> 5).
Horizontal 1/4 position pixel: for example, a = (G + b + 1) >> 1; i = (h + j + 1) >> 1.
Vertical 1/4 position pixel: for example, d = (G + h + 1) >> 1; f = (b + j + 1) >> 1.
Diagonal 1/4 position pixel: for example, e = (h + b + 1) >> 1; g = (b + m + 1) >> 1; p = (h + s + 1) >> 1; r = (s + m + 1) >> 1.
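The half-pixel and quarter-pixel steps above can be sketched as follows; Clip1 limits results to the 8-bit range 0-255 as described.

def clip1(v):
    # limit the result to the 8-bit range 0..255
    return max(0, min(255, v))

def half_pel(e, f, g, h, i, j):
    # 6-tap filter (1, -5, 20, 20, -5, 1); +16 rounds, >> 5 divides by 32
    return clip1((e - 5 * f + 20 * g + 20 * h - 5 * i + j + 16) >> 5)

def quarter_pel(p, q):
    # average of two neighbouring pixels, e.g. a = (G + b + 1) >> 1
    return (p + q + 1) >> 1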
In the prior art, the encoding end or the decoding end performs upsampling processing on every reconstructed image block in the whole image by using the same filter. However, the characteristics of the reconstructed image blocks may differ: some reconstructed image blocks may be relatively flat, while others may contain more detail. As a result, in the prior art some reconstructed image blocks are blurred after the upsampling processing, which leads to a poor display effect.
In order to solve the above technical problem, the present application provides an image processing method, an apparatus and a system, and the present application may be based on the encoding schematic diagrams shown in fig. 3 and fig. 5, as shown in fig. 3 and fig. 5, a coding mode of an image block to be encoded included in an image may be an original resolution coding mode shown in fig. 3 or a downsampling coding mode shown in fig. 5. The original resolution coding mode is to directly code the current image block to be coded. The down-sampling coding mode refers to that down-sampling processing is carried out on the current image block to be coded firstly, and then coding operation is carried out on the current image block to be coded after down-sampling. In general, the texture image block adopts an original resolution coding mode, and the smooth image block adopts a downsampling coding mode. The main idea of the application is as follows: if the encoding mode of the current reconstructed image block is a downsampling encoding mode, a filter is selected for the current reconstructed image block, and upsampling processing is performed on the current reconstructed image block through the filter, namely, the filter is selected for different reconstructed image blocks to perform upsampling processing.
Specifically, fig. 14 is a flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 14, the method includes:
step S1401: generating a reconstruction signal of the current image block to be encoded, and reconstructing the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block;
step S1402: and if the coding mode of the current reconstructed image block is a downsampling coding mode, selecting a first filter for performing upsampling processing on the current reconstructed image block from the at least two candidate filters, and performing upsampling processing on the current reconstructed image block through the first filter.
Specifically, in step S1401: the encoding end can acquire the encoding mode of the current image block to be encoded of the current image and the pixels in each reference reconstruction image block; determining a plurality of reference pixels of the current image block to be coded according to the coding mode of the current image block to be coded and the pixels in the M reference reconstructed image blocks; generating a prediction signal of a current image block to be coded according to a plurality of reference pixels; acquiring a coding signal of a current image block to be coded, wherein when the coding mode of the current image block to be coded is an original resolution coding mode, the coding signal is an original signal of the current image block to be coded, and when the coding mode of the current image block to be coded is a down-sampling coding mode, the coding signal is a signal obtained after the down-sampling processing is performed on the original signal of the current image block to be coded; generating a residual signal of a current image block to be coded according to the prediction signal and the coding signal; the residual signal is subjected to transform and quantization operations. And after transformation and quantization, obtaining a transformation and quantization coefficient, carrying out inverse quantization and inverse transformation on the transformation and quantization coefficient of the current image block to be coded by the coding end to obtain a reconstructed residual signal, adding the reconstructed residual signal and a prediction signal corresponding to the current image block to be coded to obtain a reconstructed signal of the current image block to be coded, and obtaining a current reconstructed image according to the reconstructed signal.
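For orientation, steps S1401 and S1402 can be summarized by the following sketch; every helper name used here (derive_reference_pixels, predict, subtract, transform_quantize, inverse_transform_dequantize, add, select_filter, upsample, downsample) is a hypothetical placeholder for the corresponding operation described in the text, not an API defined by the present application.

def encode_and_reconstruct_block(block, coding_mode, reference_blocks, candidate_filters):
    # step S1401: prediction, residual, transform/quantization, reconstruction
    reference_pixels = derive_reference_pixels(coding_mode, reference_blocks)
    prediction = predict(reference_pixels)
    coded_signal = downsample(block) if coding_mode == "downsample" else block
    residual = subtract(coded_signal, prediction)
    coeffs = transform_quantize(residual)

    recon_residual = inverse_transform_dequantize(coeffs)
    reconstructed = add(prediction, recon_residual)

    # step S1402: for the downsampling coding mode, pick a filter per block and upsample
    if coding_mode == "downsample":
        first_filter = select_filter(reconstructed, candidate_filters)
        reconstructed = upsample(reconstructed, first_filter)
    return coeffs, reconstructed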
The current image block to be reconstructed corresponds to M reference reconstructed image blocks, wherein M is a positive integer greater than or equal to 1. The reference reconstructed image block is used for determining a plurality of reference pixels of the image block to be reconstructed, wherein the plurality of reference pixels are used for generating a prediction signal of the current reconstructed image block. In fact, which reconstructed image block is the reference reconstructed image block is related to the prediction mode adopted by the decoding end. When any of the above 35 prediction modes is employed, reference pixel templates as shown in fig. 7 may be referred to.
The prediction signal of the current image block to be reconstructed is generated according to the plurality of reference pixels; any one of the above 35 prediction modes may be adopted, and of course other prediction modes in the prior art may also be adopted, which is not limited in this application.
If the resolution of the reference reconstruction image block is the same as that of the current image block to be reconstructed, at least one reference pixel is directly determined in the reference reconstruction image block; if the current image block to be reconstructed is the original resolution and the resolution of the reference reconstructed image block is the down-sampling resolution, acquiring at least one pixel required for reconstructing the current reconstructed image block from the reference reconstructed image block, and performing up-sampling processing on at least one pixel required for reconstructing the current reconstructed image block to obtain at least one reference pixel of the current image block to be reconstructed; if the current image block to be reconstructed is the down-sampling resolution and the resolution of the reference reconstructed image block is the original resolution, at least one pixel required for reconstructing the current image block is obtained from the reference reconstructed image block, and the down-sampling processing is performed on the pixels so as to obtain at least one reference pixel of the current image block to be reconstructed.
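A sketch of the resolution-matching rule just described; the block structure and the helpers (take_boundary_pixels, upsample_pixels, downsample_pixels) are hypothetical and stand in for the operations described above.

def reference_pixels_for(current_block, reference_block):
    # blocks are assumed to expose a resolution tag; helpers are hypothetical
    pixels = take_boundary_pixels(reference_block)
    if reference_block["resolution"] == current_block["resolution"]:
        return pixels                       # same resolution: use directly
    if current_block["resolution"] == "original":
        return upsample_pixels(pixels)      # reference block is downsampled
    return downsample_pixels(pixels)        # current block is downsampled, reference is original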
Further, the encoding end performs upsampling processing on the current reconstructed image block based on pixels of an adjacent reconstructed image block required by the current reconstructed image block. It should be noted that the required pixels of the adjacent reconstructed image blocks are mainly used for performing upsampling processing on part of the boundary of the current reconstructed image block, and for the parts of the current reconstructed image block except for the part of the boundary, the pixels of the current reconstructed image block are all used for performing upsampling processing. Assuming that the first Filter is a Discrete Cosine Transform-Based Interpolation Filter (DCTIF), in this case, the adjacent reconstructed image blocks required for the current reconstructed image block are specifically as follows: fig. 15 is a schematic diagram of 4 neighboring pixels according to an embodiment of the present application, and as shown in fig. 15, neighboring reconstructed image blocks required by a current reconstructed image block include: an upper image block, a lower image block, a left image block, and a right image block of the current reconstructed image block. Assume that the first filter is a Convolutional Neural Network (CNN) filter. In this case, the adjacent reconstructed image blocks required by the current reconstructed image block are specifically as follows: fig. 16 is a schematic diagram of 8 neighboring pixels according to an embodiment of the present application, and as shown in fig. 16, neighboring reconstructed image blocks required by a current reconstructed image block include: an upper image block, a lower image block, a left image block, a right image block, an upper left image block, a lower left image block, an upper right image block, and a lower right image block of the current reconstructed image block. According to the current coding sequence (from top to bottom, from left to right), the lower image block, the right image block, the left lower image block and the right lower image block of the current reconstructed image block are not reconstructed yet, in the prior art, the up-sampling processing is realized by copying the pixels of the current reconstructed image block, but the mode causes the problem that the current reconstructed image block subjected to the up-sampling processing is discontinuous at the right boundary and the lower boundary. To solve this problem, the present application provides the following four alternatives:
The first mode is that the up-sampling processing is carried out after all the adjacent reconstructed image blocks required by the up-sampling processing of the current reconstructed image block are reconstructed; correspondingly, the code stream includes: a coding mode of each reference reconstruction image block in the M reference reconstruction image blocks; determining a plurality of reference pixels of the current image block to be reconstructed according to the coding mode of the current image block to be reconstructed and the pixels in the M reference reconstructed image blocks, wherein the method comprises the following steps: determining a plurality of reference pixels of the current image block to be reconstructed according to the coding mode of the current image block to be reconstructed, the coding modes of the M reference reconstructed image blocks and the pixels in the M reference reconstructed image blocks.
The second mode is that the up-sampling processing is carried out after all image blocks of the current image are reconstructed; correspondingly, the code stream includes: a coding mode of each reference reconstruction image block in the M reference reconstruction image blocks; determining a plurality of reference pixels of the current image block to be reconstructed according to the coding mode of the current image block to be reconstructed and the pixels in the M reference reconstructed image blocks, wherein the method comprises the following steps: and determining a plurality of reference pixels of the current image block to be reconstructed according to the coding mode of the current image block to be reconstructed, the coding modes of the M reference reconstructed image blocks and the pixels in the M reference reconstructed image blocks.
The third mode of performing upsampling processing on the current reconstructed image block comprises the following steps: performing primary up-sampling processing on the current reconstructed image block according to pixels of a part of the currently reconstructed adjacent reconstructed image blocks in the required adjacent reconstructed image blocks; and if the reconstruction of the other part of the adjacent reconstructed image blocks which are not reconstructed currently in the required adjacent reconstructed image blocks is completed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks, wherein the part of the boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
The fourth mode of performing upsampling processing on the current reconstructed image block comprises the following steps: performing primary up-sampling processing on a current reconstructed image block according to pixels of a part of the adjacent reconstructed image blocks which are required to be reconstructed and where the current reconstructed image block is located and have been reconstructed; if all image blocks of the current image are completely reconstructed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of adjacent reconstructed image blocks in the required adjacent reconstructed image blocks, wherein the other part of adjacent reconstructed image blocks are image blocks which are not completely reconstructed when the first up-sampling processing is performed on the current reconstructed image block; part of the boundary of the current reconstructed image block is contiguous with another part of the neighboring reconstructed image block.
Wherein, part of the boundary of the current reconstructed image block satisfies the condition: in the first upsampling process performed on the current reconstructed image block, the reconstruction of another part of adjacent reconstructed image blocks required by the part of boundary is not completed.
Optionally, the partial boundaries of the current reconstructed image block are a right boundary and a lower boundary of the current reconstructed image block.
Optionally, the one part of the adjacent reconstructed image blocks are an upper side image block and a left side image block of the current reconstructed image block, and the another part of the adjacent reconstructed image blocks are a lower side image block and a right side image block of the current reconstructed image block.
Alternatively,
the one part of the adjacent reconstructed image blocks are the upper left image block, the upper side image block, the upper right image block and the left side image block of the current reconstructed image block, and the another part of the adjacent reconstructed image blocks are the right side image block, the lower left image block, the lower side image block and the lower right image block of the current reconstructed image block.
For the third and fourth modes, in step S1402, the current reconstructed image block is upsampled by the first filter, which is specifically the first upsampling for the current reconstructed image block.
The first mode will be described in detail:
specifically, the adjacent reconstructed image blocks required for the current reconstructed image block are different for different filters. For example: as shown in fig. 15, the neighboring reconstructed image blocks required by the current reconstructed image block include: an upper image block, a lower image block, a left image block, and a right image block of the current reconstructed image block. As shown in fig. 16, the neighboring reconstructed image blocks required for the current reconstructed image block include: an upper image block, a lower image block, a left image block, a right image block, an upper left image block, a lower left image block, an upper right image block, and a lower right image block of the current reconstructed image block.
The current reconstructed image block can be processed by adopting an up-sampling processing method in the prior art. For example: fig. 17 is a schematic diagram of image upsampling provided in an embodiment of the present application, and as shown in fig. 17, all of an adjacent reconstructed image block 1, an adjacent reconstructed image block 2, an adjacent reconstructed image block 3, and an adjacent reconstructed image block 4 required by a current reconstructed image block B have been reconstructed. Based on this, the up-sampling process is performed on the current reconstructed image block B, as shown in fig. 17, where circles in B represent sample points before the up-sampling, and x represents a sample point after the up-sampling. When the up-sampling processing is performed on the B, the up-sampling processing may be performed on the B in the horizontal direction first, and then the up-sampled signal is up-sampled in the vertical direction; alternatively, B may be up-sampled in the vertical direction first, and then the up-sampled signal may be up-sampled in the horizontal direction.
Particularly, if the current reconstructed image block itself is a boundary image block of an image, in this case, even if the required adjacent reconstructed image blocks are reconstructed, the pixels of the current reconstructed image still need to be copied when the upsampling process is performed. For example: as shown in fig. 15, when the currently reconstructed image block is the rightmost image block of an image, its right image block is not present, so that the pixels in the rightmost column included in the currently reconstructed image block can be copied to implement the upsampling process. Of course, other methods may be used to perform the upsampling process, and the present application is not limited thereto.
Further, since the encoding mode of each adjacent reconstructed image block required by the current reconstructed image block may be a downsampling encoding mode or an original resolution encoding mode, when performing upsampling processing on the current reconstructed image block, the following two specific cases are specifically adopted:
1. if the encoding mode of a certain adjacent reconstructed image block is a down-sampling encoding mode, the up-sampling processing can be directly performed on the current reconstructed image block according to the pixels in the adjacent reconstructed image block.
2. If the encoding mode of a certain adjacent reconstructed image block is the original resolution encoding mode, at least one pixel required for up-sampling processing in the pixels of the adjacent reconstructed image block can be acquired, the pixels are subjected to down-sampling processing, and the current reconstructed image block is subjected to up-sampling processing according to at least one pixel subjected to down-sampling processing.
Specifically, the neighboring reconstructed image block is mainly used for performing upsampling processing on a part of the boundary of the current reconstructed image block (the part of the boundary is different according to different filters), for example: as shown in fig. 17, the neighboring reconstructed image block 3 adopts a downsampling coding method, and in this case, the right boundary of the current reconstructed image block B may be upsampled by directly using the pixels included in the neighboring reconstructed image block 3. And the adjacent reconstructed image block 4 adopts the original resolution coding mode, the downsampling processing needs to be performed on the pixels required by the upsampling processing included in the adjacent reconstructed image block 4, or the downsampling processing is performed on the adjacent reconstructed image block 4, and the upsampling processing is performed on the lower boundary of the current reconstructed image block B according to the pixels subjected to the downsampling processing. The downsampling process is applied to the adjacent reconstructed image block 4, and specifically, pixels circled in fig. 16 may be directly taken as downsampled sample points. Or the down-sampling process in the vertical direction is performed on the neighboring reconstructed image blocks 4. Fig. 18 is a schematic diagram of image upsampling according to another embodiment of the present application, as shown in fig. 18, in this case of 8 neighboring pixels, an upsampling processing method of a decoding end for sampling a current reconstructed image block C is similar to that in the case of 4 neighboring pixels, and details thereof are not repeated here.
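The two cases above can be sketched as follows; required_pixels and downsample_pixels are hypothetical helpers standing in for the operations described in the text.

def neighbour_pixels_for_upsampling(neighbour):
    # neighbour is assumed to expose its coding mode; helpers are hypothetical
    pixels = required_pixels(neighbour)
    if neighbour["coding_mode"] == "downsample":
        return pixels                       # case 1: use the neighbour's pixels directly
    return downsample_pixels(pixels)        # case 2: downsample the required pixels first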
It should be noted that, in order to avoid repeatedly upsampling the current reconstructed image block, after the upsampling processing has been performed on the current reconstructed image block, it may be marked that the upsampling processing has been completed for the current reconstructed image block. Alternatively, the upsampling processing may be performed on the current reconstructed image block according to a certain rule: when the upsampling processing is based on the 4-neighborhood pixels, the upsampling processing may be performed on the current reconstructed image block once the lower image block of the current reconstructed image block has been reconstructed; when the upsampling processing is based on the 8-neighborhood pixels, the upsampling processing may be performed on the current reconstructed image block once the lower right image block of the current reconstructed image block has been reconstructed.
The second mode will be described in detail:
when all image blocks of the current image are reconstructed, for each reconstructed image block, the reconstruction of the adjacent reconstructed image block required by the reconstructed image block is completed, and based on the reconstruction, any reconstructed image block adopting downsampling coding can be subjected to upsampling processing. The specific upsampling process is similar to the above-described manner one, and is not described herein again.
The third mode is explained in detail:
in the third mode, the upsampling process performed on the current reconstructed image block includes two upsampling processes. The first upsampling process comprises the following steps: and performing primary up-sampling processing on the current reconstructed image block according to pixels of a part of the currently reconstructed adjacent image blocks in the required adjacent reconstructed image blocks. The second upsampling process comprises the following steps: if the reconstruction of the other part of the adjacent reconstructed image blocks which are not reconstructed currently in the required adjacent reconstructed image blocks is completed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks, wherein the part of the boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, before performing the first upsampling process on the current reconstructed image block, the current reconstructed image block in the downsampling coding mode is saved, and a reference pixel is provided for the subsequent prediction of other image blocks to be reconstructed.
Specifically, assuming that a coding sequence from top to bottom and from left to right is adopted, for the case of performing upsampling processing based on 4-neighbor pixels or 8-neighbor pixels, when performing first upsampling processing on a current reconstructed image block, reconstruction of a right image block, a lower image block, a left lower image block and a right lower image block of the current reconstructed image block is not completed. In this case, the decoding end may copy the rightmost one or more columns of pixels included in the current reconstructed image block to obtain interpolated pixels. And performing upsampling processing on the right boundary included by the current reconstructed image block through the interpolation pixels. The decoding end can also copy the pixels of one or more lines at the bottom of the current reconstructed image block to obtain interpolated pixels. The lower boundary included in the current reconstructed image block is upsampled by the interpolated pixels.
Fig. 19 is a schematic diagram of image upsampling provided in an embodiment of the present application. As shown in fig. 19, assume that the current reconstructed image block is image block A and that the DCTIF filter is used for the upsampling processing. When the second upsampling processing is performed on the current reconstructed image block A, the first upsampling processing has already been completed. As described above, the DCTIF filter requires four pixels on each of the left and right sides; therefore, during the first upsampling processing, the four right-side reference pixels required by the four rightmost columns of x (where x represents a sampling point after the first upsampling) of the current reconstructed image block A were not all available. For example, for the rightmost column of x, none of the four right-side reference pixels required by each x exists. If the adjacent reconstructed image block C has been reconstructed, the second upsampling processing is performed on the right boundary of the current reconstructed image block according to the adjacent reconstructed image block C. The upsampling method is the same as the upsampling method described above, and details are not described herein again.
Similarly, assume that the current reconstructed image block is image block B and that the DCTIF filter is used for the upsampling processing. When the second upsampling processing is performed on the current reconstructed image block B, the first upsampling processing has already been completed. As described above, the DCTIF filter requires four pixels on each of the upper and lower sides; therefore, during the first upsampling processing, the four lower reference pixels required by the four bottom rows of x (where x represents a sampling point after the first upsampling) of the current reconstructed image block B were not all available. For example, for the bottom row of x, none of the four lower reference pixels required by each x exists. If the adjacent reconstructed image block C has been reconstructed, the second upsampling processing is performed on the lower boundary of the current reconstructed image block according to the adjacent reconstructed image block C. The upsampling method is the same as the upsampling method described above, and details are not described herein again.
Fig. 20 is a schematic diagram of image upsampling according to another embodiment of the present application, as shown in fig. 20, in this case of 8 neighboring pixels, an upsampling processing method adopted by a decoding end for a right boundary of a current reconstructed image block a and a lower boundary of a current reconstructed image block B is similar to that in the case of 4 neighboring pixels, and details are not repeated here.
It should be noted that, in order to avoid the repeated upsampling process on the current reconstructed image block, after the second upsampling process is completed on the current reconstructed image block, it may be identified that the upsampling process has been completed on the current reconstructed image block. Or, performing upsampling processing on the current reconstructed image block according to a certain rule. When the upsampling process is based on the 4-neighborhood pixels, once the lower image block of the current reconstructed image block is reconstructed, the current reconstructed image block may be subjected to a second upsampling process. When the upsampling process is based on the 8-neighborhood pixels, once the reconstruction of the lower right image block of the current reconstructed image block is completed, the second upsampling process may be performed on the current reconstructed image block.
The fourth mode will be explained in detail:
when all image blocks of the current image are reconstructed, for each reconstructed image block, the reconstruction of the adjacent reconstructed image block required by the reconstructed image block is completed, and based on the reconstruction, the second upsampling processing can be performed on any reconstructed image block which is subjected to the first upsampling processing. The specific upsampling process is similar to the above-described manner three, and is not described herein again.
Optionally, before performing the first upsampling process on the current reconstructed image block, the current reconstructed image block in the downsampling coding mode is saved, and a reference pixel is provided for the subsequent prediction of other image blocks to be reconstructed.
When the current reconstructed image block is subjected to upsampling processing in the four ways, part of the boundary of the current reconstructed image block is subjected to upsampling processing through the required adjacent reconstructed image block, and in the prior art, part of the boundary of the current reconstructed image block is subjected to upsampling processing by copying pixels of the current reconstructed image block, so that the problem that the boundary of the current reconstructed image block is discontinuous can be solved.
In summary, compared with the prior art in which the same filter is used for the reconstructed image blocks in the whole image, the filter corresponding to each reconstructed image block is selected, that is, the filter is selected in a targeted selection manner, and the reconstructed image blocks are subjected to upsampling processing through the selected filter, so that the reconstructed image blocks with better display effect can be obtained.
Alternatively, the first filter for upsampling the current reconstructed image block may be selected by:
Specifically, case one: the selecting a first filter for performing upsampling processing on the current reconstructed image block from the at least two candidate filters specifically includes: determining the similarity between each adjacent reconstructed image block, among at least two adjacent reconstructed image blocks of the current reconstructed image block, and the current reconstructed image block, wherein the at least two adjacent reconstructed image blocks correspond to at least two second filters when they are upsampled, and the at least two candidate filters include the at least two second filters; and selecting, from the at least two second filters, the second filter corresponding to the adjacent reconstructed image block with the highest similarity to the current reconstructed image block as the first filter.
Using the second filter as the first filter does not merely represent an assignment relationship; it also means that the second filter is directly selected to perform the upsampling processing on the current reconstructed image block.
For the above first and second modes, the neighboring reconstructed image blocks refer to: and among all the adjacent reconstructed image blocks of the current reconstructed image block, the reconstructed image block which is subjected to the upsampling processing is already subjected to the upsampling processing. For the third and fourth modes, the adjacent reconstructed image blocks refer to: and among all the adjacent reconstructed image blocks of the current reconstructed image block, the reconstructed image block which has completed the first upsampling process or the second upsampling process. For the third and fourth modes, the second filter is used for performing first upsampling on the adjacent reconstructed image block, or the second filter is used for performing second upsampling on the adjacent reconstructed image block, which is not limited in this application.
The method for calculating the similarity between the current reconstructed image block and the adjacent reconstructed image block may be: if the resolution of the current reconstructed image block is the same as that of the adjacent reconstructed image block, the difference between each pixel of the current reconstructed image block and the corresponding pixel in the adjacent reconstructed image block is calculated to obtain a corresponding difference, the weighted average value of all the differences of the current reconstructed image block is calculated to finally obtain the error between the current reconstructed image block and the adjacent reconstructed image block, and the smaller the error is, the higher the similarity is. If the resolution of the current reconstructed image block is different from that of the adjacent reconstructed image block, sampling the adjacent reconstructed image block to enable the resolution of the sampled adjacent reconstructed image block to be the same as that of the current reconstructed image block, then calculating the difference between each pixel of the current reconstructed image block and the corresponding pixel in the sampled adjacent reconstructed image block to obtain a corresponding difference value, calculating a weighted average value of all the difference values of the current reconstructed image block, and finally obtaining the error between the current reconstructed image block and the adjacent reconstructed image block, wherein the smaller the error is, the higher the similarity is represented.
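Case one can be sketched as follows. The error measure here is a plain average of absolute per-pixel differences, i.e. a weighted average with equal weights, which is only one possible choice; the application does not restrict the weighting, and any resampling of a neighbour to the current block's resolution is assumed to have been done beforehand.

def block_error(block_a, block_b):
    # average absolute per-pixel difference between two equal-resolution blocks
    diffs = [abs(a - b)
             for row_a, row_b in zip(block_a, block_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

def select_filter_by_similarity(current, neighbours):
    # neighbours: list of (pixels, second_filter) pairs, pixels already at the
    # current block's resolution; smallest error means highest similarity
    _, best_filter = min(neighbours, key=lambda n: block_error(current, n[0]))
    return best_filter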
It should be noted that the manner of determining the similarity between the current reconstructed image block and the adjacent reconstructed image block is not limited in the present application.
Further, if no such adjacent reconstructed image block exists for the current reconstructed image block (for the first and second modes, none of the adjacent reconstructed image blocks of the current reconstructed image block has completed the upsampling processing; or, for the third and fourth modes, none of the adjacent reconstructed image blocks of the current reconstructed image block has completed the first or second upsampling processing), the similarity between the current reconstructed image block and the adjacent reconstructed image blocks of those adjacent reconstructed image blocks may be calculated. The adjacent reconstructed image blocks of the adjacent reconstructed image blocks correspond to at least two filters when they are upsampled; the filter corresponding to the adjacent reconstructed image block of an adjacent reconstructed image block with the highest similarity to the current reconstructed image block is selected from the at least two filters as the first filter. This is not limited in the present application.
Optionally, the present application may also directly use, as the first filter, the second filter that was used when any adjacent reconstructed image block of the current reconstructed image block was upsampled. If no such adjacent reconstructed image block exists for the current reconstructed image block (for the first and second modes, none of the adjacent reconstructed image blocks of the current reconstructed image block has completed the upsampling processing; or, for the third and fourth modes, none of the adjacent reconstructed image blocks of the current reconstructed image block has completed the first or second upsampling processing), an adjacent reconstructed image block that has undergone upsampling processing may be selected from among all the adjacent reconstructed image blocks of those adjacent reconstructed image blocks, and finally the filter used when the selected adjacent reconstructed image block was upsampled is taken as the first filter. This is not limited in the present application.
In case two, selecting a first filter for upsampling the current reconstructed image block from the at least two candidate filters includes: determining at least two second filters used when each adjacent reconstructed image block in at least two adjacent reconstructed image blocks of the current reconstructed image block is subjected to upsampling; the at least two candidate filters include the at least two second filters, and a second filter corresponding to a first adjacent reconstructed image block is selected from the at least two second filters according to the number sequence of each adjacent reconstructed image block and is used as a first filter.
For example, fig. 21 is a schematic diagram of a current reconstructed image block and adjacent reconstructed image blocks according to an embodiment of the present application. As shown in fig. 21, among the adjacent reconstructed image blocks A0, A1, B0, B1 and B2, the adjacent reconstructed image blocks for which the upsampling processing has been completed are determined. Assume that A0, A1, B0, B1 and B2 are all adjacent reconstructed image blocks that have completed the upsampling processing and that their coding order is B1, A1, B2, B0, A0; then, according to the coding order, the second filter corresponding to B1 is selected as the first filter of the current reconstructed image block.
In case three, selecting a first filter for upsampling the current reconstructed image block from the at least two candidate filters includes: determining at least two second filters used when each adjacent reconstructed image block in at least two adjacent reconstructed image blocks of the current reconstructed image block is subjected to upsampling; the at least two candidate filters include the at least two second filters, and the second filter with the highest probability of use is selected as the first filter.
In case four, selecting a first filter from the at least two candidate filters for upsampling the current reconstructed image block includes: a first filter is selected from the at least two candidate filters according to texture features of a current reconstructed image block. The method for selecting the first filter from the at least two candidate filters according to the texture features of the current reconstructed image block comprises the following steps: and selecting the first filter according to a preset mapping relation and the texture features of the current reconstructed image block, wherein the preset mapping relation is the mapping relation between the preset texture features comprising the texture features of the current reconstructed image block and the at least two candidate filters comprising the first filter.
Specifically, detecting the texture features of the current reconstructed image block includes: an edge detection method, a method of determining texture features in a frequency domain, or the like. The following takes the second approach as an example:
Fig. 22 is a schematic diagram of a current reconstructed image block according to an embodiment of the present application. As shown in fig. 22, the current reconstructed image block is an image block with a resolution of 8 × 8. When a Discrete Cosine Transform (DCT) is performed on the current reconstructed image block, 64 coefficients, numbered 0 to 63 as shown in fig. 22, are generated, where i represents the horizontal direction (i = 0 to 7) and j represents the vertical direction (j = 0 to 7).
When the current reconstructed image block satisfies the formula ∑AC² < a·DC² in the frequency domain, the texture feature of the current reconstructed image block is flat; otherwise, the texture feature of the current reconstructed image block is texture.
The left side of the above formula is the sum of the squares of the alternating current (AC) coefficients of the current reconstructed image block, and the right side is the square of the direct current (DC) coefficient of the current reconstructed image block multiplied by a coefficient a, where a is greater than 0 and less than or equal to 1. For example, a may take an empirical value of 0.02.
Further, the preset mapping relationship between the texture features and the filter may be the mapping relationship shown in table 1, and the preset mapping relationship is not limited in the present application.
TABLE 1
Texture features    Filter
Flat                DCTIF filter
Texture             CNN filter
Finally, the first filter is selected according to the preset mapping relationship and the texture feature of the current reconstructed image block. Assuming that the texture feature of the current reconstructed image block is flat and the preset mapping relationship is as shown in table 1, the selected first filter is the DCTIF filter.
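Case four can be sketched as follows, assuming the 8 × 8 DCT coefficients of the current reconstructed image block are already available and taking the empirical value a = 0.02 mentioned above.

def classify_texture(dct_coeffs, a=0.02):
    # dct_coeffs[j][i] are the 8 x 8 DCT coefficients; (0, 0) is the DC coefficient
    dc = dct_coeffs[0][0]
    ac_energy = sum(c * c
                    for j, row in enumerate(dct_coeffs)
                    for i, c in enumerate(row)
                    if (i, j) != (0, 0))
    return "flat" if ac_energy < a * dc * dc else "texture"

# preset mapping relationship of table 1
TEXTURE_TO_FILTER = {"flat": "DCTIF filter", "texture": "CNN filter"}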
In case five, selecting a first filter for performing upsampling on the current reconstructed image block from the at least two candidate filters includes: a first filter is selected from the at least two candidate filters according to texture features of a current reconstructed image block. The method for selecting the first filter from the at least two candidate filters according to the texture features of the current reconstructed image block comprises the following steps: at least two adjacent reconstructed image blocks with the same texture characteristics as the current reconstructed image block are determined in all the adjacent reconstructed image blocks, the at least two adjacent reconstructed image blocks correspond to at least two second filters when being subjected to upsampling, the at least two candidate filters comprise at least two second filters, and the second filter corresponding to the adjacent reconstructed image block with the same texture characteristics as the current reconstructed image block is selected from the at least two second filters to serve as the first filter.
The mapping relationship between the texture features of the adjacent reconstructed image blocks and the second filter may refer to the preset mapping relationship in table 1. This is not limited by the present application.
In case six, selecting a first filter for performing upsampling on the current reconstructed image block from the at least two candidate filters includes: respectively performing up-sampling processing on the current reconstructed image block through at least two candidate filters to obtain up-sampled image blocks respectively corresponding to the at least two candidate filters; respectively calculating errors of the up-sampling image blocks corresponding to the at least two candidate filters and the original image block corresponding to the current reconstructed image block; and taking the candidate filter corresponding to the minimum error as the first filter.
The method for performing upsampling processing on the current reconstructed image block through the filter may refer to the upsampling process, which is not described herein again. Computing errors of an upsampled image block and an original image block, comprising: and calculating the difference between each pixel of the up-sampled image block and the corresponding pixel in the original image block to obtain a corresponding difference value, calculating a weighted average value of all the difference values, and finally obtaining the error between the up-sampled image block and the original image block. The smaller the error between the upsampled image block and the original image block is, the better the upsampling processing effect of the filter corresponding to the upsampled image block is. Conversely, the larger the error of the upsampled image block from the original image block, the worse the upsampling processing effect of the corresponding filter of the upsampled image block is.
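Case six can be sketched as follows, reusing the block_error helper from the case-one sketch; upsample is a hypothetical placeholder for applying a candidate filter to the current reconstructed image block.

def select_filter_by_reconstruction_error(reconstructed, original, candidate_filters):
    # upsample with every candidate filter, compare with the original block,
    # and keep the filter whose result has the smallest error
    return min(candidate_filters,
               key=lambda f: block_error(upsample(reconstructed, f), original))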
It should be noted that, in the present application, the filter may also be selected according to the luminance component and the chrominance component.
In summary, the present application may select the first filter for the current reconstructed image block according to the above six cases. Compared with the prior art that the same filter is adopted for the reconstructed image blocks in the whole image, the corresponding filter is selected for each reconstructed image block, and therefore the reconstructed image block with better display effect can be obtained.
When the up-sampling processing is performed on the current reconstructed image block in the third or fourth mode, because the second up-sampling processing exists, the second up-sampling processing needs to be performed on the current reconstructed image block through the third filter.
The present application provides an image processing method, wherein the third filter may be the first filter or may be selected by referring to the manner of selecting the first filter, and it is emphasized that the third filter performs upsampling processing on a part of the boundary of the current reconstructed image block.
Optionally, before performing, by using the third filter, the secondary upsampling processing on the partial boundary of the current reconstructed image block according to the another part of the adjacent reconstructed image blocks among the required adjacent reconstructed image blocks, the method further includes: judging, according to the another part of the adjacent reconstructed image blocks and the partial boundary of the current reconstructed image block, whether to perform the secondary upsampling processing on the partial boundary; and if it is determined that the secondary upsampling processing is to be performed on the partial boundary of the current reconstructed image block, performing, by using the third filter, the secondary upsampling processing on the partial boundary of the current reconstructed image block according to the another part of the adjacent reconstructed image blocks.
Specifically, if the partial boundary of the current reconstructed image block includes its right boundary and its lower boundary, one or more columns of pixels adjacent to the right boundary are determined in the another part of the adjacent reconstructed image blocks. Whether to perform the secondary upsampling processing on the right boundary is determined according to the right boundary that has undergone the primary upsampling processing and the one or more columns of pixels (the adjacent boundary of the right boundary); when it is determined that the secondary upsampling processing is to be performed on the right boundary, it is considered that the secondary upsampling processing is also to be performed on the lower boundary.
Alternatively,
if the partial boundary of the current reconstructed image block includes its right boundary and its lower boundary, one or more rows of pixels adjacent to the lower boundary are determined in the another part of the adjacent reconstructed image blocks. Whether to perform the secondary upsampling processing on the lower boundary is determined according to the lower boundary that has undergone the primary upsampling processing and the one or more rows of pixels (the adjacent boundary of the lower boundary); when it is determined that the secondary upsampling processing is to be performed on the lower boundary, it is considered that the secondary upsampling processing is also to be performed on the right boundary.
Determining whether to perform secondary upsampling processing on the right boundary according to the right boundary subjected to the primary upsampling processing and the one or more columns of pixels, wherein the method specifically comprises the following steps:
Fig. 23 is a schematic diagram of a right boundary and the adjacent boundary of the right boundary provided by an embodiment of the present application. As shown in fig. 23, if all of the following conditions are met, it is determined that the secondary upsampling processing is to be performed on the right boundary; otherwise, the secondary upsampling processing is not performed on the right boundary.
|p0 - q0| < TH1
|p1 - p0| < TH2
|q1 - q0| < TH3
Where p0 to p3 denote pixel values of respective pixels of the right boundary, and q0 to q3 denote pixel values of respective pixels of an adjacent boundary of the right boundary. TH1, TH2, and TH3 are respectively preset thresholds, and they may be the same or different.
By means of the above method, whether to perform the secondary upsampling processing on the partial boundary can be effectively determined.
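The boundary check above can be sketched as follows; th1, th2 and th3 correspond to the preset thresholds TH1, TH2 and TH3.

def needs_second_upsampling(p0, p1, q0, q1, th1, th2, th3):
    # perform the secondary upsampling on the boundary only when all three conditions hold
    return abs(p0 - q0) < th1 and abs(p1 - p0) < th2 and abs(q1 - q0) < th3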
Optionally, the method further includes: generating a code stream, wherein the code stream comprises: identification information of the first filter.
There may be one or more filters capable of implementing the upsampling processing. The candidate filters may be of different types with different numbers of taps; of the same type with different numbers of taps; or of the same type with the same number of taps but different coefficients. Each filter has corresponding identification information. See tables 2, 3 and 4 for details.
Table 2:
Identification information    Filter
0    DCTIF filter
1    Convolutional neural network (CNN) filter
2    Wiener filter
3    Bilinear interpolation filter
Table 3:
Identification information    Filter
0    [-1 0 9 16 9 0 -1]/16
1    [1 -5 20 20 -5 1]/32
2    [1 -1 -4 9 22 9 -4 -1 1]/32
3    [-8 1 72 126 72 1 -8]/128
Table 4:
Identification information    Filter
0    [-1 0 9 16 9 0 -1]/16
1    [1 -5 20 20 -5 1]/32
2    [1 -1 -4 9 22 9 -4 -1 1]/32
3    [-5 0 21 32 21 0 -5]/32
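To make the role of the tap coefficients concrete, the following sketch performs a 2x horizontal up-sampling of one row of reconstructed pixels with the 6-tap filter [1 -5 20 20 -5 1]/32 listed above (identification information 1 in Tables 3 and 4); the clamping of pixel indices at the row ends, the rounding, and the function names are illustrative assumptions only.

```python
TAPS = [1, -5, 20, 20, -5, 1]   # identification information 1 in Tables 3/4
NORM = 32

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def upsample_row_2x(row, bit_depth=8):
    """2x horizontal up-sampling of one pixel row: each original sample is
    kept, and a half-position sample is interpolated after it."""
    max_val = (1 << bit_depth) - 1
    out = []
    for i in range(len(row)):
        out.append(row[i])                    # integer-position sample
        acc = 0
        for k, tap in enumerate(TAPS):        # half-position sample
            # the 6 taps cover samples i-2 .. i+3; clamp at the row ends
            idx = clamp(i - 2 + k, 0, len(row) - 1)
            acc += tap * row[idx]
        out.append(clamp((acc + NORM // 2) // NORM, 0, max_val))
    return out

print(upsample_row_2x([10, 20, 30, 40]))
```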
By carrying the identification information of the first filter in the code stream, the decoding end can perform the up-sampling processing on the current reconstructed image block through the first filter.
Optionally, a code stream is generated, where the code stream includes: first indication information, wherein the first indication information is used for indicating how to select a filter used when upsampling the current reconstructed image block from at least two candidate filters.
Optionally, the manner of selecting the filter includes: selecting, from the at least two candidate filters according to the texture features of the current reconstructed image block, the filter used when the up-sampling processing is performed on the current reconstructed image block; or selecting, from the at least two candidate filters according to the adjacent reconstructed image blocks of the current reconstructed image block, the filter used when the up-sampling processing is performed on the current reconstructed image block.
Optionally, a code stream is generated, where the code stream includes: and second indication information, wherein the second indication information is used for indicating whether a decoding end needs to perform secondary up-sampling processing on the current reconstructed image block.
It should be noted that the code stream may include at least one of the following: the identification information of the first filter, the first indication information, the second indication information, and the like. Optionally, the code stream further includes the coding information of the current image block to be coded: by performing entropy decoding on the coding information, the decoding end can obtain the transform and quantization coefficients of the current image block to be coded, and can also obtain a prediction signal and the like. The code stream may further include the coding mode of the current image block to be coded, and the like.
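The exact bitstream syntax is not defined here; purely as an illustration of the kind of per-block side information listed above, the sketch below groups it into one structure. All field names and value conventions are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BlockSideInfo:
    """Hypothetical per-block side information; the real syntax element
    names and their entropy coding are not specified by this sketch."""
    coding_mode: str                          # e.g. "downsampling" or "normal"
    filter_id: Optional[int] = None           # identification information of the first filter
    first_indication: Optional[int] = None    # how to select the filter (e.g. 0: by texture, 1: by neighbours)
    second_indication: Optional[bool] = None  # whether secondary up-sampling is needed

info = BlockSideInfo(coding_mode="downsampling", filter_id=1, second_indication=True)
print(info)
```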
The image processing method at the encoding end is mainly described above, and the image processing method at the decoding end is described below.
Specifically, fig. 24 is a flowchart of an image processing method according to another embodiment of the present application, and as shown in fig. 24, the method includes:
step S2401: analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed;
step S2402: generating a reconstruction signal of the current image block to be reconstructed according to the coding information of the current image block to be reconstructed;
step S2403: reconstructing a current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block;
step S2404: if the coding mode of the current image block to be reconstructed is a down-sampling coding mode, selecting a first filter for performing up-sampling processing on the current reconstructed image block according to first indication information acquired from a code stream;
step S2405: and performing upsampling processing on the current reconstructed image block through a first filter.
Wherein the first indication information is used for indicating how to select a filter used for performing upsampling processing on the current reconstructed image block from at least two candidate filters.
The code stream further includes the coding mode of the current image block to be reconstructed and the coding modes of the reconstructed image blocks that have already been reconstructed in the current image in which the current image block to be reconstructed is located. Alternatively, the code stream includes some coding parameters, from which the coding mode of the current image block to be reconstructed, the coding modes of the reconstructed image blocks that have already been reconstructed in the current image, and the like can be determined.
The coding information is used to generate the reconstruction signal of the image block to be reconstructed. For example, the decoding end performs entropy decoding on the coding information to obtain the transform and quantization coefficients of the current image block to be reconstructed, and then performs inverse quantization and inverse transform on those coefficients to obtain a reconstructed residual signal of the current image block to be reconstructed. The current image block to be reconstructed is predicted by using a reference reconstructed image block of the image block to be reconstructed (the information of the reference reconstructed image block belongs to the coding information) to obtain a prediction signal of the current image block to be reconstructed, and the prediction signal and the reconstructed residual signal are then added to obtain the reconstruction signal of the current image block to be reconstructed. The code stream may further include other coding information known in the prior art, which is not limited in this application.
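As a schematic of the reconstruction step just described (entropy decoding, inverse quantization and inverse transform, prediction, then addition), the following sketch wires those stages together; the callables entropy_decode, inverse_quantize, inverse_transform, and predict_block are hypothetical stand-ins for whatever entropy coder, transform, and predictor the codec actually uses.

```python
import numpy as np

def reconstruct_block(coding_info, reference_blocks,
                      entropy_decode, inverse_quantize, inverse_transform, predict_block):
    """Schematic reconstruction of one block from its coding information."""
    coeffs = entropy_decode(coding_info)                        # transform/quantization coefficients
    residual = inverse_transform(inverse_quantize(coeffs))      # reconstructed residual signal
    prediction = predict_block(coding_info, reference_blocks)   # prediction signal
    return prediction + residual                                # reconstruction signal

# Toy usage with trivial stand-ins: identity entropy decoding and transform,
# a x2 "inverse quantization", and a flat prediction of 100.
rec = reconstruct_block(
    coding_info=np.array([1, -2, 3, 0]),
    reference_blocks=None,
    entropy_decode=lambda info: info,
    inverse_quantize=lambda c: c * 2,
    inverse_transform=lambda c: c,
    predict_block=lambda info, refs: np.full(4, 100),
)
print(rec)   # [102  96 106 100]
```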
Optionally, the manner of selecting the filter includes: selecting, from the at least two candidate filters according to the texture features of the current reconstructed image block, the filter used when the up-sampling processing is performed on the current reconstructed image block; or selecting, from the at least two candidate filters according to the adjacent reconstructed image blocks of the current reconstructed image block, the filter used when the up-sampling processing is performed on the current reconstructed image block.
Finally, if the coding mode of the current image block to be reconstructed is the down-sampling coding mode, the first filter for performing the up-sampling processing on the current reconstructed image block is selected according to the first indication information, and the up-sampling processing is performed on the current reconstructed image block through the first filter.
It should be noted that the method for reconstructing the current image block to be reconstructed at the decoding end is similar to that at the encoding end, and the method for performing the up-sampling processing on the current reconstructed image block at the decoding end is also similar to that at the encoding end. Details of these two methods are not repeated herein.
In summary, compared with the prior art in which the same filter is used for all reconstructed image blocks in an entire image, in this application a filter is selected for each reconstructed image block individually, that is, the filter is selected in a targeted manner, and the reconstructed image block is up-sampled through the selected filter, so that a reconstructed image block with a better display effect can be obtained.
Optionally, the first indication information is used to indicate that the filter used when the up-sampling processing is performed on the current reconstructed image block is selected from the at least two candidate filters according to the texture features of the current reconstructed image block. In this case, selecting the first filter for performing the up-sampling processing on the current reconstructed image block according to the first indication information obtained from the code stream includes: selecting the first filter according to a preset mapping relationship and the texture features of the current reconstructed image block, where the preset mapping relationship is a mapping relationship between preset texture features that include the texture features of the current reconstructed image block and the at least two candidate filters that include the first filter.
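The preset mapping relationship itself is left to the configuration; as a hedged illustration only, the sketch below classifies the block's texture by its average horizontal and vertical gradients and looks the resulting class up in an assumed mapping table. The texture classes, thresholds, and filter assignments are invented for this example and are not the mapping defined by this application.

```python
import numpy as np

# Hypothetical preset mapping from texture class to filter identification
# information (cf. Tables 2-4); the classes and assignments are illustrative.
PRESET_MAPPING = {"smooth": 3, "horizontal": 0, "vertical": 1, "complex": 2}

def texture_class(block, smooth_th=4.0):
    """Very coarse texture classification of a reconstructed block."""
    block = np.asarray(block, dtype=float)
    gh = np.abs(np.diff(block, axis=1)).mean()   # average horizontal gradient
    gv = np.abs(np.diff(block, axis=0)).mean()   # average vertical gradient
    if gh < smooth_th and gv < smooth_th:
        return "smooth"
    if gh > 2 * gv:
        return "vertical"      # strong horizontal changes suggest vertical edges
    if gv > 2 * gh:
        return "horizontal"    # strong vertical changes suggest horizontal edges
    return "complex"

def select_filter_by_texture(block, mapping=PRESET_MAPPING):
    return mapping[texture_class(block)]

block = np.tile(np.arange(0, 64, 8), (8, 1))     # strong horizontal gradient
print(select_filter_by_texture(block))           # 1 ("vertical" class)
```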
Optionally, the first indication information is used to indicate that the filter used when the up-sampling processing is performed on the current reconstructed image block is selected from the at least two candidate filters according to the adjacent reconstructed image blocks of the current reconstructed image block. In this case, selecting the first filter for performing the up-sampling processing on the current reconstructed image block according to the first indication information obtained from the code stream includes: determining a similarity between each of at least two adjacent reconstructed image blocks and the current reconstructed image block, where the at least two adjacent reconstructed image blocks correspond to at least two second filters when they were up-sampled, and the at least two candidate filters include the at least two second filters; and selecting, from the at least two second filters, the second filter corresponding to the adjacent reconstructed image block with the highest similarity as the first filter.
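The similarity measure is not fixed by this application; the sketch below uses the sum of absolute differences (SAD) between each adjacent reconstructed block and the current reconstructed block (blocks assumed to be of equal size) and reuses the second filter of the most similar neighbour. The SAD measure and the (block, filter id) interface are assumptions made for the example.

```python
import numpy as np

def select_filter_by_neighbours(current_block, neighbours):
    """neighbours: list of (neighbour_block, second_filter_id) pairs, where
    second_filter_id is the filter used when that neighbour was up-sampled.
    Returns the filter of the neighbour most similar to the current block
    (smallest SAD, i.e. highest similarity)."""
    current = np.asarray(current_block, dtype=float)
    best_filter, best_sad = None, float("inf")
    for block, filter_id in neighbours:
        sad = np.abs(np.asarray(block, dtype=float) - current).sum()
        if sad < best_sad:
            best_sad, best_filter = sad, filter_id
    return best_filter

cur = np.full((4, 4), 50)
left = (np.full((4, 4), 52), 0)    # very similar neighbour, up-sampled with filter 0
top = (np.full((4, 4), 90), 2)     # dissimilar neighbour, up-sampled with filter 2
print(select_filter_by_neighbours(cur, [left, top]))   # 0
```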
Optionally, performing the up-sampling processing on the current reconstructed image block through the first filter includes: performing primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of the part of the adjacent reconstructed image blocks that have currently been reconstructed, among the adjacent reconstructed image blocks required for performing the up-sampling processing on the current reconstructed image block. Correspondingly, the method further includes: if the other part of the adjacent reconstructed image blocks that had not been reconstructed, among the required adjacent reconstructed image blocks, have now been reconstructed, performing secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through a third filter, where the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, performing the up-sampling processing on the current reconstructed image block through the first filter includes: performing primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of the part of the adjacent reconstructed image blocks that have currently been reconstructed, among the adjacent reconstructed image blocks required for performing the up-sampling processing on the current reconstructed image block. Correspondingly, the method further includes: if all image blocks of the current image in which the current reconstructed image block is located have been reconstructed, performing secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through a third filter, where the other part of the adjacent reconstructed image blocks are the image blocks that had not been reconstructed when the primary up-sampling processing was performed on the current reconstructed image block, and the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, the third filter may be the first filter, or may be selected in a manner similar to that used for selecting the first filter. It should be emphasized that the third filter performs the up-sampling processing on the partial boundary of the current reconstructed image block.
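To make the two-stage procedure concrete, the following control-flow sketch first up-samples the current reconstructed block using only the neighbours that are already reconstructed, and later, once the remaining neighbours (or the whole image) have been reconstructed, refines only the boundary parts adjacent to them with the third filter. Every helper passed in (primary_upsample, should_refine, refine_boundary) is a placeholder for the operations described above, not an API defined by this application.

```python
def primary_stage(block, available_neighbours, first_filter, primary_upsample):
    """Stage 1: up-sample the current reconstructed block using only the
    adjacent blocks that have already been reconstructed (e.g. left/top)."""
    return primary_upsample(block, available_neighbours, first_filter)

def secondary_stage(upsampled, late_neighbours, third_filter,
                    should_refine, refine_boundary):
    """Stage 2: after the remaining adjacent blocks (e.g. right/bottom) have
    been reconstructed, refine only the boundary parts adjacent to them."""
    for side, neighbour in late_neighbours.items():    # keys such as "right", "bottom"
        if should_refine(upsampled, neighbour, side):  # cf. the threshold check earlier
            upsampled = refine_boundary(upsampled, neighbour, side, third_filter)
    return upsampled
```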
Optionally, before the secondary up-sampling processing is performed, through the third filter, on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks, the method further includes: judging, according to the partial boundary of the current reconstructed image block and the other part of the adjacent reconstructed image blocks, whether to perform the secondary up-sampling processing on the partial boundary; and if it is determined to perform the secondary up-sampling processing on the partial boundary of the current reconstructed image block, performing the secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through the third filter.
The above method is the same as the corresponding method of the encoding end, and the corresponding content and effect are not described herein again.
Optionally, the code stream further includes second indication information. Correspondingly, performing the secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through the third filter includes: if the second indication information indicates that the secondary up-sampling processing needs to be performed on the current reconstructed image block, performing the secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through the third filter.
Whether to perform the secondary up-sampling processing on the current reconstructed image block can be determined through the above two optional manners. When the secondary up-sampling processing is not performed on the current reconstructed image block, the overhead at the decoding end can be reduced; when the secondary up-sampling processing is performed, the problem of boundary discontinuity of the current reconstructed image block can be resolved.
Fig. 25 is a flowchart of an image processing method according to still another embodiment of the present application, and as shown in fig. 25, the method includes:
step S2501: analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed;
step S2502: generating a reconstruction signal of the current image block to be reconstructed according to the coding information, and reconstructing the current image block to be reconstructed according to the reconstruction signal to obtain the current reconstructed image block;
step S2503: and if the coding mode of the current image block to be reconstructed is a down-sampling coding mode, acquiring identification information of the first filter from the code stream, and performing up-sampling processing on the current reconstructed image block through the first filter identified by the identification information.
The code stream further includes the coding mode of the current image block to be reconstructed and the coding modes of the reconstructed image blocks that have already been reconstructed in the current image in which the current image block to be reconstructed is located. The coding information is used to generate the reconstruction signal of the image block to be reconstructed. For example, the decoding end performs entropy decoding on the coding information to obtain the transform and quantization coefficients of the current image block to be reconstructed, and then performs inverse quantization and inverse transform on those coefficients to obtain a reconstructed residual signal of the current image block to be reconstructed. The current image block to be reconstructed is predicted by using a reference reconstructed image block of the image block to be reconstructed (the information of the reference reconstructed image block belongs to the coding information) to obtain a prediction signal of the current image block to be reconstructed, and the prediction signal and the reconstructed residual signal are then added to obtain the reconstruction signal of the current image block to be reconstructed. The code stream may further include other coding information known in the prior art, which is not limited in this application.
Finally, if the coding mode of the current image block to be reconstructed is the down-sampling coding mode, the up-sampling processing is performed on the current reconstructed image block through the first filter identified by the identification information.
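Since in this embodiment the decoding end only needs to look the filter up by its identification information, a minimal sketch of that lookup, using the coefficients of Table 3, might look as follows; the table variable, the (taps, normalization) representation, and the function name are assumptions.

```python
# Tap filters indexed by identification information, as listed in Table 3.
FILTERS_TABLE_3 = {
    0: ([-1, 0, 9, 16, 9, 0, -1], 16),
    1: ([1, -5, 20, 20, -5, 1], 32),
    2: ([1, -1, -4, 9, 22, 9, -4, -1, 1], 32),
    3: ([-8, 1, 72, 126, 72, 1, -8], 128),
}

def filter_from_identification(identification_info, table=FILTERS_TABLE_3):
    """Return (taps, normalization) for the signalled identification information."""
    return table[identification_info]

taps, norm = filter_from_identification(1)
print(taps, norm)   # [1, -5, 20, 20, -5, 1] 32
```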
It should be noted that the method for reconstructing the current image block to be reconstructed at the decoding end is similar to that at the encoding end, and the method for performing the up-sampling processing on the current reconstructed image block at the decoding end is also similar to that at the encoding end. Details of these two methods are not repeated herein.
In summary, compared with the prior art in which the same filter is used for all reconstructed image blocks in an entire image, in this application a filter is selected for each reconstructed image block individually, that is, the filter is selected in a targeted manner, and the reconstructed image block is up-sampled through the selected filter, so that a reconstructed image block with a better display effect can be obtained.
Optionally, performing the up-sampling processing on the current reconstructed image block through the first filter includes: performing primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of the part of the adjacent reconstructed image blocks that have currently been reconstructed, among the adjacent reconstructed image blocks required for performing the up-sampling processing on the current reconstructed image block. Correspondingly, the method further includes: if the other part of the adjacent reconstructed image blocks that had not been reconstructed, among the required adjacent reconstructed image blocks, have now been reconstructed, performing secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through a third filter, where the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, performing the up-sampling processing on the current reconstructed image block through the first filter includes: performing primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of the part of the adjacent reconstructed image blocks that have currently been reconstructed, among the adjacent reconstructed image blocks required for performing the up-sampling processing on the current reconstructed image block. Correspondingly, the method further includes: if all image blocks of the current image in which the current reconstructed image block is located have been reconstructed, performing secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through a third filter, where the other part of the adjacent reconstructed image blocks are the image blocks that had not been reconstructed when the primary up-sampling processing was performed on the current reconstructed image block, and the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, before the secondary up-sampling processing is performed, through the third filter, on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks, the method further includes: judging, according to the partial boundary of the current reconstructed image block and the other part of the adjacent reconstructed image blocks, whether to perform the secondary up-sampling processing on the partial boundary; and if it is determined to perform the secondary up-sampling processing on the partial boundary of the current reconstructed image block, performing the secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through the third filter.
The above method is the same as the corresponding method of the encoding end, and the corresponding content and effect are not described herein again.
Optionally, the code stream further includes second indication information. Correspondingly, performing the secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through the third filter includes: if the second indication information indicates that the secondary up-sampling processing needs to be performed on the current reconstructed image block, performing the secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through the third filter.
Whether to perform the secondary up-sampling processing on the current reconstructed image block can be determined through the above two optional manners. When the secondary up-sampling processing is not performed on the current reconstructed image block, the overhead at the decoding end can be reduced; when the secondary up-sampling processing is performed, the problem of boundary discontinuity of the current reconstructed image block can be resolved.
It should be noted that, in the present application, the encoding end and the decoding end may also perform upsampling processing on a current reconstructed image block through a filter negotiated in advance.
Fig. 26 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 26, the apparatus includes: a generating module 2601, configured to generate a reconstruction signal of a current image block to be encoded, and reconstruct the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block; a selecting module 2602, configured to select, if the encoding mode of the current reconstructed image block is a downsampling encoding mode, a first filter for performing upsampling processing on the current reconstructed image block from among at least two candidate filters; a processing module 2603, configured to perform upsampling on the current reconstructed image block through the first filter.
Optionally, the selecting module 2602 is specifically configured to select the first filter from at least two candidate filters according to a texture feature of the current reconstructed image block.
Optionally, the selecting module 2602 is specifically configured to select the first filter according to a preset mapping relationship and the texture feature of the current reconstructed image block, where the preset mapping relationship is a mapping relationship between preset texture features that include the texture feature of the current reconstructed image block and the at least two candidate filters that include the first filter.
Optionally, the selecting module 2602 is specifically configured to determine a similarity between each of at least two adjacent reconstructed image blocks of the current reconstructed image block and the current reconstructed image block, where the at least two adjacent reconstructed image blocks correspond to at least two second filters when performing upsampling processing, and the at least two candidate filters include the at least two second filters; and selecting a second filter corresponding to the adjacent reconstructed image block with the highest similarity with the current reconstructed image block from the at least two second filters as the first filter.
Optionally, the selecting module 2602 is specifically configured to: respectively performing upsampling processing on the current reconstructed image block through the at least two candidate filters to obtain upsampled image blocks respectively corresponding to the at least two candidate filters; respectively calculating errors of the up-sampling image blocks corresponding to the at least two candidate filters and the original image block corresponding to the current reconstructed image block; and taking the candidate filter corresponding to the minimum error as the first filter.
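As a sketch of this error-based selection, each candidate filter is applied to the current reconstructed block and the filter whose up-sampled result deviates least from the original image block is kept; the use of the sum of squared errors and the callable-based interface are assumptions made for illustration.

```python
import numpy as np

def select_filter_by_error(reconstructed_block, original_block, candidate_filters):
    """candidate_filters: dict mapping filter identification information to a
    callable that up-samples the reconstructed block with that filter.
    Returns the identification information of the filter whose up-sampled
    block has the smallest error against the original image block."""
    original = np.asarray(original_block, dtype=float)
    best_id, best_err = None, float("inf")
    for filter_id, upsample in candidate_filters.items():
        upsampled = np.asarray(upsample(reconstructed_block), dtype=float)
        err = ((upsampled - original) ** 2).sum()   # sum of squared errors
        if err < best_err:
            best_err, best_id = err, filter_id
    return best_id

# Toy usage: two "filters" that scale a 1-D signal differently.
rec = np.array([1.0, 2.0, 3.0])
orig = np.array([2.0, 4.0, 6.0])
candidates = {0: lambda b: b * 2, 1: lambda b: b * 3}
print(select_filter_by_error(rec, orig, candidates))   # 0
```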
Optionally, the processing module 2603 is specifically configured to perform primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of the part of the adjacent reconstructed image blocks that have currently been reconstructed, among the adjacent reconstructed image blocks required for performing the up-sampling processing on the current reconstructed image block; the processing module 2603 is further configured to: if the other part of the adjacent reconstructed image blocks that had not been reconstructed, among the required adjacent reconstructed image blocks, have been reconstructed, perform, by using a third filter, secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks, where the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, the processing module 2603 is specifically configured to perform primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of the part of the adjacent reconstructed image blocks that have currently been reconstructed, among the adjacent reconstructed image blocks required for performing the up-sampling processing on the current reconstructed image block; the processing module 2603 is further configured to: if all image blocks of the current image in which the current reconstructed image block is located have been reconstructed, perform, by using a third filter, secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks in the required adjacent reconstructed image blocks, where the other part of the adjacent reconstructed image blocks are the image blocks that had not been reconstructed when the primary up-sampling processing was performed on the current reconstructed image block, and the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, the third filter is the first filter.
Optionally, the apparatus further includes: a determining module 2604, configured to determine, according to the partial boundary of the current reconstructed image block and the other part of the adjacent reconstructed image blocks, whether to perform secondary up-sampling processing on the partial boundary; the selecting module 2602 is specifically configured to: if the determining module 2604 determines to perform secondary up-sampling processing on the partial boundary of the current reconstructed image block, perform secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through the third filter.
Optionally, the generating module 2601 is further configured to generate a code stream, where the code stream includes: identification information of the first filter.
Optionally, the generating module 2601 is further configured to generate a code stream, where the code stream includes: first indication information, the first indication information being used for indicating how to select a filter from at least two candidate filters to be used in upsampling the current reconstructed image block.
Optionally, the generating module 2601 is further configured to generate a code stream, where the code stream includes: and second indication information, where the second indication information is used to indicate whether the decoding end needs to perform secondary upsampling on the current reconstructed image block.
The image processing device provided by the present application may execute the image processing method corresponding to fig. 14 and the optional manner of the method, and the implementation principle and the technical effect are similar, and are not described herein again.
Fig. 27 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present application, and as shown in fig. 27, the apparatus includes: the analysis module 2701 is configured to analyze the code stream to obtain coding information of a current image block to be reconstructed and a coding mode of the current image block to be reconstructed; the generating module 2702 is configured to generate a reconstruction signal of the current image block to be reconstructed according to the coding information of the current image block to be reconstructed, and reconstruct the current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block; a selecting module 2703, configured to select, if the encoding mode of the current image block to be reconstructed is a downsampling encoding mode, a first filter for performing upsampling processing on the current reconstructed image block according to first indication information obtained from a code stream, where the first indication information is used to indicate how to select a filter used when performing upsampling processing on the current reconstructed image block from at least two candidate filters; the processing module 2704 is configured to perform upsampling processing on the current reconstructed image block through the first filter.
Optionally, the first indication information is used to indicate that a filter used when the upsampling processing is performed on the current reconstructed image block is selected from at least two candidate filters according to texture features of the current reconstructed image block; the selecting module 2703 is specifically configured to: and selecting the first filter according to a preset mapping relation and the texture features of the current reconstructed image block, wherein the preset mapping relation is the mapping relation between the preset texture features comprising the texture features of the current reconstructed image block and the at least two candidate filters comprising the first filter.
Optionally, the first indication information is used to indicate that a filter used when the up-sampling processing is performed on the current reconstructed image block is selected from the at least two candidate filters according to an adjacent reconstructed image block of the current reconstructed image block; the selecting module 2703 is specifically configured to: determining the similarity between each adjacent reconstructed image block and the current reconstructed image block in at least two adjacent reconstructed image blocks, wherein the at least two adjacent reconstructed image blocks correspond to at least two second filters when being subjected to upsampling processing, and the at least two candidate filters comprise the at least two second filters; and selecting a second filter corresponding to the adjacent reconstructed image block with the highest similarity from the at least two second filters as the first filter.
Optionally, the processing module 2704 is specifically configured to perform primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of the part of the adjacent reconstructed image blocks that have currently been reconstructed, among the adjacent reconstructed image blocks required for performing the up-sampling processing on the current reconstructed image block; the processing module 2704 is further configured to: if the other part of the adjacent reconstructed image blocks that had not been reconstructed, among the required adjacent reconstructed image blocks, have been reconstructed, perform, by using a third filter, secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks, where the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
Optionally, the processing module 2704 is specifically configured to: performing primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of a part of currently reconstructed adjacent reconstructed image blocks in the adjacent reconstructed image blocks required by the up-sampling processing of the current reconstructed image block; the processing module 2704 is further configured to, if all image blocks of the current image where a current reconstructed image block is located have been reconstructed, perform, by using a third filter, secondary upsampling on a part of a boundary of the current reconstructed image block according to another part of the required adjacent reconstructed image blocks, where the another part of the adjacent reconstructed image blocks is an image block that has not been reconstructed when performing the first upsampling on the current reconstructed image block; the partial boundary of the current reconstructed image block is adjacent to the other partial adjacent reconstructed image block.
Optionally, the third filter is the first filter.
Optionally, the apparatus further includes: a determining module 2705, configured to determine, according to the partial boundary of the current reconstructed image block and the other part of the adjacent reconstructed image blocks, whether to perform secondary up-sampling processing on the partial boundary; the processing module 2704 is specifically configured to: if the determining module 2705 determines to perform secondary up-sampling processing on the partial boundary of the current reconstructed image block, perform secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through the third filter.
Optionally, the code stream further includes: second indication information; correspondingly, the processing module 2704 is specifically configured to, if the second indication information indicates that the secondary upsampling processing needs to be performed on the current reconstructed image block, perform secondary upsampling processing on a part of the boundary of the current reconstructed image block through a third filter according to another part of the adjacent reconstructed image block in the required adjacent reconstructed image block.
The image processing apparatus provided in the present application may execute the image processing method corresponding to fig. 24 and the optional manner of the method, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 28 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present application, and as shown in fig. 28, the apparatus includes: the parsing module 2801 is configured to parse the code stream to obtain coding information of a current image block to be reconstructed and a coding mode of the current image block to be reconstructed; a generating module 2802, configured to generate a reconstruction signal of a current image block to be reconstructed according to the encoding information, and reconstruct the current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block; the parsing module 2801 is further configured to, if the coding mode of the current image block to be reconstructed is a downsampling coding mode, obtain identification information of a first filter from the code stream; a processing module 2803, configured to perform upsampling on the current reconstructed image block through the first filter identified by the identification information.
The image processing device provided by the present application may execute the image processing method corresponding to fig. 25 and the optional manner of the method, and the implementation principle and the technical effect are similar, and are not described herein again.
The present application provides an image processing apparatus, including: a processor and a memory for storing executable instructions of the processor; wherein the processor may perform the image processing method corresponding to fig. 14 and alternatives to the method. The implementation principle and the technical effect are similar, and the detailed description is omitted here.
The present application provides an image processing apparatus, including: a processor and a memory for storing executable instructions of the processor; wherein the processor may perform the image processing method corresponding to fig. 24 and alternatives to the method. The implementation principle and the technical effect are similar, and the detailed description is omitted here.
The present application provides an image processing apparatus, including: a processor and a memory for storing executable instructions of the processor; wherein the processor may perform the image processing method corresponding to fig. 25 and alternatives to the method. The implementation principle and the technical effect are similar, and the detailed description is omitted here.
Fig. 29 is a schematic structural diagram of an image processing system provided in the present application. As shown in fig. 29, the system includes the above-described image processing apparatus 2901 at the decoding side and the above-described image processing apparatus 2902 at the encoding side.
In the image processing system provided by the present application, the image processing device at the encoding end may execute the image processing method corresponding to fig. 14 and the optional manners of that method, and the image processing device at the decoding end may execute the image processing method corresponding to fig. 24 and the optional manners of that method. The implementation principles and technical effects are similar and are not described herein again.
Fig. 30 is a schematic structural diagram of an image processing system provided in the present application, and as shown in fig. 30, the system includes: the image processing apparatus 3001 on the decoding side and the image processing apparatus 3002 on the encoding side described above.
In the image processing system provided by the present application, the image processing device at the encoding end may execute the image processing method corresponding to fig. 14 and the optional manners of that method, and the image processing device at the decoding end may execute the image processing method corresponding to fig. 25 and the optional manners of that method. The implementation principles and technical effects are similar and are not described herein again.

Claims (36)

1. An image processing method, comprising:
generating a reconstruction signal of a current image block to be encoded, and reconstructing the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block;
if the coding mode of the current reconstructed image block is a downsampling coding mode, selecting a first filter for performing upsampling processing on the current reconstructed image block from at least two candidate filters, and performing upsampling processing on the current reconstructed image block through the first filter;
the upsampling the current reconstructed image block through the first filter includes:
performing primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of a part of adjacent reconstructed image blocks which are required by the current reconstructed image block and are currently reconstructed when the current reconstructed image block is subjected to up-sampling processing;
the method further comprises the following steps: if the reconstruction of the other part of the adjacent reconstructed image blocks which are not reconstructed currently in the required adjacent reconstructed image blocks is completed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through a third filter, wherein the part of the boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks;
if all image blocks of the current image where the current reconstructed image block is located are completely reconstructed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through a third filter, wherein the another part of adjacent reconstructed image blocks are image blocks which are not completely reconstructed when performing first up-sampling processing on the current reconstructed image block; the partial boundary of the current reconstructed image block is adjacent to the other partial adjacent reconstructed image block.
2. The method according to claim 1, wherein the selecting a first filter from the at least two candidate filters for upsampling the current reconstructed image block specifically comprises:
and selecting the first filter from at least two candidate filters according to the texture characteristics of the current reconstructed image block.
3. The method according to claim 2, wherein said selecting the first filter from at least two candidate filters according to texture features of the current reconstructed image block comprises:
and selecting the first filter according to a preset mapping relation and the texture features of the current reconstructed image block, wherein the preset mapping relation is the mapping relation between the preset texture features comprising the texture features of the current reconstructed image block and the at least two candidate filters comprising the first filter.
4. The method according to claim 1, wherein the selecting a first filter from the at least two candidate filters for upsampling the current reconstructed image block specifically comprises:
determining the similarity between each adjacent reconstructed image block of at least two adjacent reconstructed image blocks of the current reconstructed image block and the current reconstructed image block, wherein the at least two adjacent reconstructed image blocks correspond to at least two second filters when being subjected to upsampling processing, and the at least two candidate filters comprise the at least two second filters;
and selecting a second filter corresponding to the adjacent reconstructed image block with the highest similarity with the current reconstructed image block from the at least two second filters as the first filter.
5. The method of claim 1, wherein selecting the first filter from the at least two candidate filters for upsampling the current reconstructed image block comprises:
respectively performing upsampling processing on the current reconstructed image block through the at least two candidate filters to obtain upsampled image blocks respectively corresponding to the at least two candidate filters;
respectively calculating errors of the up-sampling image blocks corresponding to the at least two candidate filters and the original image block corresponding to the current reconstructed image block;
and taking the candidate filter corresponding to the minimum error as the first filter.
6. The method of claim 1, wherein the third filter is the first filter.
7. The method according to claim 1 or 6, wherein before performing the secondary upsampling process on the partial boundary of the current reconstructed image block according to another part of the required adjacent reconstructed image blocks by using the third filter, the method further comprises:
judging whether to perform secondary up-sampling processing on the partial boundary according to the partial boundary of the other part of the adjacent reconstructed image block and the current reconstructed image block;
and if the secondary up-sampling processing is determined to be performed on the partial boundary of the current reconstructed image block, performing secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of adjacent reconstructed image blocks through the third filter.
8. The method of any one of claims 1-6, further comprising:
generating a code stream, wherein the code stream comprises: identification information of the first filter.
9. The method of any one of claims 1-6, further comprising:
generating a code stream, wherein the code stream comprises: first indication information for indicating how to select a filter from at least two candidate filters to use in upsampling the current reconstructed image block.
10. The method of any one of claims 1-6, further comprising:
generating a code stream, wherein the code stream comprises: and second indication information, wherein the second indication information is used for indicating whether a decoding end needs to perform secondary upsampling processing on the current reconstructed image block.
11. An image processing method, comprising:
analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed;
generating a reconstruction signal of the current image block to be reconstructed according to the coding information of the current image block to be reconstructed;
reconstructing a current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block;
if the encoding mode of the current image block to be reconstructed is a down-sampling encoding mode, selecting a first filter for performing up-sampling processing on the current reconstructed image block according to first indication information acquired from the code stream, wherein the first indication information is used for indicating how to select a filter used for performing up-sampling processing on the current reconstructed image block from at least two candidate filters;
performing upsampling processing on the current reconstructed image block through the first filter;
the upsampling the current reconstructed image block through the first filter includes:
performing primary up-sampling processing on the current reconstructed image block through the first filter according to pixels of a part of adjacent reconstructed image blocks which are required by the current reconstructed image block and are currently reconstructed when the current reconstructed image block is subjected to up-sampling processing;
the method further comprises the following steps: if the reconstruction of the other part of the adjacent reconstructed image blocks which are not reconstructed currently in the required adjacent reconstructed image blocks is completed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through a third filter, wherein the part of the boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks;
if all image blocks of the current image where the current reconstructed image block is located are completely reconstructed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through a third filter, wherein the another part of adjacent reconstructed image blocks are image blocks which are not completely reconstructed when performing first up-sampling processing on the current reconstructed image block; the partial boundary of the current reconstructed image block is adjacent to the other partial adjacent reconstructed image block.
12. The method according to claim 11, wherein the first indication information is used to indicate that a filter used in upsampling the current reconstructed image block is selected from the at least two candidate filters according to texture features of the current reconstructed image block;
the selecting a first filter for performing upsampling processing on the current reconstructed image block according to first indication information acquired from the code stream includes:
and selecting the first filter according to a preset mapping relation and the texture features of the current reconstructed image block, wherein the preset mapping relation is the mapping relation between the preset texture features comprising the texture features of the current reconstructed image block and the at least two candidate filters comprising the first filter.
13. The method according to claim 11, wherein the first indication information is used to indicate that a filter used in upsampling the current reconstructed image block is selected from the at least two candidate filters according to an adjacent reconstructed image block of the current reconstructed image block;
the selecting a first filter for performing upsampling processing on the current reconstructed image block according to first indication information acquired from the code stream includes:
determining the similarity between each adjacent reconstructed image block and the current reconstructed image block in at least two adjacent reconstructed image blocks, wherein the at least two adjacent reconstructed image blocks correspond to at least two second filters when being subjected to upsampling processing, and the at least two candidate filters comprise the at least two second filters;
and selecting a second filter corresponding to the adjacent reconstructed image block with the highest similarity from the at least two second filters as the first filter.
14. The method of claim 11, wherein the third filter is the first filter.
15. The method according to claim 11 or 14, wherein before performing the secondary upsampling process on the partial boundary of the current reconstructed image block according to another part of the required neighboring reconstructed image blocks by the third filter, the method further comprises:
judging whether to perform secondary up-sampling processing on the partial boundary according to the partial boundary of the other part of the adjacent reconstructed image block and the current reconstructed image block;
and if the secondary up-sampling processing is determined to be performed on the partial boundary of the current reconstructed image block, performing secondary up-sampling processing on the partial boundary of the current reconstructed image block according to the other part of adjacent reconstructed image blocks through the third filter.
16. The method of claim 11 or 14, wherein the codestream further comprises: second indication information;
correspondingly, the performing, by the third filter, a secondary upsampling process on a partial boundary of the current reconstructed image block according to another part of the required adjacent reconstructed image blocks includes:
and if the second indication information indicates that secondary up-sampling processing needs to be performed on the current reconstructed image block, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of adjacent reconstructed image blocks in the required adjacent reconstructed image block through the third filter.
17. An image processing method, comprising:
analyzing the code stream to acquire the coding information of the current image block to be reconstructed and the coding mode of the current image block to be reconstructed;
generating a reconstruction signal of a current image block to be reconstructed according to the coding information, and reconstructing the current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block;
if the coding mode of the current image block to be reconstructed is a down-sampling coding mode, acquiring identification information of a first filter from the code stream, and performing up-sampling processing on the current reconstructed image block through the first filter identified by the identification information;
the upsampling processing of the current reconstructed image block by the first filter identified by the identification information includes:
performing primary up-sampling processing on the current reconstructed image block through the first filter identified by the identification information according to pixels of a part of currently reconstructed adjacent reconstructed image blocks in the adjacent reconstructed image blocks required by the up-sampling processing of the current reconstructed image block;
the method further comprises the following steps: if the reconstruction of the other part of the adjacent reconstructed image blocks which are not reconstructed currently in the required adjacent reconstructed image blocks is completed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks through a third filter, wherein the part of the boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks;
if all image blocks of the current image where the current reconstructed image block is located are completely reconstructed, performing secondary up-sampling processing on part of the boundary of the current reconstructed image block according to another part of adjacent reconstructed image blocks in the required adjacent reconstructed image blocks through a third filter, wherein the another part of adjacent reconstructed image blocks are image blocks which are not completely reconstructed when performing first up-sampling processing on the current reconstructed image block; the partial boundary of the current reconstructed image block is adjacent to the other partial adjacent reconstructed image block.
18. An image processing apparatus characterized by comprising:
the generating module is used for generating a reconstruction signal of the current image block to be encoded and reconstructing the current image block to be encoded according to the reconstruction signal to obtain a current reconstructed image block;
the selection module is used for selecting a first filter for performing upsampling processing on the current reconstructed image block from at least two candidate filters if the coding mode of the current reconstructed image block is a downsampling coding mode;
the processing module is used for performing up-sampling processing on the current reconstructed image block through the first filter;
the processing module is specifically configured to perform, by using the first filter, primary upsampling processing on the current reconstructed image block according to pixels of a part of adjacent reconstructed image blocks which are required by the current reconstructed image block and are currently reconstructed when the upsampling processing is performed on the current reconstructed image block;
the processing module is further configured to perform, if another part of adjacent reconstructed image blocks, which are not currently reconstructed, of the required adjacent reconstructed image blocks have already been reconstructed, a second upsampling process on a partial boundary of the current reconstructed image block according to the another part of adjacent reconstructed image blocks through a third filter, where the partial boundary of the current reconstructed image block is adjacent to the another part of adjacent reconstructed image blocks;
the processing module is further configured to perform, by using a third filter, secondary upsampling on a partial boundary of the current reconstructed image block according to another part of the required adjacent reconstructed image blocks if all image blocks of the current image in which the current reconstructed image block is located have been reconstructed, where the another part of the adjacent reconstructed image blocks are image blocks that have not been reconstructed when the current reconstructed image block is subjected to the first upsampling; the partial boundary of the current reconstructed image block is adjacent to the other partial adjacent reconstructed image block.
19. The apparatus of claim 18,
the selection module is specifically configured to select the first filter from the at least two candidate filters according to a texture feature of the current reconstructed image block.
20. The apparatus of claim 19,
the selection module is specifically configured to select the first filter according to a preset mapping relationship and the texture feature of the current reconstructed image block, wherein the preset mapping relationship is a mapping relationship between preset texture features, which include the texture feature of the current reconstructed image block, and the at least two candidate filters, which include the first filter.
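Claims 19 and 20 select the first filter through a preset mapping between texture features and candidate filters. A minimal Python sketch of such a lookup follows; the gradient-energy classifier, the three texture classes, the threshold and the example kernels are assumptions chosen for illustration, since the claims leave the concrete texture feature and mapping open.

import numpy as np

# Illustrative preset mapping from texture classes to candidate filter kernels.
CANDIDATE_FILTERS = {
    "smooth":     np.ones((3, 3)) / 9.0,              # averaging filter
    "horizontal": np.array([[1.0, 2.0, 1.0]]) / 4.0,  # 1-D horizontal filter
    "vertical":   np.array([[1.0], [2.0], [1.0]]) / 4.0,  # 1-D vertical filter
}

def texture_class(block):
    # Classify the block by comparing horizontal and vertical gradient energy
    # (one possible texture feature; not prescribed by the claims).
    gy, gx = np.gradient(block.astype(np.float64))
    ex, ey = float(np.sum(gx ** 2)), float(np.sum(gy ** 2))
    if max(ex, ey) < 1e-3 * block.size:
        return "smooth"
    return "horizontal" if ex >= ey else "vertical"

def select_first_filter(cur_reconstructed_block):
    # Look up the first filter via the preset mapping and the block's texture feature.
    return CANDIDATE_FILTERS[texture_class(cur_reconstructed_block)]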
21. The apparatus of claim 18, wherein the selection module is specifically configured to:
determining a similarity between the current reconstructed image block and each of at least two adjacent reconstructed image blocks of the current reconstructed image block, wherein the at least two adjacent reconstructed image blocks correspond to at least two second filters used when the at least two adjacent reconstructed image blocks were upsampled, and the at least two candidate filters comprise the at least two second filters;
and selecting, from the at least two second filters, the second filter corresponding to the adjacent reconstructed image block having the highest similarity to the current reconstructed image block as the first filter.
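Claim 21 reuses the filter of the most similar, already upsampled neighbour. The following Python sketch is one possible reading of that rule; the mean/standard-deviation similarity measure and the function names are assumptions, since the claim only requires some similarity between the current block and each neighbour.

import numpy as np

def block_similarity(a, b):
    # Illustrative similarity: negative difference of simple pixel statistics.
    return (-abs(float(np.mean(a)) - float(np.mean(b)))
            - abs(float(np.std(a)) - float(np.std(b))))

def select_filter_from_neighbours(cur_block, neighbours):
    # neighbours: list of (reconstructed_block, second_filter) pairs, where
    # second_filter is the filter used when that neighbour was upsampled.
    _best_block, best_filter = max(neighbours,
                                   key=lambda nb: block_similarity(cur_block, nb[0]))
    return best_filter  # becomes the "first filter" for the current block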
22. The apparatus of claim 18, wherein the selection module is specifically configured to:
respectively performing upsampling processing on the current reconstructed image block through the at least two candidate filters to obtain upsampled image blocks respectively corresponding to the at least two candidate filters;
respectively calculating errors between the upsampled image blocks corresponding to the at least two candidate filters and the original image block corresponding to the current reconstructed image block;
and taking the candidate filter corresponding to the minimum error as the first filter.
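Claim 22 describes an encoder-side exhaustive test: upsample the current reconstructed block with every candidate filter, compare each result with the original block, and keep the filter with the smallest error. The sketch below assumes that "error" means sum of squared differences, that each candidate is a small convolution kernel applied after nearest-neighbour expansion, and that the original block is available at full resolution; the helper names are hypothetical.

import numpy as np

def upsample_with(block, kernel, factor=2):
    # Nearest-neighbour expansion followed by 2-D filtering with the candidate
    # kernel; a generic stand-in for whatever each candidate filter performs.
    up = np.kron(block, np.ones((factor, factor))).astype(np.float64)
    kh, kw = kernel.shape
    padded = np.pad(up, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(up)
    for i in range(up.shape[0]):
        for j in range(up.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def select_filter_by_error(cur_block, original_block, candidate_kernels, factor=2):
    # Encoder-side selection: keep the candidate whose upsampled result is
    # closest (sum of squared errors) to the original image block.
    def sse(kernel):
        return float(np.sum((upsample_with(cur_block, kernel, factor) - original_block) ** 2))
    return min(candidate_kernels, key=sse)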
23. The apparatus of claim 18, wherein the third filter is the first filter.
24. The apparatus of claim 18 or 23, further comprising:
a judging module, configured to judge, according to the other part of the adjacent reconstructed image blocks and the partial boundary of the current reconstructed image block, whether to perform the secondary upsampling on the partial boundary;
the selection module is specifically configured to: if the judging module determines to perform the secondary upsampling on the partial boundary of the current reconstructed image block, perform, by using the third filter, the secondary upsampling on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks.
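Claim 24 (and claim 32 on the decoder side) adds a judgment of whether the secondary upsampling of the boundary is worthwhile. The claims do not fix the decision rule; the sketch below assumes a simple one, skipping the re-filtering when the average jump across the boundary is already small. The threshold, the measure and the low-resolution neighbour layout are assumptions of this illustration.

import numpy as np

def needs_secondary_upsampling(up_block, neighbour_block, side="right",
                               threshold=8.0, factor=2):
    # Hypothetical decision rule: re-filter the boundary only if the mean
    # absolute jump across it exceeds a threshold.
    if side == "right":
        cur_edge = np.asarray(up_block, dtype=np.float64)[:, -1]
        nb_edge = np.repeat(np.asarray(neighbour_block, dtype=np.float64)[:, 0], factor)
    else:  # "bottom"
        cur_edge = np.asarray(up_block, dtype=np.float64)[-1, :]
        nb_edge = np.repeat(np.asarray(neighbour_block, dtype=np.float64)[0, :], factor)
    return float(np.mean(np.abs(cur_edge - nb_edge))) > threshold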
25. The apparatus according to any one of claims 18 to 23,
the generating module is further configured to generate a code stream, where the code stream includes: identification information of the first filter.
26. The apparatus according to any one of claims 18 to 23,
the generating module is further configured to generate a code stream, where the code stream includes: first indication information for indicating how to select, from at least two candidate filters, the filter used when upsampling the current reconstructed image block.
27. The apparatus according to any one of claims 18 to 23,
the generating module is further configured to generate a code stream, where the code stream includes: and second indication information, wherein the second indication information is used for indicating whether a decoding end needs to perform secondary upsampling processing on the current reconstructed image block.
28. An image processing apparatus characterized by comprising:
an analysis module, configured to parse a code stream to obtain coding information of a current image block to be reconstructed and a coding mode of the current image block to be reconstructed;
a generating module, configured to generate a reconstruction signal of the current image block to be reconstructed according to the coding information of the current image block to be reconstructed, and to reconstruct the current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block;
a selection module, configured to: if the coding mode of the current image block to be reconstructed is a downsampling coding mode, select, according to first indication information obtained from the code stream, a first filter for upsampling the current reconstructed image block, wherein the first indication information is used for indicating how to select, from at least two candidate filters, the filter used when upsampling the current reconstructed image block;
a processing module, configured to perform upsampling processing on the current reconstructed image block by using the first filter;
the processing module is specifically configured to perform, by using the first filter, a first upsampling on the current reconstructed image block according to pixels of a part of the adjacent reconstructed image blocks that has currently been reconstructed, among the adjacent reconstructed image blocks required when the current reconstructed image block is upsampled;
the processing module is further configured to: if the other part of the required adjacent reconstructed image blocks, which had not been reconstructed at the time of the first upsampling, has since been reconstructed, perform, by using a third filter, a secondary upsampling on a partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks, wherein the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks;
the processing module is further configured to: if all image blocks of the current image in which the current reconstructed image block is located have been reconstructed, perform, by using a third filter, a secondary upsampling on a partial boundary of the current reconstructed image block according to the other part of the required adjacent reconstructed image blocks, wherein the other part of the adjacent reconstructed image blocks comprises the image blocks that had not been reconstructed when the first upsampling was performed on the current reconstructed image block, and the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
29. The apparatus according to claim 28, wherein the first indication information is used to indicate that a filter used in upsampling the current reconstructed image block is selected from the at least two candidate filters according to texture features of the current reconstructed image block;
the selection module is specifically configured to select the first filter according to a preset mapping relationship and the texture feature of the current reconstructed image block, wherein the preset mapping relationship is a mapping relationship between preset texture features, which include the texture feature of the current reconstructed image block, and the at least two candidate filters, which include the first filter.
30. The apparatus according to claim 28, wherein the first indication information is used to indicate that a filter used in upsampling the current reconstructed image block is selected from the at least two candidate filters according to an adjacent reconstructed image block of the current reconstructed image block;
the selection module is specifically configured to:
determining a similarity between the current reconstructed image block and each of at least two adjacent reconstructed image blocks, wherein the at least two adjacent reconstructed image blocks correspond to at least two second filters used when the at least two adjacent reconstructed image blocks were upsampled, and the at least two candidate filters comprise the at least two second filters;
and selecting, from the at least two second filters, the second filter corresponding to the adjacent reconstructed image block having the highest similarity as the first filter.
31. The apparatus of claim 28, wherein the third filter is the first filter.
32. The apparatus of claim 28 or 31, further comprising:
a judging module, configured to judge, according to the other part of the adjacent reconstructed image blocks and the partial boundary of the current reconstructed image block, whether to perform the secondary upsampling on the partial boundary;
the processing module is specifically configured to: if the judging module determines to perform the secondary upsampling on the partial boundary of the current reconstructed image block, perform, by using the third filter, the secondary upsampling on the partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks.
33. The apparatus of claim 28 or 31, wherein the code stream further comprises: second indication information;
the processing module is specifically configured to: if the second indication information indicates that the secondary upsampling needs to be performed on the current reconstructed image block, perform, by using the third filter, the secondary upsampling on the partial boundary of the current reconstructed image block according to the other part of the required adjacent reconstructed image blocks.
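Claim 33 lets the second indication information parsed from the code stream decide whether the decoder re-filters the boundary at all. A small Python sketch of that gating follows; the 50/50 blend used as the "third filter" and the low-resolution neighbour layout are assumptions of this illustration.

import numpy as np

def maybe_secondary_upsampling(up_block, second_indication,
                               right_neighbour=None, bottom_neighbour=None, factor=2):
    # Skip the secondary upsampling entirely when the second indication
    # information signals that it is not needed for this block.
    up = np.asarray(up_block, dtype=np.float64).copy()
    if not second_indication:
        return up
    if right_neighbour is not None:
        up[:, -1] = 0.5 * up[:, -1] + 0.5 * np.repeat(np.asarray(right_neighbour)[:, 0], factor)
    if bottom_neighbour is not None:
        up[-1, :] = 0.5 * up[-1, :] + 0.5 * np.repeat(np.asarray(bottom_neighbour)[0, :], factor)
    return up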
34. An image processing apparatus characterized by comprising:
an analysis module, configured to parse a code stream to obtain coding information of a current image block to be reconstructed and a coding mode of the current image block to be reconstructed;
a generating module, configured to generate a reconstruction signal of the current image block to be reconstructed according to the coding information, and to reconstruct the current image block to be reconstructed according to the reconstruction signal to obtain a current reconstructed image block;
the analysis module is further configured to obtain identification information of a first filter from the code stream if the coding mode of the current image block to be reconstructed is a downsampling coding mode;
a processing module, configured to perform upsampling processing on the current reconstructed image block by using the first filter identified by the identification information;
the processing module is specifically configured to perform, by using the first filter identified by the identification information, a first upsampling on the current reconstructed image block according to pixels of a part of the adjacent reconstructed image blocks that has currently been reconstructed, among the adjacent reconstructed image blocks required when the current reconstructed image block is upsampled;
the processing module is further configured to: if the other part of the required adjacent reconstructed image blocks, which had not been reconstructed at the time of the first upsampling, has since been reconstructed, perform, by using a third filter, a secondary upsampling on a partial boundary of the current reconstructed image block according to the other part of the adjacent reconstructed image blocks, wherein the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks;
the processing module is further configured to: if all image blocks of the current image in which the current reconstructed image block is located have been reconstructed, perform, by using a third filter, a secondary upsampling on a partial boundary of the current reconstructed image block according to the other part of the required adjacent reconstructed image blocks, wherein the other part of the adjacent reconstructed image blocks comprises the image blocks that had not been reconstructed when the first upsampling was performed on the current reconstructed image block, and the partial boundary of the current reconstructed image block is adjacent to the other part of the adjacent reconstructed image blocks.
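Claim 34 describes the decoder variant in which the filter is not re-derived but read from the code stream as identification information. A self-contained Python sketch of that flow is given below; the toy bit layout (a 2-bit filter index), the candidate kernel list, the nearest-neighbour-plus-kernel upsampler and the 50/50 boundary blend are all assumptions of this illustration, not the patent's actual decoder.

import numpy as np

def decode_and_upsample(bitstring, reconstructed_block, candidate_kernels,
                        right_neighbour=None, bottom_neighbour=None, factor=2):
    # Toy decoder flow for claim 34: read the identification information from
    # the (hypothetical) code stream, upsample with the identified kernel, and
    # re-filter the boundary samples once the neighbours that were missing at
    # first-upsampling time (right / bottom here) are available.
    filter_id = int(bitstring[0:2], 2)          # identification information of the first filter
    kernel = candidate_kernels[filter_id]

    # First upsampling with the identified filter: nearest-neighbour expansion
    # followed by 2-D filtering with the kernel (a generic stand-in).
    up = np.kron(np.asarray(reconstructed_block, dtype=np.float64),
                 np.ones((factor, factor)))
    kh, kw = kernel.shape
    padded = np.pad(up, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    filtered = np.empty_like(up)
    for i in range(up.shape[0]):
        for j in range(up.shape[1]):
            filtered[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)

    # Secondary upsampling of the partial boundary, performed only for the
    # neighbours that have since been reconstructed.
    if right_neighbour is not None:
        filtered[:, -1] = 0.5 * filtered[:, -1] + 0.5 * np.repeat(np.asarray(right_neighbour)[:, 0], factor)
    if bottom_neighbour is not None:
        filtered[-1, :] = 0.5 * filtered[-1, :] + 0.5 * np.repeat(np.asarray(bottom_neighbour)[0, :], factor)
    return filtered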
35. An image processing system, comprising: the image processing apparatus according to any one of claims 18 to 27, and the image processing apparatus according to any one of claims 28 to 33.
36. An image processing system, comprising: the image processing apparatus according to any one of claims 18 to 27, and the image processing apparatus according to claim 34.
CN201710571640.1A 2017-07-13 2017-07-13 Image processing method, device and system Active CN109257605B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201710571640.1A CN109257605B (en) 2017-07-13 2017-07-13 Image processing method, device and system
PCT/CN2018/085537 WO2019011046A1 (en) 2017-07-13 2018-05-04 Image processing method, device and system
TW107116752A TWI681672B (en) 2017-07-13 2018-05-17 Method, apparatus and system for processing picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710571640.1A CN109257605B (en) 2017-07-13 2017-07-13 Image processing method, device and system

Publications (2)

Publication Number Publication Date
CN109257605A (en) 2019-01-22
CN109257605B (en) 2021-11-19

Family

ID=65001554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710571640.1A Active CN109257605B (en) 2017-07-13 2017-07-13 Image processing method, device and system

Country Status (3)

Country Link
CN (1) CN109257605B (en)
TW (1) TWI681672B (en)
WO (1) WO2019011046A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116506599A (en) * 2019-02-22 2023-07-28 华为技术有限公司 Method and device for intra-frame prediction by using linear model
CN111369438B (en) * 2020-02-28 2022-07-26 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
WO2022237899A1 (en) * 2021-05-14 2022-11-17 Beijing Bytedance Network Technology Co., Ltd. Method, device, and medium for video processing
WO2023019567A1 (en) * 2021-08-20 2023-02-23 深圳传音控股股份有限公司 Image processing method, mobile terminal and storage medium
CN113822815B (en) * 2021-09-24 2024-02-06 广州光锥元信息科技有限公司 Method and apparatus for high performance picture clutter removal using GPU rendering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1939066A (en) * 2004-04-02 2007-03-28 汤姆森许可贸易公司 Method and apparatus for complexity scalable video decoder
CN102387366A (en) * 2005-03-18 2012-03-21 夏普株式会社 Methods and systems for extended spatial scalability with picture-level adaptation
CN103716622A (en) * 2012-09-29 2014-04-09 华为技术有限公司 Image processing method and device
CN105191313A (en) * 2013-01-04 2015-12-23 三星电子株式会社 Scalable video encoding method and apparatus using image up-sampling in consideration of phase-shift and scalable video decoding method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742321B (en) * 2010-01-12 2011-07-27 浙江大学 Layer decomposition-based Method and device for encoding and decoding video
US10448032B2 (en) * 2012-09-04 2019-10-15 Qualcomm Incorporated Signaling of down-sampling location information in scalable video coding
CN103916676B (en) * 2012-12-31 2017-09-29 华为技术有限公司 A kind of boundary intensity determines method, block-eliminating effect filtering method and device
US10455249B2 (en) * 2015-03-20 2019-10-22 Qualcomm Incorporated Downsampling process for linear model prediction mode

Also Published As

Publication number Publication date
CN109257605A (en) 2019-01-22
TWI681672B (en) 2020-01-01
WO2019011046A1 (en) 2019-01-17
TW201909643A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN109257605B (en) Image processing method, device and system
CN109302608B (en) Image processing method, device and system
KR102185954B1 (en) Apparatus and method for image coding and decoding
CN109314789B (en) Method and apparatus for video signal processing based on intra prediction
RU2461977C2 (en) Compression and decompression of images
CN108028941B (en) Method and apparatus for encoding and decoding digital images by superpixel
JP2020508010A (en) Image processing and video compression method
JP2017536033A (en) Method and apparatus for performing graph-based prediction using an optimization function
CN109257608B (en) Image processing method, device and system
US11202082B2 (en) Image processing apparatus and method
Drynkin et al. Video images compression and restoration methods based on optimal sampling
US20230044603A1 (en) Apparatus and method for applying artificial intelligence-based filtering to image
Moses et al. A survey on adaptive image interpolation based on quantitative measures
CN115731133A (en) Image filtering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant