CN114727108A - Quantization factor adjusting method and device, electronic equipment and storage medium - Google Patents

Quantization factor adjusting method and device, electronic equipment and storage medium

Info

Publication number
CN114727108A
CN114727108A (application CN202110002860.9A)
Authority
CN
China
Prior art keywords
coding block
current coding
quantization factor
consistency
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110002860.9A
Other languages
Chinese (zh)
Inventor
Cheng Chao (成超)
Cai Yuan (蔡媛)
Fan Hongfei (樊鸿飞)
Wang Xian (汪贤)
Lu Fangbo (鲁方波)
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority claimed from CN202110002860.9A
Publication of CN114727108A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 using adaptive coding
    • H04N19/102 characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H04N19/134 characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/169 characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 the unit being an image region, e.g. an object
    • H04N19/176 the region being a block, e.g. a macroblock
    • H04N19/60 using transform coding
    • H04N19/625 using discrete cosine transform [DCT]

Abstract

The method determines the consistency between a current coding block of an image and at least one coding block within a predetermined neighborhood of it, determines a quantization factor offset value for the current coding block according to the resulting consistency status expression information, and then adjusts the quantization factor of the current coding block based on that offset value. The quantization factor of each coding block is thus adjusted and updated according to the consistency between a reference coding block and its neighborhood coding blocks, so that the calculation of one block's quantization factor is associated with other blocks. As a result, the quantization factors of different coding blocks of the image are distributed more reasonably, the distortion of an image processed with the adjusted quantization factors appears smoother and more uniform to the human eye, and the visual faults/visual inconsistency caused by widely differing degrees of distortion within an image or video picture are avoided.

Description

Quantization factor adjusting method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image/video encoding and decoding technologies, and in particular, to a quantization factor adjustment method, apparatus, electronic device, and storage medium.
Background
Video coding is a set of standard processes for storing or transmitting video conveniently: a series of methods and means are used to remove redundancy from the video information so as to encode and compress it. The aim of video coding is to preserve as much of the information contained in the video stream as possible while compressing the stream to as small a size as possible.
Currently, mainstream standard encoders, such as H.264 and H.265 encoders, compress video information in essentially three steps: transform, quantization, and entropy coding. In the quantization step, the spectrum values obtained after conversion to the frequency domain are quantized with a specific quantization step size; that step size is derived from a quantization factor (also called a quantization coefficient), and the quantization step size is positively correlated with the value of the quantization factor. Before encoding, a standard encoder usually divides a video frame image into blocks (coding blocks) of several sizes for transform-quantization-entropy coding; for example, H.264 divides the image into blocks of size 8 × 8, and each block is quantized with its corresponding quantization factor/quantization step size.
When the quantization factors required by different blocks are determined in the existing encoding process, the quantization factors of different blocks can easily end up very uneven in size, with large differences between them. Where the quantization factors differ greatly, some regions of the image suffer high distortion and low definition while others retain low distortion and high definition, producing an obvious visual fault. Such a fault amounts to visual inconsistency: the existing way of determining the quantization factors of different blocks of an image easily causes visual faults/visual inconsistency due to uneven degrees of distortion.
Disclosure of Invention
In view of this, the present application provides a quantization factor adjustment method and apparatus, an electronic device, and a storage medium, which adjust the quantization factor of a coding block by determining a quantization factor offset value for it, so that the quantization factors of different coding blocks of an image are distributed more reasonably and the visual faults/visual inconsistency caused by uneven distortion are avoided as far as possible.
The specific technical scheme is as follows:
a quantization factor adjustment method, comprising:
determining the consistency of a current coding block of an image and at least one coding block in a preset neighborhood range of the current coding block to obtain consistency condition expression information of the current coding block and the at least one coding block;
determining a quantization factor offset value of the current coding block according to the consistency status expression information;
and adjusting the quantization factor of the current coding block according to the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
Optionally, the determining consistency between a current coding block of an image and at least one coding block in a predetermined neighborhood range of the current coding block to obtain consistency status expression information of the current coding block and the at least one coding block includes:
determining texture consistency of the current coding block and the at least one coding block to obtain texture consistency information of the current coding block and the at least one coding block;
determining the area correlation of the current coding block and the at least one coding block to obtain the area correlation information of the current coding block and the at least one coding block;
wherein the consistency status expression information includes: the texture consistency information, and/or the region correlation information.
Optionally, the determining texture consistency between the current coding block and the at least one coding block to obtain texture consistency information between the current coding block and the at least one coding block includes:
respectively determining the texture complexity of the current coding block and the at least one coding block;
determining a mean and a standard deviation of texture complexity of the current coding block and the at least one coding block;
and determining the similarity degree of the texture complexity degree of the current coding block and the texture complexity degree of the at least one coding block according to the texture complexity degree of the current coding block, the mean value and the standard deviation, and using the similarity degree as texture consistency information of the current coding block and the at least one coding block.
Optionally, the determining the texture complexity of the current coding block and the texture complexity of the at least one coding block respectively includes:
for each of the current coding block and the at least one coding block, calculating the gradient x-grad of the luminance component of the coding block along the x axis and the gradient y-grad along the y axis in a two-dimensional coordinate system;
calculating the gradient magnitude grad of the coding block based on the formula grad = sqrt(x-grad^2 + y-grad^2);
calculating the average gradient magnitude of the coding block based on the formula grad_aver = (Σ grad) / size, as the texture complexity of the coding block, where size represents the size of the coding block;
the determining the similarity between the texture complexity of the current coding block and the texture complexity of the at least one coding block according to the texture complexity of the current coding block, the mean and the standard deviation comprises:
calculating the similarity degree sim-tex of the texture complexity degree of the current encoding block and the texture complexity degree of the at least one encoding block based on the following calculation formula:
sim-tex = exp(-(grad_aver - mgrad)^2 / (2 * sigma^2));
in the above calculation formula: grad_aver represents the average gradient magnitude of the current coding block; mgrad and sigma respectively represent the mean and standard deviation of the texture complexity of the current coding block and the at least one coding block.
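As an illustration, the texture-complexity and similarity computations above can be sketched in Python/NumPy. This is a minimal sketch: the choice of gradient operator (NumPy's central-difference `np.gradient`) is an assumption, since the text only asks for x- and y-gradients of the luminance component.

```python
import numpy as np

def texture_complexity(block):
    """Average gradient magnitude (grad_aver) of a coding block's luma samples.

    Using np.gradient (central differences) as the gradient operator is an
    assumption; the patent text does not fix a particular operator.
    """
    y_grad, x_grad = np.gradient(block.astype(np.float64))
    grad = np.sqrt(x_grad ** 2 + y_grad ** 2)   # per-sample gradient magnitude
    return grad.sum() / block.size              # grad_aver = sum(grad) / size

def texture_similarity(grad_aver, mgrad, sigma):
    """sim_tex = exp(-(grad_aver - mgrad)^2 / (2 * sigma^2))."""
    if sigma == 0:
        return 1.0  # all blocks equally complex; treat as fully consistent
    return float(np.exp(-(grad_aver - mgrad) ** 2 / (2.0 * sigma ** 2)))
```

A flat block has complexity 0, and sim_tex reaches 1 when the current block's complexity equals the neighborhood mean, decaying as it deviates from it.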
Optionally, the determining the area correlation between the current coding block and the at least one coding block to obtain the area correlation information between the current coding block and the at least one coding block includes:
calculating the similarity between the current coding block and each coding block in the at least one coding block;
if the at least one coding block comprises one coding block, taking the similarity between the current coding block and the coding block as the area correlation information;
if the at least one coding block comprises a plurality of coding blocks, calculating an average value of the similarity between the current coding block and each coding block, and taking the average value as the area correlation information.
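The averaging rule above can be sketched as follows. The patent leaves the per-pair similarity metric open at this point (the symbol mssim used later hints at a mean SSIM score), so the metric is passed in as a function; `similarity` is a placeholder name, not something named in the text.

```python
def region_correlation(current, neighbours, similarity):
    """Average the similarity between the current block and each neighbour.

    `similarity` is a pluggable per-pair metric; the patent does not fix it
    here, so this function only implements the single-vs-average rule.
    """
    sims = [similarity(current, nb) for nb in neighbours]
    if len(sims) == 1:           # one neighbour: its similarity is used directly
        return sims[0]
    return sum(sims) / len(sims) # several neighbours: the mean is used
```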
Optionally, the determining, according to the consistency status expression information, a quantization factor offset value of the current coding block includes:
calculating a quantization factor offset value of the current coding block according to the quantization factor of the current coding block, an average value of the quantization factors of the current coding block and the at least one coding block, the texture consistency information and the region correlation information;
wherein the size of the quantization factor offset value of the current coding block is in a negative correlation with the consistency level of the current coding block and the at least one coding block characterized by the texture consistency information and the region correlation information.
Optionally, the calculating a quantization factor offset value of the current coding block according to the quantization factor of the current coding block, the average value of the quantization factors of the current coding block and the at least one coding block, the texture consistency information, and the region correlation information includes:
calculating a quantization factor Offset value QP _ Offset for the current coding block according to the following calculation:
QP_Offset=[(mQP-QP)*(sim-tex*mssim)];
in the above calculation formula: QP represents a quantization factor for the current coding block; mQP represents an average of the quantization factors of the current coding block and the at least one coding block; sim-tex represents the texture consistency information; mssim represents the area correlation information; [] Indicating a rounding operation.
Optionally, the adjusting the quantization factor of the current coding block according to the quantization factor offset value to obtain the adjusted quantization factor of the current coding block includes:
and calculating the sum of the quantization factor of the current coding block and the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
A quantization factor adjustment apparatus comprising:
a first determining unit, configured to determine the consistency between a current coding block of an image and at least one coding block within a predetermined neighborhood range of the current coding block, to obtain consistency status expression information of the current coding block and the at least one coding block;
a second determining unit, configured to determine a quantization factor offset value of the current coding block according to the consistency status expression information;
and the adjusting unit is used for adjusting the quantization factor of the current coding block according to the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
An electronic device, comprising:
a memory for storing a set of computer instructions;
a processor for implementing the quantization factor adjustment method described in any of the above by executing the set of computer instructions stored in the memory.
A computer readable storage medium having stored therein a set of computer instructions which, when executed by a processor, implement a method of quantization factor adjustment as recited in any preceding claim.
The quantization factor adjustment method and apparatus, electronic device, and storage medium provided by the embodiments of the present application determine the consistency between a current coding block of an image and at least one coding block within a predetermined neighborhood of it, determine a quantization factor offset value for the current coding block according to the resulting consistency status expression information, and adjust the quantization factor of the current coding block based on that offset value. The quantization factors of coding blocks are thus adjusted and updated in light of the consistency between a reference coding block and its neighborhood coding blocks, establishing an association between the calculation of one block's quantization factor and the other blocks. The quantization factors of different coding blocks of the image are therefore distributed more reasonably, the distortion of an image obtained with the adjusted quantization factors appears smoother and more uniform to the human eye, and the visual faults/visual inconsistency caused by widely differing degrees of distortion within an image or video picture are avoided.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1(a) - (b) are an example of converting a spatial domain two-dimensional image signal into a frequency domain signal through a DCT transform according to an embodiment of the present application;
Fig. 2(a)-(b) are schematic diagrams of a spatial-domain visual image and the corresponding frequency-domain spectrum signal provided by an embodiment of the application;
FIG. 3 is a schematic diagram of an entropy coding process based on zigzag scanning according to an embodiment of the present disclosure;
fig. 4(a) - (b) are exemplary diagrams of a frame image and its quantized coefficient spectrum provided in an embodiment of the present application;
fig. 5(a)-(b) are comparison diagrams, provided in an embodiment of the present application, between the image obtained by encoding, compressing, decoding, and restoring the image of fig. 4(a) based on the quantized coefficient spectrum shown in fig. 4(b), and the visual effect of each coding block of the original image;
fig. 6 is a schematic flowchart of a quantization factor adjustment method according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of another method for adjusting a quantization factor according to an embodiment of the present disclosure;
FIG. 8 is a flow chart illustrating the process of determining texture consistency of different encoded blocks according to an embodiment of the present application;
fig. 9(a), (b), and (c) are schematic comparison diagrams, provided in an embodiment of the present application, of, in sequence, an original image, an image decoded and restored by an existing encoding process, and an image decoded and restored by an encoding process using the quantization factor adjustment logic of the present application;
fig. 10 is a schematic structural diagram of a quantization factor adjustment apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
For ease of reference and clarity, the technical terms, acronyms, or abbreviations used hereinafter are summarized as follows:
frame (frame): a frame is a static picture; a video is composed of consecutive frames with slight differences between them, which produce a dynamic effect when the video is played continuously, on the same principle as an animation;
block (block): in practice, video coding divides a video frame into a number of blocks of variable size. This helps improve coding efficiency through parallelism and gives the coding of different areas a certain independence, thereby preventing errors from propagating continuously.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The aim of video coding is to preserve as much as possible the information contained in the video stream and on this basis to compress the size of the video stream as low as possible. Currently, the commonly used video encoders, such as h.264 and h.265 encoders, basically perform information compression by three steps of transform-quantization-entropy coding. The treatment process of each step is specifically described as follows:
and (3) transforming: the transform refers to a two-dimensional discrete Cosine transform, namely dct (discrete Cosine transform), which can convert a two-dimensional image in a spatial domain into a frequency domain for representation. Referring to fig. 1(a) - (b), an example of converting a spatial domain two-dimensional image signal into a frequency domain signal through DCT transform is provided, and according to fig. 1(a) - (b), it can be seen that the distribution of values (e.g., gray/brightness or RGB color channel values) of an image in a spatial domain (corresponding to an image that can be viewed in a visual aspect) is relatively flat, and the places where the difference of values is relatively large are the edges of the image or the regions with rich texture, but there is no uniform rule for the distribution of different images as a whole. However, the difference is in the frequency domain, and almost all images are in the DCT transform domain (i.e., the frequency domain), the values (each value in fig. 1(b) represents the amplitude/intensity of the frequency in the frequency spectrum) are mostly concentrated on the upper left corner of the frequency spectrum, i.e., the lower right corner, i.e., the values distributed on the high frequency component are smaller, because one image is mainly based on a flat region and secondary on edge texture, as shown in fig. 2(a), the low frequency component in the frequency spectrum represents a region that is relatively flat or uniformly distributed in the image, and the high frequency component represents information such as edge, texture, etc. in the image, and the characteristic of the low frequency concentration correspondingly appears, which can be further referred to fig. 2 (b).
Quantization: quantization is a technique for compressing, by some means, the image frequency-domain information obtained after the DCT. The general method is as follows: a set of quantization step sizes is defined by a set of quantization factors (also called QP values), with the quantization step size positively correlated with the value of the quantization factor; that is, the larger the quantization factor, the larger the quantization step size. Once a specific quantization factor is selected, a specific quantization step size is determined accordingly. On that basis, each spectrum value of the image in the frequency domain (the amplitude/intensity of a frequency in the spectrum) is divided by the quantization step size and then rounded (e.g., to the nearest integer) to obtain the quantization result. As can be expected, when the quantization step size is large, a component of the spectrum with a small value will easily become 0 after the division and rounding operations. Since the information of an image is mainly concentrated in the low-frequency region of the frequency domain (the low-frequency components have large values) and the high-frequency part carries little information (the high-frequency components have small values), the quantization operation leaves more 0 values in the lower-right corner of the spectrum, and the larger the quantization factor, the more 0 values there are.
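The divide-and-round scheme described above can be sketched as:

```python
import numpy as np

def quantize(coeffs, qstep):
    """Divide each spectrum value by the quantization step and round."""
    return np.round(np.asarray(coeffs, dtype=float) / qstep).astype(int)

def dequantize(levels, qstep):
    """Decoder-side inverse: multiply back by the step (rounding loss remains)."""
    return np.asarray(levels) * qstep
```

With a large step size, small (typically high-frequency) coefficients collapse to 0, which is exactly what makes the subsequent entropy coding effective.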
Entropy coding: after the DCT and quantization, the image has been transformed into a spectrum signal rich in low-frequency values and with many 0 values at high frequencies, and entropy coding performs the final compression of this signal. Referring to Fig. 3, the entropy coding process typically uses a zigzag scanning method: the frequency-domain signal values are scanned in a Z-shaped pattern; whenever a non-zero value is scanned it is recorded, and whenever a 0 is scanned the number of consecutive 0 values is recorded. Because the 0 values generally appear in runs, this scheme greatly improves the compression efficiency of consecutive 0 values.
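A minimal sketch of the zigzag scan and run-length recording described above; the handling of trailing zeros (normally signalled with an end-of-block marker) is omitted, and they are simply dropped here.

```python
def zigzag_indices(n):
    """Zigzag scan order for an n x n block: walk the anti-diagonals,
    alternating direction, starting from the top-left (DC) corner."""
    idx = [(i, j) for i in range(n) for j in range(n)]
    return sorted(idx, key=lambda p: (p[0] + p[1],
                                      p[0] if (p[0] + p[1]) % 2 else -p[0]))

def run_length(values):
    """Record each non-zero value with the count of zeros preceding it."""
    runs, zeros = [], 0
    for v in values:
        if v == 0:
            zeros += 1
        else:
            runs.append((zeros, v))
            zeros = 0
    return runs
```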
Transform-quantization-entropy coding is the core process of coding and compression. A quantization level can be specified by a quantization factor, whose value typically ranges from 1 to 51. The larger the quantization factor, the larger the quantization step size, the more easily the high-frequency part of the spectrum is discarded as 0, and correspondingly the smaller the amount of data produced by the final entropy coding, that is, the higher the compression ratio; it is easy to see that in this case the image distortion is also higher. Conversely, the smaller the quantization factor, the lower the compression ratio and the lower the image distortion.
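The positive QP-to-step relationship can be made concrete with the mapping commonly cited for H.264, in which the step size doubles every 6 QP values. The exact formula below is background knowledge and an assumption here; the patent text only states that the two are positively correlated.

```python
def qstep(qp):
    """Approximate quantization step for a given quantization factor.

    The doubling-every-6-QP rule and the -4 offset follow the commonly
    cited H.264 mapping; they are not taken from the patent text.
    """
    return 2.0 ** ((qp - 4) / 6.0)
```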
At the decoding end, a restored image is obtained by performing the inverse of the encoding and compression process: reversing the entropy coding, multiplying by the quantization step size, and applying the inverse DCT. The larger the quantization step size used in encoding, the greater the information loss and the higher the distortion of the image restored after decoding.
Before performing encoding operations, a standard encoder usually divides a video frame image into blocks (coding blocks) of several sizes for transform-quantization-entropy coding; for example, H.264 divides the image into blocks of size 8 × 8, and each block is quantized with its corresponding quantization factor/quantization step size. When determining the quantization factors of different blocks, the existing encoding flow tends to assign a higher quantization factor to a block with more motion and richer texture, to obtain a higher compression ratio, and a lower quantization factor to a block that is more static and flatter, to preserve that block's information to a greater extent.
However, the inventors have found that the calculation and generation of quantization factors in the spatial domain is often independent for each specific block; that is, the calculation of one block's quantization factor is not directly related to the quantization factors used by other blocks. This easily makes the quantization factors very uneven, so that part of an image has high distortion and low definition while another part has low distortion and high definition, causing an obvious visual fault. Such a fault also means visual inconsistency, which degrades the quality of video coding compression and causes visual discomfort for the user.
For example, suppose the original video resolution is 512x512 and the frame image shown in Fig. 4(a) is extracted; after encoding, the quantized coefficient spectrum of the frame shown in Fig. 4(b) is obtained, where each quantization factor corresponds to one coding block (e.g., an 8x8 coding block). In practical applications the coding blocks of an image may be of the same or different sizes, for example 8x8, 16x16, 64x64, and so on. As can be seen from Fig. 4(b), the values of the quantization factors are very uneven, and since the degree of distortion rises with the quantization factor, the uneven distribution of quantization factors makes the distortion of different coding blocks in the image very uneven. After the image of Fig. 4(a) is encoded, compressed, decoded, and restored based on the quantized coefficient spectrum shown in Fig. 4(b), the resulting image shown in Fig. 5(a) contrasts sharply with the visual effect of each coding block of the original image shown in Fig. 5(b): as Fig. 5(a) shows, an obvious visual fault/visual inconsistency appears where the background is flat, or in the transition region between the subject and the background where the difference in distortion is large.
In view of the above technical problems, the present application provides a quantization factor adjustment method and apparatus, an electronic device, and a storage medium, which aim to adjust the quantization factor of a coding block by determining a quantization factor offset value for it, so that the quantization factors of different coding blocks of an image are distributed more reasonably and the visual faults/visual inconsistency caused by uneven distortion are avoided as far as possible.
Referring to Fig. 6, a schematic flow chart of the quantization factor adjustment method provided in an embodiment of the present application is shown. The method may be applied to, but is not limited to, terminal devices with image/video frame encoding and decoding capabilities, such as mobile phones, tablet computers, and personal computers (e.g., notebooks, all-in-one machines, and desktops), or to servers and the physical machines of private/public cloud platforms that have such capabilities.
As shown in fig. 6, in this embodiment, the method for adjusting the quantization factor includes the following processing steps:
step 601, determining the consistency between a current coding block of an image and at least one coding block in a predetermined neighborhood range of the current coding block, and obtaining consistency status expression information of the current coding block and the at least one coding block.
The image in this step may be an image in a local album/image set on the device or in a network or cloud-platform album/image set, or it may be a frame extracted from a video stream. Before encoding, an encoder usually divides an image into blocks (coding blocks) of various sizes and performs transform-quantization-entropy coding on each coding block; the current coding block is the block of the image that is about to undergo transform-quantization-entropy coding.
The predetermined neighborhood range of the current coding block may refer to a full neighborhood range or a non-full neighborhood range of the current coding block, and at least one coding block within the predetermined neighborhood range of the current coding block may be at least one coding block adjacent to (contiguous to) the current coding block in a neighborhood (full neighborhood or non-full neighborhood) of the current coding block, but is not limited thereto.
It should be noted that, in the actual encoding process, the encoder does not partition the image into equal-sized blocks arranged like a checkerboard; block sizes may be the same or different, so the predetermined neighborhood range corresponding to the current coding block may also change with the block-size partitioning policy.
In this step, the consistency between the current coding block and the at least one coding block is specifically measured by the texture consistency between the current coding block and at least one coding block in its predetermined neighborhood range, and/or the region correlation between the current coding block and at least one coding block in that range.
Preferably, the consistency between the current coding block and the at least one coding block can be measured by combining the texture consistency between the current coding block and the at least one coding block in the predetermined neighborhood range of the current coding block and the area correlation between the current coding block and the at least one coding block in the predetermined neighborhood range of the current coding block, and the texture consistency information and the area correlation information between the current coding block and the at least one coding block are used as the consistency status expression information.
Step 602, determining a quantization factor offset value of the current coding block according to the consistency status expression information.
Specifically, a quantization factor offset value for adjusting a quantization factor of the current coding block may be determined according to texture consistency information and region correlation information between the current coding block and the at least one coding block.
It is easy to understand that, based on the texture consistency information and the region correlation information, the degree of consistency between the current coding block and at least one coding block in the predetermined neighborhood range can be characterized, and on this basis, in the embodiment of the present application, the determination of the quantization factor offset value of the current coding block at least follows the following principle:
1) For a current coding block that is more consistent with the at least one coding block in the predetermined neighborhood range, i.e., has higher consistency, a smaller quantization factor offset value (minimum 0) is determined, so that the quantization factor is adjusted only slightly or not at all;
and for a current coding block that is less consistent with the at least one coding block in the predetermined neighborhood range, i.e., has lower consistency, a larger quantization factor offset value is determined, so that the quantization factor is adjusted to a greater degree.
2) While the quantization factor is adjusted based on the determined offset value, the adjusted quantization factor should retain the property that more dynamic, more richly textured blocks keep higher quantization factors to obtain a larger compression ratio, while more static, flatter blocks keep lower quantization factors to preserve more of their information.
Step 603, adjusting the quantization factor of the current coding block according to the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
After the quantization factor offset value of the current coding block is obtained, the quantization factor offset value is summed with the quantization factor of the current coding block, and the adjusted quantization factor of the current coding block can be obtained.
The quantization factor offset value is determined based on the consistency between the current coding block and the at least one coding block in its predetermined neighborhood range. After the quantization factor of the current coding block is adjusted based on this offset value, the difference between its quantization factor and those of the at least one coding block in its predetermined neighborhood is reduced. At the same time, the adjusted quantization factors of the different coding blocks retain the property that more dynamic, more textured blocks have higher quantization factors for a higher compression ratio, while more static, flatter blocks have lower quantization factors to preserve more of their information. The quantization factors of different coding blocks of the image are thus distributed more uniformly and reasonably, while the differences between the quantization factors of blocks with different image characteristics are preserved.
The quantization factor adjustment method of this embodiment determines the consistency between the current coding block of an image and at least one coding block in its predetermined neighborhood range, determines a quantization factor offset value for the current coding block according to the obtained consistency status expression information, and then adjusts the quantization factor of the current coding block based on that offset value. By adjusting and updating the quantization factor of a coding block with reference to its consistency with neighboring coding blocks, the calculation of one block's quantization factor is associated with other blocks, so that the quantization factors of different coding blocks of the image are distributed more reasonably. The distortion of an image obtained with the adjusted quantization factors appears smoother and more uniform to the human eye, avoiding the visual fault/visual inconsistency caused by widely differing distortion levels within an image or video picture.
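As a rough, hypothetical Python sketch of steps 601-603 (the function names and the choice of an 8-connected predetermined neighborhood are illustrative assumptions, not prescribed by this application), the following nudges each block's quantization factor toward its neighborhood mean in proportion to a precomputed consistency score in [0, 1]; the concrete consistency measure (sim-tex and mssim) is developed in the later embodiment:

```python
import numpy as np

def adjust_qp_map(qp_map, consistency):
    """Hypothetical sketch of steps 601-603: nudge each coding block's QP
    toward the mean QP of its predetermined neighborhood, scaled by a
    consistency score in [0, 1] (1 = fully consistent with neighbors)."""
    adjusted = qp_map.astype(float).copy()
    h, w = qp_map.shape
    for i in range(h):
        for j in range(w):
            # predetermined neighborhood: the block plus its adjacent blocks
            # (clipped at the image border)
            nbhd = qp_map[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            m_qp = float(nbhd.mean())  # mean QP over block + neighbors
            # step 602: determine the offset; step 603: sum QP and offset
            offset = round((m_qp - qp_map[i, j]) * consistency[i, j])
            adjusted[i, j] = qp_map[i, j] + offset
    return adjusted
```

With a consistency map of all zeros the QP map is returned unchanged; with a score of 1 a block moves all the way to its (rounded) neighborhood mean.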
The following further describes a specific implementation procedure of the quantization factor adjustment method according to the present application by using another alternative embodiment.
Referring to another flow diagram of the quantization factor adjustment method shown in fig. 7, the quantization factor adjustment method may be specifically implemented as:
step 701, determining texture consistency between the current coding block and the at least one coding block, and obtaining texture consistency information between the current coding block and the at least one coding block.
As shown in fig. 8, this step 701 may specifically determine texture consistency of the current encoding block and the at least one encoding block by performing the following processing:
step 801, determining texture complexity of the current coding block and the at least one coding block respectively.
Optionally, this embodiment uses the average gradient amplitude of a coding block to represent its texture complexity. The inventors found that a coding block with rich texture has large gradients and hence a large average gradient amplitude, so the average gradient amplitude objectively reflects the texture complexity.
For the current encoding block and each encoding block in the at least one encoding block, the calculation process of the average gradient amplitude is specifically as follows:
1) calculating the gradient x-grad of the brightness component of the coding block along the x axis and the gradient y-grad along the y axis in a two-dimensional coordinate system;

2) calculating the gradient amplitude grad of the coding block based on the following formula:

grad = sqrt(x-grad * x-grad + y-grad * y-grad);

3) calculating the average gradient amplitude grad_aver of the coding block based on the following formula:

grad_aver = grad / size;

where the calculated average gradient amplitude grad_aver serves as the texture complexity of the coding block, and size represents the size of the coding block, e.g., 8x8, 16x16, etc.
In implementation, the texture complexity of a coding block need not be represented by its average gradient amplitude as above; other methods may also be used, for example replacing the gradient operator with the Laplace operator or another operator, which this embodiment does not limit.
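Under the assumption that "size" in grad_aver = grad / size denotes the number of brightness samples in the block (the text leaves this ambiguous), steps 1)-3) can be sketched in Python as follows — a minimal illustration using central differences as the gradient operator, not the encoder's actual implementation:

```python
import numpy as np

def texture_complexity(block):
    """Average gradient amplitude of a coding block's brightness samples.

    Assumes 'size' in grad_aver = grad / size means the number of samples
    in the block, and uses central differences (np.gradient) as the
    gradient operator."""
    luma = block.astype(float)
    y_grad, x_grad = np.gradient(luma)  # per-sample gradients on y and x axes
    grad = np.sqrt(x_grad * x_grad + y_grad * y_grad)
    return float(grad.sum() / luma.size)  # grad_aver: mean gradient amplitude
```

A perfectly flat block yields 0; a linear brightness ramp yields its constant slope.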
Step 802, determining a mean and a standard deviation of texture complexity of the current coding block and the at least one coding block.
Since the present embodiment utilizes the average gradient amplitude of the coding block to represent the texture complexity thereof, the average and standard deviation of the texture complexity of the current coding block and the at least one coding block within the predetermined neighborhood thereof can be obtained by calculating the average and standard deviation of the average gradient amplitude of the current coding block and the at least one coding block.
Assume that the current coding block and the at least one coding block number N blocks in total, and that the average gradient amplitudes of the N blocks are denoted grad_aver1, grad_aver2, …, grad_averN.

Then:

the mean of the average gradient amplitudes of the N blocks (i.e., the current coding block and the at least one coding block) is: mgrad = (grad_aver1 + grad_aver2 + … + grad_averN) / N;

the standard deviation of the average gradient amplitudes of the N blocks is: sigma = sqrt(((grad_aver1 - mgrad)^2 + … + (grad_averN - mgrad)^2) / N).

The calculated mgrad and sigma are taken respectively as the mean and standard deviation of the texture complexity of the current coding block and the at least one coding block.
And 803, determining the similarity between the texture complexity of the current coding block and the texture complexity of the at least one coding block according to the texture complexity of the current coding block, the mean and the standard deviation, and using the similarity as texture consistency information between the current coding block and the at least one coding block.
Specifically, the similarity sim-tex between the texture complexity of the current coding block and that of the at least one coding block may be calculated based on the following formula:

sim-tex = exp(-(grad_aver - mgrad)^2 / (2 * sigma^2));

in the above formula:

grad_aver represents the average gradient amplitude of the current coding block;

mgrad and sigma respectively represent the mean and standard deviation of the texture complexity of the current coding block and the at least one coding block.

The calculated sim-tex is taken as the texture consistency information of the current coding block and the at least one coding block.
Similar to the texture complexity of coding blocks, other measures of the similarity between different coding blocks may also be used in an implementation: for example, MSE (mean squared error), SAD (sum of absolute differences), or other methods may replace SSIM for measuring the similarity between different coding blocks.
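The mean/standard-deviation statistics of step 802 and the Gaussian similarity of step 803 can be combined into one small sketch (function names are illustrative; mapping the degenerate case sigma = 0, i.e. all blocks equally textured, to full consistency is an assumption, since the formula is undefined there):

```python
import math

def sim_tex(grad_aver_cur, grad_avers):
    """Texture consistency of the current block versus the N blocks
    (current block + neighbors), per sim-tex = exp(-(g - mgrad)^2 / (2 sigma^2))."""
    n = len(grad_avers)
    mgrad = sum(grad_avers) / n  # step 802: mean of texture complexities
    sigma = math.sqrt(sum((g - mgrad) ** 2 for g in grad_avers) / n)  # std dev
    if sigma == 0.0:
        # assumption: identical complexities count as fully consistent
        return 1.0
    return math.exp(-((grad_aver_cur - mgrad) ** 2) / (2.0 * sigma ** 2))
```

The result lies in (0, 1] and equals 1 when the current block's complexity matches the neighborhood mean exactly.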
Step 702, determining the area correlation between the current coding block and the at least one coding block to obtain the area correlation information between the current coding block and the at least one coding block;
wherein the consistency status expression information includes: the texture consistency information, and/or the region correlation information.
The embodiment specifically calculates the similarity between the current coding block and each coding block in the at least one coding block, and takes the similarity between the current coding block and the one coding block as the area correlation information under the condition that the at least one coding block only comprises one coding block; and under the condition that the at least one coding block comprises a plurality of coding blocks, further calculating the average value of the similarity between the current coding block and each coding block, and taking the calculated average value of the similarity as the area correlation information.
In an implementation, the similarity between different coding blocks may be measured by calculating the SSIM (structural similarity) value between them.
In this implementation, for the N blocks formed by the current coding block and the at least one coding block in its predetermined neighborhood range, the similarity between the current coding block and each of the other N-1 blocks is first calculated, yielding N-1 similarity values; SSIM similarity lies in the range 0-1, with larger values indicating greater similarity. The average value mssim of the N-1 similarity values is then calculated and used as the region correlation information of the current coding block and the at least one coding block. If N-1 equals 1, mssim is simply the single calculated SSIM value.
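A sketch of this region-correlation step follows, using single-window SSIM over whole blocks rather than the usual sliding-window SSIM — a simplification; the constants c1/c2 are the customary (0.01*255)^2 and (0.03*255)^2 for 8-bit data, and all names are illustrative:

```python
import numpy as np

def ssim_block(a, b, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two equal-size blocks (whole-block
    statistics; real implementations typically use a sliding window)."""
    a = a.astype(float)
    b = b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def region_correlation(current, neighbors):
    """mssim: average SSIM of the current block against each neighbor block."""
    sims = [ssim_block(current, nb) for nb in neighbors]
    return sum(sims) / len(sims)
```

With a single neighbor this reduces to one SSIM value, matching the N-1 = 1 case above.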
Step 703, calculating a quantization factor offset value of the current coding block according to the quantization factor of the current coding block, the average value of the quantization factors of the current coding block and the at least one coding block, the texture consistency information and the region correlation information;
wherein the magnitude of the quantization factor offset value of the current coding block is negatively correlated with the consistency of the current coding block and the at least one coding block as characterized by the texture consistency information and the region correlation information.
Alternatively, the quantization factor Offset value QP _ Offset for the current coding block may be calculated according to the following calculation:
QP_Offset=[(mQP-QP)*(sim-tex*mssim)];
in the above calculation formula:
QP represents the quantization factor of the current coding block;
mQP represents an average of the quantization factors of the current coding block and the at least one coding block;
sim-tex represents the texture consistency information; mssim represents the area correlation information;
[·] denotes a rounding operation.
Experimental results show that the quantization factor offset value QP_Offset calculated by the above formula conforms to the two principles that the offset value needs to follow, stated above.
Step 704, calculating a sum of the quantization factor of the current coding block and the quantization factor offset value to obtain an adjusted quantization factor of the current coding block.
After the quantization factor offset value of the current coding block is obtained, summing the quantization factor offset value and the quantization factor of the current coding block to obtain the adjusted quantization factor of the current coding block, wherein the specific calculation formula is as follows:
QP_adj = QP + QP_Offset

wherein:

QP_adj represents the adjusted quantization factor of the coding block;

QP represents the unadjusted quantization factor of the coding block, i.e., the original quantization factor that the encoder generated for the coding block in the existing coding flow.
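The offset formula and the summation of step 704 transcribe directly into a small sketch ([·] is implemented here with Python's round, which rounds half to even — the application does not specify the rounding mode, so this is an assumption):

```python
def qp_offset(qp, m_qp, sim_tex_val, mssim_val):
    """QP_Offset = [(mQP - QP) * (sim-tex * mssim)], [.] = rounding."""
    return round((m_qp - qp) * (sim_tex_val * mssim_val))

def adjusted_qp(qp, m_qp, sim_tex_val, mssim_val):
    """Step 704: QP_adj = QP + QP_Offset."""
    return qp + qp_offset(qp, m_qp, sim_tex_val, mssim_val)
```

When sim-tex * mssim is 0 the QP is left untouched; when it is 1 the QP moves all the way to the (rounded) neighborhood mean mQP.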
It should be noted that the series of calculation formulas referred to above (excluding the standard formulas for the mean and standard deviation) are merely exemplary, non-limiting embodiments provided by the present application. In practical applications, the required calculation formulas may be set flexibly or reasonably modified in combination with the technical idea of the method of the present application, the principles to be followed above, and/or the parameters provided in each step and link of the embodiments; all such variations fall within the scope of the embodiments of the present application.
Based on the above processing, for a stand-alone image or a video frame image, the original quantization factor of each coding block can finally be adjusted and updated (not excluding the case where the original value is kept unchanged because the coding block is highly consistent with its neighborhood coding blocks), so that the quantization factors of different coding blocks of the image are distributed more uniformly and reasonably. The method uses two evaluation dimensions to characterize the visual consistency implied by the consistency between different coding blocks: texture complexity, and the similarity between different blocks of the image, which accords better with human visual perception. Adjusting the quantization factors based on consistency characterized in these two dimensions therefore makes the distortion of the adjusted image smoother and more uniform to the human eye, avoiding the visual inconsistency caused by widely differing distortion levels within an image or video picture. Referring to figs. 9(a)-(c), which show, from left to right, the original image, the image decoded and restored from the existing coding flow, and the image decoded and restored from the coding flow incorporating the quantization factor adjustment logic of the present application: by comparison, fig. 9(c) appears smoother and more uniform to the human eye than fig. 9(b) and exhibits no apparent visual fault/visual inconsistency, whereas fig. 9(b) does.
Corresponding to the above quantization factor adjusting method, the embodiment of the present application further discloses a quantization factor adjusting device, and referring to the schematic structural diagram of the quantization factor adjusting device shown in fig. 10, the device may include:
a first determining unit 1001, configured to determine consistency between a current coding block of an image and at least one coding block in a predetermined neighborhood range of the current coding block, and obtain consistency status expression information of the current coding block and the at least one coding block;
a second determining unit 1002, configured to determine, according to the consistency status expression information, a quantization factor offset value of the current coding block;
an adjusting unit 1003, configured to adjust the quantization factor of the current coding block according to the quantization factor offset value, to obtain an adjusted quantization factor of the current coding block.
In an optional implementation manner of the embodiment of the present application, the first determining unit 1001 is specifically configured to:
determining texture consistency of the current coding block and the at least one coding block to obtain texture consistency information of the current coding block and the at least one coding block;
determining the area correlation of the current coding block and the at least one coding block to obtain the area correlation information of the current coding block and the at least one coding block;
wherein the consistency status expression information includes: the texture consistency information, and/or the region correlation information.
In an optional implementation manner of the embodiment of the present application, in determining texture consistency between the current coding block and the at least one coding block, and obtaining texture consistency information of the current coding block and the at least one coding block, the first determining unit 1001 is specifically configured to:
respectively determining the texture complexity of the current coding block and the at least one coding block;
determining a mean and a standard deviation of texture complexity of the current coding block and the at least one coding block;
and determining the similarity degree of the texture complexity degree of the current coding block and the texture complexity degree of the at least one coding block according to the texture complexity degree of the current coding block, the mean value and the standard deviation, and using the similarity degree as texture consistency information of the current coding block and the at least one coding block.
In an optional implementation manner of the embodiment of the present application, the first determining unit 1001 is specifically configured to, in terms of determining the texture complexity of the current coding block and the texture complexity of the at least one coding block respectively:
for each of the current coding block and the at least one coding block, calculating the gradient x-grad of the brightness component of the coding block along the x axis and the gradient y-grad along the y axis in a two-dimensional coordinate system;

calculating the gradient amplitude grad of the coding block based on the formula grad = sqrt(x-grad * x-grad + y-grad * y-grad);

calculating the average gradient amplitude grad_aver of the coding block based on the formula grad_aver = grad / size, as the texture complexity of the coding block, where size represents the size of the coding block;
the first determining unit 1001, configured to determine, according to the texture complexity of the current coding block, the mean and the standard deviation, a similarity between the texture complexity of the current coding block and the texture complexity of the at least one coding block, as texture consistency information of the current coding block and the at least one coding block, specifically configured to:
calculating the similarity degree sim-tex of the texture complexity degree of the current encoding block and the texture complexity degree of the at least one encoding block based on the following calculation formula:
sim-tex = exp(-(grad_aver - mgrad)^2 / (2 * sigma^2));

in the above formula: grad_aver represents the average gradient amplitude of the current coding block; mgrad and sigma respectively represent the mean and standard deviation of the texture complexity of the current coding block and the at least one coding block.
In an optional implementation manner of the embodiment of the present application, in determining the area correlation between the current coding block and the at least one coding block, and obtaining the area correlation information between the current coding block and the at least one coding block, the first determining unit 1001 is specifically configured to:
calculating the similarity between the current coding block and each coding block in the at least one coding block;
if the at least one coding block comprises one coding block, taking the similarity between the current coding block and the coding block as the area correlation information;
if the at least one coding block comprises a plurality of coding blocks, calculating an average value of the similarity between the current coding block and each coding block, and taking the average value as the area correlation information.
In an optional implementation manner of the embodiment of the present application, the second determining unit 1002 is specifically configured to:
calculating a quantization factor offset value of the current coding block according to the quantization factor of the current coding block, an average value of the quantization factors of the current coding block and the at least one coding block, the texture consistency information and the region correlation information;
wherein the magnitude of the quantization factor offset value of the current coding block is negatively correlated with the consistency of the current coding block and the at least one coding block as characterized by the texture consistency information and the region correlation information.
In an optional implementation manner of the embodiment of the present application, the second determining unit 1002, in calculating the quantization factor offset value of the current coding block according to the quantization factor of the current coding block, the average value of the quantization factors of the current coding block and the at least one coding block, the texture consistency information, and the region correlation information, is specifically configured to:
calculating a quantization factor Offset value QP _ Offset for the current coding block according to the following calculation:
QP_Offset=[(mQP-QP)*(sim-tex*mssim)];
in the above formula: QP represents the quantization factor of the current coding block; mQP represents the average of the quantization factors of the current coding block and the at least one coding block; sim-tex represents the texture consistency information; mssim represents the region correlation information; [·] denotes a rounding operation.
In an optional implementation manner of the embodiment of the present application, the adjusting unit 1003 is specifically configured to:
and calculating the sum of the quantization factor of the current coding block and the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
Since the quantization factor adjusting device disclosed in the embodiments of the present application corresponds to the quantization factor adjustment method disclosed in any of the above embodiments, its description is relatively brief; for related details, please refer to the description of the quantization factor adjustment method in the above embodiments, which is not repeated here.
The embodiment of the present application also discloses an electronic device, which may be, but is not limited to, a terminal device with image/video-frame encoding and decoding capability, such as a mobile phone, tablet computer, or personal computer (e.g., a notebook, all-in-one machine, or desktop), or a physical machine or server (e.g., an image processing server) of a private/public cloud platform that has such capability.
The composition structure of the electronic device is shown in fig. 11, and at least includes:
a memory 1101 for storing a set of computer instructions;
the set of computer instructions may be embodied in the form of a computer program.
The memory 1101 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
A processor 1102 for implementing the quantization factor adjustment method as disclosed in the above method embodiments by executing the set of instructions stored in the memory.
The processor 1102 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device, etc.
Besides, the electronic device may further include a communication interface, a communication bus, and the like. The memory, the processor and the communication interface communicate with each other via a communication bus.
The communication interface is used for communication between the electronic device and other devices. The communication bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and the like.
In this embodiment, when the processor of the electronic device executes the computer instruction set stored in the memory, it determines the consistency between the current coding block of an image and at least one coding block in its predetermined neighborhood range, determines a quantization factor offset value for the current coding block according to the resulting consistency status expression information, and adjusts the quantization factor of the current coding block based on that offset value. The quantization factor of a coding block is thus adjusted and updated with reference to its consistency with neighboring coding blocks, associating the calculation of one block's quantization factor with other blocks, so that the quantization factors of different coding blocks of the image are distributed more reasonably. Accordingly, the distortion of an image processed with the adjusted quantization factors appears smoother and more uniform to the human eye, avoiding the visual fault/visual inconsistency caused by widely differing distortion levels within an image or video picture.
In addition, the present application further discloses a computer-readable storage medium, in which a set of computer instructions is stored, and when the set of computer instructions is executed by a processor, the quantization factor adjustment method disclosed in the above method embodiments is implemented.
When executed, the instructions stored in the computer-readable storage medium cause the device to determine the consistency between the current coding block of an image and at least one coding block in its predetermined neighborhood range, determine a quantization factor offset value for the current coding block according to the determined consistency status expression information, and adjust the quantization factor of the current coding block based on that offset value. The quantization factor of a coding block is adjusted and updated with reference to its consistency with neighboring coding blocks, associating the calculation of one block's quantization factor with other blocks, so that the quantization factors of different coding blocks of the image are distributed more reasonably; the distortion of an image obtained with the adjusted quantization factors appears smoother and more uniform to the human eye, avoiding the visual fault/visual inconsistency caused by widely differing distortion levels within an image or video picture.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
For convenience of description, the above system or apparatus is described as being divided into various modules or units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solutions of the present application may be embodied, essentially or in part, in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present application.
Finally, it should also be noted that, in this document, relational terms such as first, second, third, and fourth may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions. Also, the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also be considered within the protection scope of the present application.

Claims (11)

1. A quantization factor adjustment method, comprising:
determining the consistency between a current coding block of an image and at least one coding block within a predetermined neighborhood range of the current coding block, to obtain consistency status expression information of the current coding block and the at least one coding block;
determining a quantization factor offset value of the current coding block according to the consistency status expression information;
and adjusting the quantization factor of the current coding block according to the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
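As an illustrative sketch (not part of the claims), the three steps of claim 1 can be expressed in Python under the simplifying assumption that the "consistency status expression information" has already been reduced to a single scalar in [0, 1]; the function name and the pull-toward-the-mean formulation are hypothetical placeholders for the concrete calculations given in the dependent claims:

```python
def adjust_quantization_factor(cur_qp, neighbor_qps, consistency):
    """Sketch of claim 1: offset the current block's QP according to its
    consistency with the neighborhood (assumed scalar in [0, 1])."""
    # Step 2: the offset pulls the current QP toward the neighborhood mean,
    # scaled by the consistency measure (high consistency -> stronger pull,
    # zero consistency -> no change).
    mqp = sum([cur_qp] + neighbor_qps) / (1 + len(neighbor_qps))
    offset = round((mqp - cur_qp) * consistency)
    # Step 3: the adjusted quantization factor is the original QP plus the offset.
    return cur_qp + offset
```

With full consistency the QP moves to the neighborhood mean; with zero consistency it is left unchanged, matching the negative correlation stated in claim 6.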
2. The method of claim 1, wherein the determining the consistency between a current coding block of an image and at least one coding block within a predetermined neighborhood range of the current coding block to obtain the consistency status expression information of the current coding block and the at least one coding block comprises:
determining texture consistency of the current coding block and the at least one coding block to obtain texture consistency information of the current coding block and the at least one coding block;
determining the region correlation between the current coding block and the at least one coding block to obtain region correlation information of the current coding block and the at least one coding block;
wherein the consistency status expression information includes the texture consistency information and/or the region correlation information.
3. The method of claim 2, wherein the determining texture consistency of the current coding block and the at least one coding block to obtain texture consistency information of the current coding block and the at least one coding block comprises:
respectively determining the texture complexity of the current coding block and the at least one coding block;
determining the mean value and the standard deviation of the texture complexity of the current coding block and the at least one coding block;
and determining the degree of similarity between the texture complexity of the current coding block and the texture complexity of the at least one coding block according to the texture complexity of the current coding block, the mean, and the standard deviation, as the texture consistency information of the current coding block and the at least one coding block.
4. The method of claim 3, wherein the determining the texture complexity of the current coding block and the at least one coding block respectively comprises:
for each of the current coding block and the at least one coding block, calculating the gradient x-grad of the luminance component of the coding block along the x axis and the gradient y-grad of the luminance component along the y axis in a two-dimensional coordinate system;
calculating the gradient magnitude grad of the coding block based on the formula grad = sqrt(x-grad^2 + y-grad^2);
calculating the average gradient magnitude grad_aver of the coding block based on the formula grad_aver = Σ grad / size, as the texture complexity of the coding block, where size represents the size of the coding block;
the determining the degree of similarity between the texture complexity of the current coding block and the texture complexity of the at least one coding block according to the texture complexity of the current coding block, the mean, and the standard deviation comprises:
calculating the degree of similarity sim-tex between the texture complexity of the current coding block and the texture complexity of the at least one coding block based on the following calculation formula:
sim-tex = exp(-(grad_aver - mgrad)^2 / (2 * sigma^2));
in the above calculation formula: grad_aver represents the average gradient magnitude of the current coding block; mgrad and sigma represent, respectively, the mean and the standard deviation of the texture complexity of the current coding block and the at least one coding block.
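A minimal sketch of the texture-complexity and similarity calculations of claims 3 and 4, assuming the luminance component is available as a 2-D NumPy array; the function names are hypothetical, and `numpy.gradient` is used as one possible discrete gradient operator (the claims do not fix a particular one):

```python
import numpy as np

def texture_complexity(block):
    """Average gradient magnitude grad_aver of a block's luminance plane."""
    # Gradients along the y and x axes of the two-dimensional block.
    y_grad, x_grad = np.gradient(block.astype(np.float64))
    grad = np.sqrt(x_grad ** 2 + y_grad ** 2)  # per-pixel gradient magnitude
    return grad.sum() / block.size             # grad_aver = sum(grad) / size

def texture_similarity(cur_complexity, complexities):
    """sim-tex = exp(-(grad_aver - mgrad)^2 / (2 * sigma^2))."""
    mgrad = np.mean(complexities)
    sigma = np.std(complexities)
    if sigma == 0:
        return 1.0  # degenerate case (identical complexities): assume max similarity
    return float(np.exp(-(cur_complexity - mgrad) ** 2 / (2 * sigma ** 2)))
```

A perfectly flat block has zero texture complexity, and a block whose complexity equals the neighborhood mean attains the maximal similarity of 1.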
5. The method of claim 2, wherein the determining the region correlation between the current coding block and the at least one coding block to obtain the region correlation information of the current coding block and the at least one coding block comprises:
calculating the similarity between the current coding block and each coding block of the at least one coding block;
if the at least one coding block comprises one coding block, taking the similarity between the current coding block and that coding block as the region correlation information;
if the at least one coding block comprises a plurality of coding blocks, calculating the average of the similarities between the current coding block and each coding block, and taking the average as the region correlation information.
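The branching in claim 5 reduces to averaging a pairwise block-similarity metric. A sketch, where `similarity` is a hypothetical caller-supplied callable (the name `mssim` in claim 7 suggests a mean-SSIM-style metric, but the claims leave the metric open):

```python
import numpy as np

def region_correlation(cur_block, neighbor_blocks, similarity):
    """Region correlation information of claim 5: the similarity to a single
    neighbor, or the mean similarity over several neighbors."""
    sims = [similarity(cur_block, nb) for nb in neighbor_blocks]
    # One neighbor: use its similarity directly; several: use the mean.
    return sims[0] if len(sims) == 1 else float(np.mean(sims))
```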
6. The method of claim 2, wherein the determining the quantization factor offset value of the current coding block according to the consistency status expression information comprises:
calculating the quantization factor offset value of the current coding block according to the quantization factor of the current coding block, the average of the quantization factors of the current coding block and the at least one coding block, the texture consistency information, and the region correlation information;
wherein the magnitude of the quantization factor offset value of the current coding block is negatively correlated with the consistency of the current coding block and the at least one coding block as characterized by the texture consistency information and the region correlation information.
7. The method of claim 6, wherein the calculating the quantization factor offset value of the current coding block according to the quantization factor of the current coding block, the average of the quantization factors of the current coding block and the at least one coding block, the texture consistency information, and the region correlation information comprises:
calculating a quantization factor Offset value QP _ Offset for the current coding block according to the following calculation:
QP_Offset=[(mQP-QP)*(sim-tex*mssim)];
in the above calculation formula: QP represents the quantization factor of the current coding block; mQP represents the average of the quantization factors of the current coding block and the at least one coding block; sim-tex represents the texture consistency information; mssim represents the region correlation information; [ ] indicates a rounding operation.
8. The method of claim 1, wherein the adjusting the quantization factor of the current coding block according to the quantization factor offset value to obtain an adjusted quantization factor of the current coding block comprises:
and calculating the sum of the quantization factor of the current coding block and the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
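The offset formula of claim 7 and the adjustment of claim 8 can be sketched directly; the function names are hypothetical, and Python's built-in `round` is assumed for the rounding operation written as [ ] in the claim (the claim does not specify the rounding mode):

```python
def qp_offset(qp, mqp, sim_tex, mssim):
    """QP_Offset = [(mQP - QP) * (sim-tex * mssim)], claim 7.
    [ ] denotes rounding; built-in round() is an assumption."""
    return round((mqp - qp) * (sim_tex * mssim))

def adjusted_qp(qp, offset):
    """Claim 8: adjusted quantization factor = QP + QP_Offset."""
    return qp + offset
```

When both sim-tex and mssim equal 1 (maximal consistency), the adjusted QP lands exactly on the neighborhood mean mQP; as either consistency term shrinks toward 0, the offset shrinks with it, which is the negative correlation stated in claim 6.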
9. A quantization factor adjustment apparatus, comprising:
a first determining unit, configured to determine the consistency between a current coding block of an image and at least one coding block within a predetermined neighborhood range of the current coding block, to obtain consistency status expression information of the current coding block and the at least one coding block;
a second determining unit, configured to determine a quantization factor offset value of the current coding block according to the consistency status expression information;
and the adjusting unit is used for adjusting the quantization factor of the current coding block according to the quantization factor offset value to obtain the adjusted quantization factor of the current coding block.
10. An electronic device, comprising:
a memory for storing a set of computer instructions;
a processor for implementing the quantization factor adjustment method of any one of claims 1-8 by executing the set of computer instructions stored in the memory.
11. A computer-readable storage medium having stored therein a set of computer instructions which, when executed by a processor, implement the quantization factor adjustment method of any one of claims 1-8.
CN202110002860.9A 2021-01-04 2021-01-04 Quantization factor adjusting method and device, electronic equipment and storage medium Pending CN114727108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110002860.9A CN114727108A (en) 2021-01-04 2021-01-04 Quantization factor adjusting method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114727108A true CN114727108A (en) 2022-07-08

Family

ID=82234996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110002860.9A Pending CN114727108A (en) 2021-01-04 2021-01-04 Quantization factor adjusting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114727108A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1668107A (en) * 2004-03-10 2005-09-14 Lg电子有限公司 Device and method for controlling bit rate of an image
CN101287112A (en) * 2008-06-03 2008-10-15 方春 Optimizing method controlled by fast high effective code rate
CN101416512A (en) * 2006-04-07 2009-04-22 微软公司 Quantization adjustment based on texture level
CN108206951A (en) * 2016-12-20 2018-06-26 安讯士有限公司 The method encoded to the image for including privacy screen
CN108900840A (en) * 2018-07-10 2018-11-27 珠海亿智电子科技有限公司 For hard-wired H264 macro-block level bit rate control method
CN109756733A (en) * 2017-11-06 2019-05-14 华为技术有限公司 video data decoding method and device
CN110139109A (en) * 2018-02-08 2019-08-16 北京三星通信技术研究有限公司 The coding method of image and corresponding terminal
CN110495174A (en) * 2018-04-04 2019-11-22 深圳市大疆创新科技有限公司 Coding method, device, image processing system and computer readable storage medium
CN111277829A (en) * 2020-02-25 2020-06-12 西安万像电子科技有限公司 Encoding and decoding method and device
CN111385577A (en) * 2020-04-07 2020-07-07 广州市百果园信息技术有限公司 Video transcoding method, device, computer equipment and computer readable storage medium
CN112073723A (en) * 2020-11-16 2020-12-11 北京世纪好未来教育科技有限公司 Video information processing method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
JP5283628B2 (en) Video decoding method and video encoding method
DE69434862T2 (en) SEGMENTATION-BASED REMOVAL OF ARTIFACTS FROM A JPEG IMAGE
JP2960386B2 (en) Signal adaptive filtering method and signal adaptive filter
JP4870743B2 (en) Selective chrominance decimation for digital images
CN112913237A (en) Artificial intelligence encoding and decoding method and apparatus using deep neural network
CN104378636B (en) A kind of video encoding method and device
US20190014325A1 (en) Video encoding method, video decoding method, video encoder and video decoder
JP2001320586A (en) Post-processing method for expanded image and post- processing method for interlaced moving picture
WO2021098030A1 (en) Method and apparatus for video encoding
WO2006133613A1 (en) Method for reducing image block effects
JP2000152241A (en) Method and device for restoring compressed moving image for removing blocking and ring effects
US6393061B1 (en) Method for reducing blocking artifacts in digital images
JP3105335B2 (en) Compression / expansion method by orthogonal transform coding of image
CN110740316A (en) Data coding method and device
CN112584153B (en) Video compression method and device based on just noticeable distortion model
ZA200400075B (en) Interframe encoding method and apparatus.
JP4243218B2 (en) Quantization control device, method and program thereof, and adaptive quantization encoding device
JP2000059782A (en) Compression method for spatial area digital image
CN114727108A (en) Quantization factor adjusting method and device, electronic equipment and storage medium
CN114708157A (en) Image compression method, electronic device, and computer-readable storage medium
JPH05176173A (en) Picture data compression method
JP2004343334A (en) Image processing apparatus, image forming apparatus, image processing method, image processing program, and recording medium
CN115988201B (en) Method, apparatus, electronic device and storage medium for encoding film grain
JP2000152229A (en) Method and device for restoring compressed image of image processing system
CN116248895B (en) Video cloud transcoding method and system for virtual reality panorama roaming

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination