CN113382246B - Encoding method, encoding device, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN113382246B
CN113382246B (application CN202110427021.1A)
Authority
CN
China
Prior art keywords
pixel
reconstructed image
pixel points
neighborhood
combination
Prior art date
Legal status
Active
Application number
CN202110427021.1A
Other languages
Chinese (zh)
Other versions
CN113382246A (en)
Inventor
粘春湄
方瑞东
江东
林聚财
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110427021.1A
Publication of CN113382246A
Application granted
Publication of CN113382246B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Abstract

The invention provides an encoding method, an encoding device, an electronic device and a computer readable storage medium. The encoding method comprises the following steps: classifying a reconstructed image by using the pixel value of each pixel point and of its neighborhood pixel points in the reconstructed image to obtain a first class combination; dividing the value range of pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset interval in which the pixel value of each pixel point is located to obtain a second class combination; wherein at least some of the preset intervals contain different numbers of pixel values, and/or the neighborhood pixel points comprise first-level neighborhood pixel points and second-level neighborhood pixel points of the pixel point; obtaining an optimal compensation value according to the first class combination and the second class combination; and compensating the reconstructed image by using the optimal compensation value. The compensation accuracy can thereby be improved.

Description

Encoding method, encoding device, electronic device and computer readable storage medium
Technical Field
The present invention relates to the field of video encoding technology, and in particular, to an encoding method, apparatus, electronic device, and computer readable storage medium.
Background
In the prior art, when a reconstructed image is compensated and its pixels need to be classified, only the pixel values of the current pixel and its first-level neighborhood pixels are considered. The pixel correlation exploited during classification is therefore insufficient, the pixel types cannot be divided finely enough, and the compensation precision suffers, so the prior art needs to be improved.
Disclosure of Invention
The invention provides an encoding method, an encoding device, an electronic device and a computer readable storage medium, for improving compensation accuracy and coding efficiency.
In order to solve the technical problems, the first technical scheme provided by the invention is as follows: there is provided an encoding method including: classifying the reconstructed image by using the pixel value of each pixel point and the pixel point of the neighborhood in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset intervals where the pixel values of the pixel points are located to obtain a second class combination; wherein, the number of the pixel values contained in at least part of the preset intervals in the preset intervals is different; and/or the neighborhood pixel points comprise a first-stage neighborhood pixel point and a second-stage neighborhood pixel point of the pixel points; obtaining an optimal compensation value according to the first class combination and the second class combination; compensating the reconstructed image by using the optimal compensation value; and encoding the reconstructed image after compensation, and further obtaining a code stream.
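As a non-normative illustration, the claimed flow (classify each reconstructed pixel, derive a compensation value per class, pick the cheapest, then compensate) can be sketched as the following toy. The classifier and the cost are stand-ins of this sketch (squared error only, with no rate term), not the method's actual classification rules:

```python
def encode_with_compensation(recon, orig, classify, offsets):
    """Toy sketch of the claimed flow: classify each reconstructed pixel,
    pick the best offset per class by distortion, then compensate.
    `classify(recon, y, x)` stands in for the combined class index."""
    h, w = len(recon), len(recon[0])
    # group pixel coordinates by class
    groups = {}
    for y in range(h):
        for x in range(w):
            groups.setdefault(classify(recon, y, x), []).append((y, x))
    # per class, choose the offset with the smallest squared error
    # (a real encoder's rate-distortion cost also includes a rate term)
    best = {}
    for cls, pts in groups.items():
        best[cls] = min(offsets, key=lambda o: sum(
            (recon[y][x] + o - orig[y][x]) ** 2 for y, x in pts))
    # compensate the reconstructed image with the chosen offsets
    out = [row[:] for row in recon]
    for cls, pts in groups.items():
        for y, x in pts:
            out[y][x] += best[cls]
    return out, best
```

With a single class and a uniform error, the selected offset is simply the common difference between the original and reconstructed samples.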
The method for classifying the reconstructed image by using the pixel value of each pixel point and the neighborhood pixel point in the reconstructed image to obtain a first class combination comprises the following steps: and classifying the reconstructed image by using pixel values of at least part of pixel points in the first-stage neighborhood pixel points and the second-stage neighborhood pixel points of the pixel points to obtain a first class combination.
Wherein, the at least part of the second-level neighborhood pixel points are the pixel points, among the second-level neighborhood pixel points, located in the first direction and the second direction of the pixel point.
Wherein, the at least part of the second-level neighborhood pixel points are the pixel points, among the second-level neighborhood pixel points, located directly above, directly below, directly to the left of, and directly to the right of the pixel point.
Dividing the value range of the pixel value to obtain a plurality of preset intervals, wherein the method comprises the following steps: acquiring a value range of a corresponding pixel value based on the coding bit depth; and carrying out non-uniform division on the value range of the pixel value, thereby obtaining a plurality of preset intervals.
Dividing the value range of the pixel value to obtain a plurality of preset intervals, wherein the method comprises the following steps: acquiring a value range of a corresponding pixel value based on a coding bit depth, wherein the coding bit depth is 10 bits; the value range of the pixel values is uniformly divided, and then preset intervals with the number more than 16 are obtained.
Wherein obtaining the optimal compensation value according to the first category combination and the second category combination comprises: combining the first class combination and the second class combination by using a Cartesian algorithm, and further obtaining a third class combination and a compensation value corresponding to each third class in the third class combination; and calculating the optimal compensation value based on the compensation value corresponding to each third category in the third category combination by using a rate distortion cost calculation method.
Wherein the reconstructed image includes any one of a luminance reconstructed image and a chrominance reconstructed image.
Wherein, in response to the reconstructed image being the chrominance reconstructed image, the obtaining the optimal compensation value according to the first class combination and the second class combination comprises the following steps: combining the first class combination and the second class combination by using a Cartesian algorithm, so as to obtain a third class combination and a compensation value corresponding to each third class in the third class combination; obtaining the optimal compensation value corresponding to the chromaticity reconstructed image by table lookup based on the compensation value corresponding to each third class in the third class combination; or obtaining the optimal compensation value corresponding to the chromaticity reconstructed image without table lookup, based on the third class combination and the compensation value corresponding to each third class in the third class combination.
The code stream includes a filtering flag and a syntax element, the filtering flag indicates a coding unit in the reconstructed image that needs to be compensated, and the syntax element includes an optimal compensation value.
In order to solve the technical problems, a second technical scheme provided by the invention is as follows: there is provided an encoding apparatus including: the classification module is used for classifying the reconstructed image by using the pixel value of each pixel point and the neighborhood pixel point in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset intervals where the pixel values of the pixel points are located to obtain a second class combination; wherein, the number of the pixel values contained in at least part of the preset intervals in the preset intervals is different; and/or the neighborhood pixel points comprise a first-stage neighborhood pixel point and a second-stage neighborhood pixel point of the pixel points; the acquisition module is used for obtaining an optimal compensation value according to the first class combination and the second class combination; the compensation module is used for compensating the reconstructed image by utilizing the optimal compensation value; and the coding module is used for coding the reconstructed image after compensation so as to obtain a code stream.
In order to solve the technical problems, a third technical scheme provided by the invention is as follows: there is provided an electronic device comprising a processor and a memory coupled to each other, wherein the memory is for storing program instructions implementing the encoding method of any one of the above; the processor is configured to execute the program instructions stored in the memory.
In order to solve the technical problems, a fourth technical scheme provided by the invention is as follows: there is provided a computer readable storage medium storing a program file executable to implement the encoding method of any one of the above.
The method has the following beneficial effects. Unlike the prior art, the method classifies the reconstructed image using the pixel value of each pixel point together with those of its first-level and second-level neighborhood pixel points to obtain a first class combination, so that classification fully exploits the correlation between pixel points by taking the relationships with more surrounding pixel points into account. In addition, the method divides the value range of pixel values into a plurality of preset intervals and classifies the reconstructed image based on the preset interval in which each pixel value lies, with at least some of the preset intervals containing different numbers of pixel values, to obtain a second class combination; this better matches real image statistics and makes the pixel classification more accurate. The encoding method thus improves the classification precision of pixels, and in turn the compensation precision and coding efficiency for the reconstructed image.
Drawings
For a clearer description of the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the description below are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art, wherein:
FIG. 1 is a flow chart of a first embodiment of the encoding method of the present invention;
FIG. 2 is a schematic diagram of an embodiment of a pixel A, a first-level neighborhood pixel, and a second-level neighborhood pixel;
FIG. 3 is a schematic structural diagram of a first embodiment of a pixel A, a first-stage neighboring pixel, and a portion of a second-stage neighboring pixel;
FIG. 4 is a schematic structural diagram of a second embodiment of the pixel A, the first-stage neighboring pixel, and a portion of the second-stage neighboring pixel;
FIG. 5 is a schematic structural diagram of a third embodiment of a pixel A, a first-level neighborhood pixel, and a portion of a second-level neighborhood pixel;
fig. 6 is a flowchart of the first embodiment of step S11;
fig. 7 is a flowchart of the second embodiment of step S11;
FIG. 8 is a schematic diagram illustrating an embodiment of an encoding apparatus according to the present invention;
FIG. 9 is a schematic diagram of an embodiment of an electronic device of the present invention;
fig. 10 is a schematic structural view of a computer-readable storage medium of the present invention.
Detailed description of the preferred embodiments
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which are obtained by a person of ordinary skill in the art without making any inventive effort, are within the scope of the present application based on the embodiments herein.
The present invention will be described in detail with reference to the accompanying drawings and examples.
Referring to fig. 1, which is a flowchart of a first embodiment of the encoding method of the present invention, the method specifically includes:
step S11: classifying the reconstructed image by using the pixel value of each pixel point and the pixel point of the neighborhood in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset intervals where the pixel values of the pixel points are located to obtain a second class combination; wherein, the number of the pixel values contained in at least part of the preset intervals in the preset intervals is different; and/or the neighborhood pixel points comprise a first-stage neighborhood pixel point and a second-stage neighborhood pixel point of the pixel points.
Specifically, in this embodiment, the reconstructed image is classified by using the pixel value of each pixel point and the neighboring pixel point in the reconstructed image, so as to obtain the first class combination. In an embodiment, when classifying the pixels, the neighborhood pixels only consider the first-level neighborhood pixels of the pixels, that is, the pixels are classified by using the relationship between each pixel in the reconstructed image and the pixel value of the corresponding first-level neighborhood pixels, which can only classify the pixels into 17 classes or 9 classes at most. In the application, the secondary neighborhood pixel points of the pixel points are also added into the classification basis. Specifically, in the present application, each pixel point in the reconstructed image, and the pixel values of the first-stage neighborhood pixel point and the second-stage neighborhood pixel point of the pixel point are used to classify the reconstructed image, so as to obtain a first class combination.
Referring to fig. 2, pixel points B1, B2, B3, B4, B5, B6, B7, and B8 are first-order neighboring pixel points of pixel point a; the pixel points C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12, C13, C14, C15 and C16 are secondary neighborhood pixel points of the pixel point A. In this embodiment, the reconstructed image is classified by using the pixel values of each pixel point a in the reconstructed image, all the pixel points (B1 to B8) of the first-stage neighborhood pixel points of the pixel point a, and all the pixel points (C1 to C16) of the second-stage neighborhood pixel points, so as to obtain the first-class combination. The method can divide the pixel points into 49 classes or 25 classes at most so as to lead the classification of the pixel points in the reconstructed image to be more accurate.
In an embodiment, the first class combination may be obtained by classifying the reconstructed image by using pixel values of at least some of the pixel points in the first-class neighborhood pixel points and the second-class neighborhood pixel points. Please refer to fig. 3, fig. 4 and fig. 5. As shown in fig. 3, in the present embodiment, the pixel points of the reconstructed image are classified by using the pixel values of the pixel point a in the reconstructed image, all the pixel points (B1 to B8) of the first-stage neighborhood pixel point of the pixel point a, and the partial pixel points (C1, C2, C5, C7, C10, C12, C15, and C16) of the second-stage neighborhood pixel point, so as to obtain a first class combination. As shown in fig. 4, in this embodiment, the pixel points of the reconstructed image may be classified by using the pixel values of the pixel point a in the reconstructed image, all the pixel points (B1 to B8) of the first-stage neighborhood pixel point of the pixel point a, and part of the pixel points (C1, C5, C12, C16) of the second-stage neighborhood pixel point, so as to obtain the first-class combination. In this embodiment, part of the pixels of the second-level neighboring pixel are pixels located in the first direction L1 and the second direction L2 of the pixel a in the second-level neighboring pixel, as shown in fig. 4, the first direction L1 and the second direction L2 are diagonal directions of the pixel a, and the first direction L1 is perpendicular to the second direction L2. As shown in fig. 5, the pixel points of the reconstructed image may be classified by using the pixel values of the pixel point a in the reconstructed image, all the pixel points (B1 to B8) of the first-stage neighborhood pixel point of the pixel point a, and the partial pixel points (C3, C8, C9, C14) of the second-stage neighborhood pixel point, so as to obtain the first-class combination. 
In this embodiment, the at least part of the second-level neighborhood pixel points are the pixel point C3 located directly above, the pixel point C14 located directly below, the pixel point C8 located directly to the left of, and the pixel point C9 located directly to the right of the pixel point A among the second-level neighborhood pixel points. In this manner the pixels of the reconstructed image can be classified into at most 25 classes or 13 classes.
Specifically, as shown in fig. 2, pixel point A is classified using the pixel values of pixel point A together with all of its first-level neighborhood pixel points (B1 to B8) and all of its second-level neighborhood pixel points (C1 to C16), so that the pixel points in the reconstructed image can be classified into at most 49 classes or 25 classes. By comparison, in an embodiment where pixel point A is classified using only A and all of its first-level neighborhood pixel points (B1 to B8), the pixel points in the reconstructed image can be classified into at most 17 classes or 9 classes. In the extended scheme, if the pixel values of the two pixel points on the right side of A in the horizontal direction (B5 and C9) are both larger than the pixel value of A, the existing class index is incremented by 1; in the 49-class case, if both are smaller than the pixel value of A, the class index is decremented by 1, whereas in the 25-class case no operation is performed. The other directions are handled in the same way.
As shown in fig. 5, pixel point A may instead be classified using the pixel values of pixel point A together with all of its first-level neighborhood pixel points (B1 to B8) and part of its second-level neighborhood pixel points (C3, C8, C9, C14), so that the pixel points in the reconstructed image can be classified into at most 25 classes or 13 classes. As above, if the pixel values of the two pixel points on the right side of A in the horizontal direction (B5 and C9) are both larger than the pixel value of A, the existing class index is incremented by 1; in the 25-class case, if both are smaller than the pixel value of A, the class index is decremented by 1, whereas in the 13-class case no operation is performed. The other directions are handled in the same way.
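A hedged sketch of the extended-neighborhood classification discussed above, simplified to a signed comparison count over the full 5x5 window (the embodiment's per-direction pairing rule is more involved, but the class-count arithmetic, at most 49 classes, is the same):

```python
def neighborhood_class(img, y, x):
    """Signed comparison count over the first- and second-level
    neighborhood (a 5x5 window minus the center): each brighter
    neighbor adds 1, each darker one subtracts 1, so the index lies
    in [-24, 24] -- at most 49 classes. Clamping negative counts to 0
    would give the 25-class variant."""
    h, w = len(img), len(img[0])
    c, idx = img[y][x], 0
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if (dy, dx) == (0, 0):
                continue  # skip the center pixel itself
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:  # ignore out-of-image neighbors
                idx += (img[ny][nx] > c) - (img[ny][nx] < c)
    return idx
```

A center pixel darker than all 24 neighbors lands in the extreme class, while a flat region maps to class 0.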
By means of the method, the neighborhood pixels of the pixel points are expanded, the pixel points are classified, spatial correlation can be fully utilized, and therefore classification accuracy of the pixels is improved.
In this embodiment, the value range of pixel values is further divided to obtain a plurality of preset intervals, and the reconstructed image is classified based on the preset interval in which each pixel value lies, so as to obtain a second class combination, wherein at least some of the preset intervals contain different numbers of pixel values. In an embodiment where all preset intervals contain the same number of pixel values, the division does not reflect the uneven distribution of pixels in a real image. In this embodiment, the numbers of pixel values contained in at least some of the preset intervals are set to differ, so that the classification result better matches that uneven distribution and the accuracy of the classification result is improved.
For example, when the coded bit depth is 8 bits, the corresponding pixel value range is [0, 255]. If this range is uniformly divided into 16 preset intervals, they are [0,15], [16,31], [32,47] … [240,255], and the number of pixel values in each preset interval is 16. Such a division cannot reflect the uneven distribution of pixels in a real image, so the scheme of the present application adopts a non-uniform division of the value range of pixel values into a plurality of preset intervals, such that at least some of the preset intervals contain different numbers of pixel values.
Referring to fig. 6, step S11 specifically includes:
step S61: and acquiring a value range of the corresponding pixel value based on the coded bit depth.
Specifically, taking a coded bit depth of 8 bits as an example, the value range of the corresponding pixel values is obtained. For a coded bit depth of 8 bits, the value range of pixel values is [0, 255].
Step S62: and carrying out non-uniform division on the value range of the pixel value, thereby obtaining a plurality of preset intervals.
In this embodiment, the value range [0, 255] of the pixel value is unevenly divided into 16 preset intervals, specifically: [0, 15], [16,32], [33,50] … … [233,255], whereby the number of pixel values of the first preset interval [0, 15] is 16, the number of pixel values of the second preset interval [16,32] is 17, and the number of pixel values of the third preset interval [33,50] is 18. It should be noted that, the pixel value in each preset interval is a continuous pixel value.
In an embodiment, the value range [0, 255] of the pixel values may be divided into 16 preset intervals in a non-uniform manner as follows: the first preset interval has 16-N consecutive pixel values, the second has 16+N, and so on, where N is less than or equal to 16. Alternatively, the first preset interval may have 16-N consecutive pixel values, the second 16-N+1, the third 16-N+2, and so on, with the eighth having 15, the ninth 17, the tenth 16+N-6, the eleventh 16+N-5, and so on, where N = 8.
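The N = 8 variant above can be sketched as follows; the width pattern (8 through 15, then 17 through 24, summing to 256) follows the text's example and is one admissible non-uniform split, not the only one:

```python
def nonuniform_bounds():
    """16 non-uniform intervals over [0, 255] with widths 8..15 for the
    first eight and 17..24 for the last eight (8+...+15 + 17+...+24 = 256),
    following the N = 8 example in the text."""
    widths = list(range(8, 16)) + list(range(17, 25))
    bounds, start = [], 0
    for w in widths:
        bounds.append((start, start + w - 1))  # interval is inclusive
        start += w
    return bounds

def second_class(v, bounds):
    """Second-class index: which preset interval pixel value v falls in."""
    for i, (lo, hi) in enumerate(bounds):
        if lo <= v <= hi:
            return i
```

Note the intervals near mid-range stay close to width 16 while the extremes differ most, one plausible way to respect the non-uniform constraint.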
And classifying the pixel points according to the relation between the pixel value of each pixel point in the reconstructed image and each preset interval, and further classifying the reconstructed image to obtain a second class combination.
In this embodiment, the value range of the pixel values is divided in an uneven division manner to obtain a plurality of preset intervals, and the reconstructed images are classified based on the preset intervals in which the pixel values of the pixel points are located to obtain a second class combination, so that the number of the pixel values contained in at least part of the preset intervals in the plurality of preset intervals is different. By the method, the characteristic of uneven distribution of the pixels in the real image can be fully reflected by the classification of the pixels, and the accuracy of the classification of the pixels is improved.
In one embodiment, for coding bit depths of 8 bits and 10 bits, the range of values of the pixel values is generally divided into 16 classes, so that the coding cost can be reduced. However, classifying the pixels into 16 classes cannot sufficiently describe the fineness of the texture, and therefore, the present application proposes to divide the range of values of the pixel values having the coded bit depth of 10 bits into a number of preset sections larger than 16. Referring specifically to fig. 7, step S11 includes:
step S71: and acquiring a value range of a corresponding pixel value based on a coding bit depth, wherein the coding bit depth is 10 bits.
Taking an example of the coded bit depth of 10 bits, the corresponding pixel value range is [0, 1023].
Step S72: and uniformly dividing the value range of the pixel values to obtain preset intervals with the number more than 16.
In one embodiment, the pixel value range [0, 1023] can be uniformly divided to obtain the preset intervals with the number greater than 16. For example, the value ranges [0, 1023] of the pixel values are uniformly divided, so as to obtain 17 preset sections; for another example, the value ranges [0, 1023] of the pixel values are uniformly divided, so as to obtain 32 preset intervals. Wherein the number of pixel values in each preset interval is the same.
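For the power-of-two case, the uniform split reduces to integer arithmetic; the sketch below assumes 32 intervals for 10-bit values (for a non-power-of-two count such as 17, the same multiply-and-shift yields a near-uniform split with bin widths differing by at most one):

```python
def uniform_class(v, bit_depth=10, n_bins=32):
    """Map a pixel value uniformly to one of n_bins preset intervals.
    With 10-bit values and 32 bins each interval holds 1024/32 = 32
    consecutive values, so this is equivalent to v >> 5."""
    return (v * n_bins) >> bit_depth
```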
By the method of the embodiment, the fineness of textures can be fully described, and the classification accuracy can be further improved.
In another embodiment, the value range [0, 1023] of the pixel value may be unevenly divided, so as to obtain the preset intervals with the number greater than 16. For example, the value range [0, 1023] of the pixel value is unevenly divided, so as to obtain 17 preset sections; for another example, the value ranges [0, 1023] of the pixel values are unevenly divided, so as to obtain 32 preset intervals. Wherein the number of pixel values of at least part of the preset intervals is different.
By the method of the embodiment, on one hand, the fineness of textures can be fully described, on the other hand, the characteristic of uneven pixel distribution in a real image can be fully reflected, and the classification accuracy can be further improved.
In an embodiment, the reconstructed image comprises any one of a luminance reconstructed image and a chrominance reconstructed image. That is, the method of step S11 described above may be applied to both luminance and chrominance reconstructed images.
Step S12: and obtaining the optimal compensation value according to the first class combination and the second class combination.
After the first class combination and the second class combination are obtained, they may be further combined to obtain a third class combination and a compensation value corresponding to each third class in the third class combination; for example, the first class combination and the second class combination may be combined by using a Cartesian algorithm. A rate-distortion cost calculation method is then used to compute the optimal compensation value based on the compensation value corresponding to each third class in the third class combination. For example, the compensation values corresponding to all the third classes in the third class combination are traversed to compensate the reconstructed image, the costs of the compensation results are compared, and the compensation value with the smallest cost is selected as the optimal compensation value.
For example, if the obtained first class combination contains 49 first categories and the obtained second class combination contains 16 second categories, combining them yields 49 × 16 = 784 third categories together with their corresponding compensation values.
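A minimal sketch of this step, with a placeholder cost function standing in for the real rate-distortion cost (the function name and candidate offsets are assumptions for illustration):

```python
from itertools import product

def best_offsets(first_classes, second_classes, candidates, cost_fn):
    """Form the Cartesian product of the two class combinations (the third
    categories) and keep, per third category, the candidate compensation
    value with the smallest cost."""
    return {
        (c1, c2): min(candidates, key=lambda o: cost_fn(c1, c2, o))
        for c1, c2 in product(first_classes, second_classes)
    }

# 49 first categories x 16 second categories -> 784 third categories.
table = best_offsets(range(49), range(16), [-2, -1, 0, 1, 2],
                     lambda c1, c2, o: (o - 1) ** 2)  # toy cost, minimised at o = 1
```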
In an embodiment, if the reconstructed image is a chrominance reconstructed image, the first class combination and the second class combination may be combined with a Cartesian product to obtain a third class combination and a compensation value corresponding to each third category; the optimal compensation value for the chrominance reconstructed image is then obtained by table lookup based on the third class combination and its compensation values. Here the first class combination, obtained by classifying the reconstructed image with the pixel value of each pixel point and its neighborhood pixel points, is denoted C1, and the second class combination, obtained by classifying the reconstructed image based on the preset interval in which each pixel value falls, is denoted C2. The specific formulas for obtaining the optimal compensation value are as follows:
Cy = (Y(i,j) * C2) >> bitdepth;
Cc = tableUV[C1 * Cy / 15].
wherein Y(i,j) is the pixel value of the pixel point (i, j); Cy is the result of classifying the pixel point (i, j) in the C2 mode, namely according to the second class combination; bitdepth is the coded bit depth; Cc is the optimal compensation value; and tableUV is the chroma classification table.
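An integer-arithmetic sketch of the table-lookup formula above (the placeholder table contents and the use of integer division for `/` are assumptions):

```python
def chroma_offset_lookup(y_ij, c1, c2, bitdepth=10, tableUV=None):
    """Cy = (Y(i,j) * C2) >> bitdepth;  Cc = tableUV[C1 * Cy / 15]."""
    if tableUV is None:
        tableUV = list(range(256))  # placeholder chroma classification table
    cy = (y_ij * c2) >> bitdepth
    return tableUV[(c1 * cy) // 15]
```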
In another embodiment, the first class combination and the second class combination may also be combined with a Cartesian product to obtain a third class combination and a compensation value corresponding to each third category; the optimal compensation value for the chrominance reconstructed image is then obtained without table lookup, based on the third class combination and its compensation values. The specific formulas are as follows:
Cy = (Y(i,j) * C2) >> bitdepth;
Cc = C1 * (Cy + 4*15) / 15.
Here Cy is again the result of classifying the pixel point (i, j) in the C2 mode, namely according to the second class combination. Since Cy takes at most 16 values (0 to 15) under the C2 classification and C1 takes at most 17 values under the C1 classification, the chrominance components can be classified into at most 85 classes, and the optimal compensation value is further calculated from these 85 classes.
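The table-free variant can be sketched the same way; with Cy at most 15 and C1 at most 17, the largest resulting class index is 17 * (15 + 60) / 15 = 85, matching the at-most-85 classes stated above (integer division is an assumption):

```python
def chroma_offset_no_table(y_ij, c1, c2, bitdepth=10):
    """Cy = (Y(i,j) * C2) >> bitdepth;  Cc = C1 * (Cy + 4*15) / 15."""
    cy = (y_ij * c2) >> bitdepth
    return c1 * (cy + 4 * 15) // 15
```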
Step S13: compensate the reconstructed image by using the optimal compensation value.
After the optimal compensation value is calculated, the reconstructed image is compensated with it.
Step S14: encode the compensated reconstructed image to obtain a code stream.
After the reconstructed image is compensated, the compensated reconstructed image is encoded to obtain a code stream. The code stream includes a filter flag and a syntax element: the filter flag indicates the coding units in the reconstructed image that need to be compensated, and the syntax element includes the optimal compensation value. Specifically, when the reconstructed image is compensated, the on/off state of each coding unit in the reconstructed image is further judged to determine which coding units need compensation, and those coding units are compensated with the optimal compensation value.
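A hedged sketch of the per-coding-unit on/off decision described above (the cost function is a stand-in; the patent does not specify it here):

```python
def cu_filter_flags(cu_values, offset, cost):
    """Enable compensation only for coding units whose compensated cost is
    lower than the uncompensated one; the resulting booleans correspond to
    the filter flags signalled in the code stream."""
    return [cost(cu, offset) < cost(cu, 0) for cu in cu_values]

# Toy residuals per coding unit and an absolute-error cost for illustration.
flags = cu_filter_flags([10, -3, 0], 2, lambda cu, o: abs(cu - o))
```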
With the above method, spatial correlation is fully exploited to classify the pixel points, improving classification accuracy; the non-uniformity of real image pixels is reflected, further improving that accuracy; and the fineness of textures is fully described, improving compensation precision and coding efficiency.
Referring to fig. 8, a schematic structural diagram of an embodiment of an encoding apparatus of the present invention, the apparatus specifically includes: a classification module 801, an acquisition module 802, a compensation module 803 and an encoding module 804.
The classification module 801 is configured to classify the reconstructed image by using the pixel value of each pixel point and its neighborhood pixel points in the reconstructed image to obtain a first class combination, and to divide the value range of the pixel values into a plurality of preset intervals and classify the reconstructed image based on the preset interval in which each pixel value falls to obtain a second class combination. The number of pixel values contained in at least some of the preset intervals differs; and/or the neighborhood pixel points include first-level and second-level neighborhood pixel points of the pixel point.
Specifically, in this embodiment, the reconstructed image is classified by using the pixel value of each pixel point and its neighborhood pixel points to obtain the first class combination. In one embodiment, only the first-level neighborhood pixel points are considered when classifying, that is, the pixel points are classified by the relationship between each pixel point in the reconstructed image and the pixel values of its first-level neighborhood pixel points, which can divide the pixel points into at most 17 or 9 classes. In the present application, the second-level neighborhood pixel points are also added to the classification basis: each pixel point in the reconstructed image is classified by using its own pixel value together with the pixel values of its first-level and second-level neighborhood pixel points, so as to obtain the first class combination.
Referring to fig. 2, pixel points B1 to B8 are the first-level neighborhood pixel points of pixel point A, and pixel points C1 to C16 are its second-level neighborhood pixel points. In this embodiment, the reconstructed image is classified by using the pixel value of each pixel point A, all of its first-level neighborhood pixel points (B1 to B8) and all of its second-level neighborhood pixel points (C1 to C16) to obtain the first class combination. This method can divide the pixel points into at most 49 or 25 classes, making the classification of pixel points in the reconstructed image more accurate.
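One plausible reading of these class counts (a sketch under that assumption, not the patent's exact rule) is that the class index aggregates sign comparisons between the centre pixel and its 24 neighbours: a signed sum ranges over [-24, 24], giving 49 classes, while counting only the larger neighbours ranges over [0, 24], giving 25 classes:

```python
def first_class(img, i, j, signed=True):
    """Classify pixel (i, j) from its 5x5 neighbourhood: the 8 first-level
    and 16 second-level neighbours (out-of-image neighbours are skipped)."""
    h, w = len(img), len(img[0])
    total = 0
    for di in range(-2, 3):
        for dj in range(-2, 3):
            if di == 0 and dj == 0:
                continue  # skip the centre pixel itself
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                d = img[ni][nj] - img[i][j]
                total += ((d > 0) - (d < 0)) if signed else (d > 0)
    return total + 24 if signed else total  # signed index shifted to [0, 48]
```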
In an embodiment, the first class combination may also be obtained by classifying the reconstructed image using the pixel values of only some of the first-level and second-level neighborhood pixel points. Referring to fig. 3, fig. 4 and fig. 5: as shown in fig. 3, the pixel points of the reconstructed image are classified by using the pixel value of pixel point A, all of its first-level neighborhood pixel points (B1 to B8) and part of its second-level neighborhood pixel points (C1, C2, C5, C7, C10, C12, C15 and C16), so as to obtain the first class combination. As shown in fig. 4, the classification may instead use pixel point A, all first-level neighborhood pixel points (B1 to B8) and the second-level neighborhood pixel points C1, C5, C12 and C16; these selected second-level pixel points lie in a first direction L1 and a second direction L2 of pixel point A, which are the diagonal directions of pixel point A, with L1 perpendicular to L2. As shown in fig. 5, the classification may also use pixel point A, all first-level neighborhood pixel points (B1 to B8) and the second-level neighborhood pixel points C3, C8, C9 and C14, so as to obtain the first class combination.
In this embodiment, the selected second-level neighborhood pixel points are the pixel point C3 directly above, C14 directly below, C8 directly to the left and C9 directly to the right of pixel point A. This manner can divide the pixel points of the reconstructed image into at most 25 or 13 classes.
Specifically, as shown in fig. 2, classifying pixel point A with the pixel values of A, all of its first-level neighborhood pixel points (B1 to B8) and all of its second-level neighborhood pixel points (C1 to C16) divides the pixel points of the reconstructed image into at most 49 or 25 classes. In one embodiment, classifying with A and its first-level neighborhood pixel points alone yields at most 17 or 9 classes; on this basis, if the pixel values of the two pixel points on the right side of A (B5 and C9) are both larger than that of A, the class index is increased by 1. In the 49-class scheme, if both are smaller than that of A, the class index is decreased by 1, while in the 25-class scheme no operation is performed; the other directions are handled in the same way.
As shown in fig. 5, classifying pixel point A with the pixel values of A, all of its first-level neighborhood pixel points (B1 to B8) and part of its second-level neighborhood pixel points (C3, C8, C9, C14) divides the pixel points of the reconstructed image into at most 25 or 13 classes. In one embodiment, classifying with A and its first-level neighborhood pixel points alone yields at most 17 or 9 classes; on this basis, if the pixel values of the two pixel points on the right side of A (B5 and C9) are both larger than that of A, the class index is increased by 1. In the 25-class scheme, if both are smaller than that of A, the class index is decreased by 1, while in the 13-class scheme no operation is performed; the other directions are handled in the same way.
By expanding the neighborhood of each pixel point before classification in this way, spatial correlation can be fully utilized, thereby improving the classification accuracy of the pixels.
In this embodiment, the value range of the pixel values is further divided into a plurality of preset intervals, and the reconstructed image is classified based on the preset interval in which each pixel value falls to obtain a second class combination, where the number of pixel values contained in at least some of the preset intervals differs. If all preset intervals contain the same number of pixel values, the division does not reflect the uneven pixel distribution of real images; setting at least some intervals to contain different numbers of pixel values makes the classification result better match that distribution and improves its accuracy.
For example, with a coded bit depth of 8 bits, the pixel value range is [0, 255]; dividing it uniformly into 16 preset intervals gives [0, 15], [16, 31], [32, 47], ..., [239, 255], each containing 16 pixel values. Such a division cannot fully reflect the uneven pixel distribution of real images, so the present scheme divides the value range unevenly into a plurality of preset intervals such that at least some of them contain different numbers of pixel values.
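For contrast, the uniform 16-interval division of the 8-bit range described above reduces to a 4-bit right shift (each interval holds exactly 16 values); this is the scheme the uneven division replaces:

```python
def uniform_interval(pixel_value, bitdepth=8):
    """Uniform 16-interval index: [0,15] -> 0, [16,31] -> 1, ..., [240,255] -> 15."""
    return pixel_value >> (bitdepth - 4)
```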
The classification module 801 is further configured to obtain the corresponding value range of the pixel values based on the coded bit depth, and to divide that range unevenly into a plurality of preset intervals.
The classification module 801 is further configured to obtain the corresponding value range of the pixel values based on a coded bit depth of 10 bits, and to divide that range uniformly into more than 16 preset intervals.
The obtaining module 802 is configured to obtain the optimal compensation value according to the first class combination and the second class combination. In an embodiment, the obtaining module 802 is further configured to combine the first and second class combinations with a Cartesian product to obtain a third class combination and a compensation value corresponding to each third category, and to calculate the optimal compensation value with a rate-distortion cost method based on those compensation values. In an embodiment, the obtaining module 802 traverses the compensation values corresponding to all third categories to compensate the reconstructed image, compares the costs of the compensation results, and selects the compensation value with the smallest cost as the optimal compensation value.
In an embodiment, the obtaining module 802 is further configured to combine the first class combination and the second class combination with a Cartesian product to obtain a third class combination and a compensation value corresponding to each third category, and to obtain the optimal compensation value for the chrominance reconstructed image by table lookup based on the third class combination and its compensation values. As before, the first class combination obtained from the pixel value of each pixel point and its neighborhood pixel points is denoted C1, and the second class combination obtained from the preset interval in which each pixel value falls is denoted C2. The specific formulas for obtaining the optimal compensation value are as follows:
Cy = (Y(i,j) * C2) >> bitdepth;
Cc = tableUV[C1 * Cy / 15].
wherein Y(i,j) is the pixel value of the pixel point (i, j); Cy is the result of classifying the pixel point (i, j) in the C2 mode, namely according to the second class combination; bitdepth is the coded bit depth; Cc is the optimal compensation value; and tableUV is the chroma classification table.
In an embodiment, the obtaining module 802 is further configured to combine the first class combination and the second class combination with a Cartesian product to obtain a third class combination and a compensation value corresponding to each third category, and to obtain the optimal compensation value for the chrominance reconstructed image without table lookup, based on the third class combination and its compensation values. The specific formulas are as follows:
Cy = (Y(i,j) * C2) >> bitdepth;
Cc = C1 * (Cy + 4*15) / 15.
The compensation module 803 is configured to compensate the reconstructed image with the optimal compensation value.
The encoding module 804 is configured to encode the compensated reconstructed image to obtain a code stream, where the code stream includes a filter flag and a syntax element, the filter flag indicating the coding units in the reconstructed image that need to be compensated, and the syntax element including the optimal compensation value.
With the device of the present application, spatial correlation is fully exploited to classify the pixel points, improving classification accuracy; the non-uniformity of real image pixels is reflected, further improving that accuracy; and the fineness of textures is fully described, improving compensation precision and coding efficiency.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device comprises a memory 82 and a processor 81 connected to each other.
The memory 82 is used to store program instructions for implementing the method of any of the above.
The processor 81 is arranged to execute program instructions stored in the memory 82.
The processor 81 may also be referred to as a CPU (Central Processing Unit). The processor 81 may be an integrated circuit chip with signal processing capabilities, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 82 may be a memory bank, a TF card, or the like, and can store all information in the electronic device, including input raw data, computer programs, intermediate operation results and final operation results; it stores and retrieves information according to the location specified by the controller. With the memory, the electronic device has a storage function and can operate normally. Memories of electronic devices can be classified by purpose into main memory (internal memory) and auxiliary memory (external memory). External memory is usually a magnetic medium, an optical disc or the like that can store information for a long time; internal memory refers to the storage components on the motherboard holding the data and programs currently being executed, used only for temporary storage, whose contents are lost when the power is turned off.
In the several embodiments provided in this application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus implementations described above are merely illustrative; the partitioning of modules or units is merely a logical functional partitioning, and other partitioning methods may be used in practice, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. Furthermore, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the method.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a system server, a network device, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application.
Fig. 10 is a schematic structural diagram of a computer-readable storage medium according to the present invention. The storage medium of the present application stores a program file 91 capable of implementing all the methods described above; the program file 91 may be stored in the storage medium in the form of a software product and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code, or a terminal device such as a computer, server, mobile phone or tablet.
The foregoing is only an implementation of the present invention and does not limit the patent scope of the present invention; all equivalent structures or equivalent processes made using the description and drawings of the present invention, or direct or indirect applications in other related technical fields, are likewise included in the patent protection scope of the present invention.

Claims (12)

1. A method of encoding, comprising:
Classifying the reconstructed image by using the pixel value of each pixel point and the pixel point of the neighborhood in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset intervals where the pixel values of the pixel points are located to obtain a second class combination; wherein, the number of the pixel values contained in at least part of the preset intervals in the preset intervals is different; and/or the neighborhood pixel points comprise a first-stage neighborhood pixel point and a second-stage neighborhood pixel point of the pixel points; the secondary neighborhood pixel point is positioned in a region of the primary neighborhood pixel point far away from the pixel point;
obtaining an optimal compensation value according to the first class combination and the second class combination;
compensating the reconstructed image by using the optimal compensation value;
encoding the reconstructed image after compensation, and further obtaining a code stream;
the classifying the reconstructed image by using the pixel value of each pixel point and the neighborhood pixel point in the reconstructed image to obtain a first class combination comprises:
and classifying the reconstructed image by using the pixel values of at least part of the pixel points in the pixel points, the first-stage neighborhood pixel points of the pixel points and the second-stage neighborhood pixel points to obtain the first-class combination.
2. The encoding method according to claim 1, wherein,
at least part of the pixel points in the secondary neighborhood pixel points are pixel points in the secondary neighborhood pixel points, which are located in the first direction and the second direction of the pixel points.
3. The encoding method according to claim 1, wherein,
at least part of the pixel points in the secondary neighborhood pixel points are pixel points which are located directly above, directly below, directly to the left and directly to the right of the pixel point among the secondary neighborhood pixel points.
4. The encoding method according to claim 1, wherein the dividing the value range of the pixel value to obtain a plurality of preset intervals includes:
acquiring a value range of a corresponding pixel value based on the coding bit depth;
and carrying out non-uniform division on the value range of the pixel value, and further obtaining a plurality of preset intervals.
5. The encoding method according to claim 1, wherein the dividing the value range of the pixel value to obtain a plurality of preset intervals includes:
acquiring a value range of a corresponding pixel value based on a coding bit depth, wherein the coding bit depth is 10 bits;
And uniformly dividing the value range of the pixel values to obtain preset intervals with the number more than 16.
6. The encoding method according to claim 1, wherein the obtaining the optimal compensation value according to the first category combination and the second category combination includes:
combining the first class combination and the second class combination by using a Cartesian algorithm, and further obtaining a third class combination and a compensation value corresponding to each third class in the third class combination;
and calculating the optimal compensation value based on the compensation value corresponding to each third category in the third category combination by using a rate distortion cost calculation method.
7. The encoding method according to claim 1, wherein the reconstructed image includes any one of a luminance reconstructed image and a chrominance reconstructed image.
8. The encoding method according to claim 7, wherein,
responsive to the reconstructed image being the chroma reconstructed image;
the obtaining the best compensation value according to the first category combination and the second category combination comprises the following steps:
combining the first class combination and the second class combination by using a Cartesian algorithm, and further obtaining a third class combination and a compensation value corresponding to each third class in the third class combination;
Obtaining the optimal compensation value corresponding to the chromaticity reconstruction image in a table look-up mode based on the compensation value corresponding to each third category in the third category combination; or alternatively
And obtaining the optimal compensation value corresponding to the chromaticity reconstruction image in a table look-up-free mode based on the third category combination and the compensation value corresponding to each third category in the third category combination.
9. The encoding method according to any one of claims 1 to 8, wherein,
the bitstream includes a filter flag indicating a coding unit in the reconstructed image that needs to be compensated, and a syntax element including the optimal compensation value.
10. An encoding device, comprising:
the classification module is used for classifying the reconstructed image by using the pixel value of each pixel point and the neighborhood pixel point in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset intervals where the pixel values of the pixel points are located to obtain a second class combination; wherein, the number of the pixel values contained in at least part of the preset intervals in the preset intervals is different; and/or the neighborhood pixel points comprise a first-stage neighborhood pixel point and a second-stage neighborhood pixel point of the pixel points; the secondary neighborhood pixel point is positioned in a region of the primary neighborhood pixel point far away from the pixel point; the method is also used for classifying the reconstructed image by utilizing the pixel values of at least part of the pixel points in the pixel points, the first-stage neighborhood pixel points and the second-stage neighborhood pixel points of the pixel points to obtain the first class combination;
The acquisition module is used for obtaining an optimal compensation value according to the first class combination and the second class combination;
a compensation module for compensating the reconstructed image using the optimal compensation value;
and the coding module is used for coding the reconstructed image after compensation so as to obtain a code stream.
11. An electronic device comprising a processor and a memory coupled to each other, wherein,
the memory is used for storing program instructions for implementing the encoding method according to any one of claims 1-9;
the processor is configured to execute the program instructions stored in the memory.
12. A computer readable storage medium, characterized in that a program file is stored, which program file is executable to implement the encoding method according to any of claims 1-9.
CN202110427021.1A 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium Active CN113382246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110427021.1A CN113382246B (en) 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110427021.1A CN113382246B (en) 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113382246A CN113382246A (en) 2021-09-10
CN113382246B true CN113382246B (en) 2024-03-01

Family

ID=77569927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110427021.1A Active CN113382246B (en) 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113382246B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134600B (en) * 2022-09-01 2022-12-20 浙江大华技术股份有限公司 Encoding method, encoder, and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102473280A (en) * 2009-07-31 2012-05-23 Fujifilm Corporation Image processing device and method, data processing device and method, program, and recording medium
CN104012095A (en) * 2011-12-22 2014-08-27 Samsung Electronics Co., Ltd. Video encoding method using offset adjustment according to classification of pixels by maximum coding units and apparatus thereof, and video decoding method and apparatus thereof
KR20150035943A (en) * 2015-03-12 2015-04-07 Samsung Electronics Co., Ltd. Method and apparatus for video encoding for compensating pixel values of a pixel group, and method and apparatus for video decoding for the same
CN109416749A (en) * 2017-11-30 2019-03-01 Shenzhen A&E Intelligent Technology Institute Co., Ltd. Image gradient classification method and apparatus, and readable storage medium
CN110383837A (en) * 2018-04-02 2019-10-25 Peking University Video processing method and device
CN111866507A (en) * 2020-06-07 2020-10-30 MIGU Culture Technology Co., Ltd. Image filtering method, device, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120269458A1 (en) * 2007-12-11 2012-10-25 Graziosi Danillo B Method for Generating High Resolution Depth Images from Low Resolution Depth Images Using Edge Layers
JP2018522445A (en) * 2015-05-12 2018-08-09 Samsung Electronics Co., Ltd. Video coding method and apparatus for sample value compensation, and video decoding method and apparatus for sample value compensation
WO2018070723A1 (en) * 2016-10-11 2018-04-19 LG Electronics Inc. Image encoding and decoding method and apparatus therefor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Local Linear Spatial–Spectral Probabilistic Distribution for Hyperspectral Image Classification; Hong Huang; IEEE Transactions on Geoscience and Remote Sensing; 2019-10-23; full text *
Research on HDR Video Compression Based on Inter-frame Correlation; Yu Kai; China Master's Theses Full-text Database; 2019-07-15; full text *

Similar Documents

Publication Publication Date Title
KR101223983B1 (en) Bitrate reduction techniques for image transcoding
US20060204086A1 (en) Compression of palettized images
US8780996B2 (en) System and method for encoding and decoding video data
US11647195B2 (en) Image encoding device, image decoding device, and the programs thereof
CN112913237A (en) Artificial intelligence encoding and decoding method and apparatus using deep neural network
EP3657800A1 (en) Method and device for coding image, and method and device for decoding image
CN110087083B (en) Method for selecting intra chroma prediction mode, image processing apparatus, and storage apparatus
US11451799B2 (en) Transmission bit-rate control in a video encoder
US20180242023A1 (en) Image Coding and Decoding Method and Device
CN111131828B (en) Image compression method and device, electronic equipment and storage medium
US8369639B2 (en) Image processing apparatus, computer readable medium storing program, method and computer data signal for partitioning and converting an image
CN113382246B (en) Encoding method, encoding device, electronic device and computer readable storage medium
CN113099230B (en) Encoding method, encoding device, electronic equipment and computer readable storage medium
US7065254B2 (en) Multilayered image file
CN115499632A (en) Image signal conversion processing method and device and terminal equipment
JP2000059782A (en) Compression method for spatial area digital image
CN113613024B (en) Video preprocessing method and device
CN113382257B (en) Encoding method, encoding device, electronic device and computer-readable storage medium
CN111787334B (en) Filtering method, filter and device for intra-frame prediction
CN113473150B (en) Image processing method and device and computer readable storage device
RU2767775C1 (en) Point cloud processing
JPH0774966A (en) Picture processor
CN117097900A (en) Image frame encoding method, image frame encoding device, electronic equipment and computer readable storage medium
PL242013B1 (en) System and method for processing images, in particular in devices for acquisition, processing and storage or transmission of images using digital image compression using the discrete wavelet transformation (DWT) and entropy coding
EP3713239A1 (en) Processing a point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant