CN113382246A - Encoding method, encoding device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113382246A
Authority
CN
China
Prior art keywords
pixel
reconstructed image
category
combination
pixel points
Prior art date
Legal status
Granted
Application number
CN202110427021.1A
Other languages
Chinese (zh)
Other versions
CN113382246B (en
Inventor
粘春湄
方瑞东
江东
林聚财
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110427021.1A priority Critical patent/CN113382246B/en
Publication of CN113382246A publication Critical patent/CN113382246A/en
Application granted granted Critical
Publication of CN113382246B publication Critical patent/CN113382246B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides an encoding method, an encoding device, an electronic device, and a computer-readable storage medium. The encoding method comprises: classifying a reconstructed image using the pixel value of each pixel point and the pixel values of its neighborhood pixel points to obtain a first category combination; dividing the value range of pixel values into a plurality of preset intervals, and classifying the reconstructed image according to the preset interval in which each pixel value falls to obtain a second category combination, wherein at least some of the preset intervals contain different numbers of pixel values, and/or the neighborhood pixel points include first-level and second-level neighborhood pixel points of the pixel point; obtaining an optimal compensation value according to the first category combination and the second category combination; and compensating the reconstructed image using the optimal compensation value. Compensation accuracy is thereby improved.

Description

Encoding method, encoding device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of video coding technologies, and in particular, to a coding method, a coding apparatus, an electronic device, and a computer-readable storage medium.
Background
In the prior art, when a reconstructed image is compensated and its pixel points are classified, only the pixel value of the current pixel point and the pixel values of its first-level neighborhood pixel points are considered. The correlation between pixel points is therefore under-used during classification, and the pixel points cannot be classified finely enough, which limits compensation precision. The prior art therefore needs improvement.
Disclosure of Invention
The invention provides an encoding method, an encoding device, an electronic device, and a computer-readable storage medium, for improving compensation accuracy and coding efficiency.
In order to solve the above technical problem, a first technical solution provided by the present invention is an encoding method, comprising: classifying a reconstructed image using the pixel value of each pixel point and the pixel values of its neighborhood pixel points to obtain a first category combination; dividing the value range of pixel values into a plurality of preset intervals, and classifying the reconstructed image according to the preset interval in which each pixel value falls to obtain a second category combination, wherein at least some of the preset intervals contain different numbers of pixel values, and/or the neighborhood pixel points include first-level and second-level neighborhood pixel points of the pixel point; obtaining an optimal compensation value according to the first category combination and the second category combination; compensating the reconstructed image using the optimal compensation value; and encoding the compensated reconstructed image to obtain a code stream.
Classifying the reconstructed image using the pixel value of each pixel point and the pixel values of its neighborhood pixel points to obtain the first category combination comprises: classifying the reconstructed image using the pixel values of each pixel point, its first-level neighborhood pixel points, and at least some of its second-level neighborhood pixel points.
The at least some second-level neighborhood pixel points are the second-level neighborhood pixel points located in a first direction and a second direction of the pixel point.
Alternatively, the at least some second-level neighborhood pixel points are the second-level neighborhood pixel points located directly above, directly below, directly to the left of, and directly to the right of the pixel point.
Wherein dividing the value range of pixel values to obtain a plurality of preset intervals comprises: obtaining the value range of the corresponding pixel values based on the coding bit depth; and unevenly dividing the value range of the pixel values to obtain the plurality of preset intervals.
Alternatively, dividing the value range of pixel values to obtain a plurality of preset intervals comprises: obtaining the value range of the corresponding pixel values based on a coding bit depth of 10 bits; and uniformly dividing the value range of the pixel values to obtain more than 16 preset intervals.
Wherein obtaining the optimal compensation value according to the first category combination and the second category combination comprises: combining the first category combination and the second category combination using a Cartesian product to obtain a third category combination and a compensation value corresponding to each third category in the third category combination; and calculating the optimal compensation value from these compensation values using a rate-distortion cost calculation.
The reconstructed image includes any one of a luminance reconstructed image and a chrominance reconstructed image.
Wherein, when the reconstructed image is a chrominance reconstructed image, obtaining the optimal compensation value according to the first category combination and the second category combination comprises: combining the first category combination and the second category combination using a Cartesian product to obtain a third category combination and a compensation value corresponding to each third category; and obtaining the optimal compensation value corresponding to the chrominance reconstructed image either by table lookup, or without table lookup, based on the third category combination and the compensation value corresponding to each third category.
The code stream comprises a filtering mark and a syntax element, the filtering mark represents a coding unit which needs to be compensated in the reconstructed image, and the syntax element comprises an optimal compensation value.
In order to solve the above technical problems, a second technical solution provided by the present invention is: there is provided an encoding device including: the classification module is used for classifying the reconstructed image by using the pixel values of each pixel point and the neighborhood pixel points in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset interval in which the pixel values of the pixel points are located to obtain a second category combination; the number of the pixel values contained in at least part of the preset intervals in the plurality of preset intervals is different; and/or the neighborhood pixel points comprise primary neighborhood pixel points and secondary neighborhood pixel points of the pixel points; the acquisition module is used for obtaining an optimal compensation value according to the first category combination and the second category combination; the compensation module is used for compensating the reconstructed image by using the optimal compensation value; and the coding module is used for coding the compensated reconstructed image so as to obtain a code stream.
In order to solve the above technical problems, a third technical solution provided by the present invention is: there is provided an electronic device comprising a processor and a memory coupled to each other, wherein the memory is configured to store program instructions implementing the encoding method of any one of the above; the processor is operable to execute program instructions stored by the memory.
In order to solve the above technical problems, a fourth technical solution provided by the present invention is: there is provided a computer readable storage medium storing a program file executable to implement the encoding method of any one of the above.
Different from the prior art, the present method classifies the reconstructed image using the pixel value of each pixel point together with its first-level and second-level neighborhood pixel points to obtain the first category combination; because the relationship with more surrounding pixel points is taken into account, the correlation between a pixel point and its surroundings is fully exploited for classification. In addition, the value range of pixel values is divided into a plurality of preset intervals, and the reconstructed image is classified according to the preset interval containing each pixel value, with at least some preset intervals containing different numbers of pixel values, to obtain the second category combination; this better matches real image statistics and makes pixel classification more accurate. The encoding method thus improves pixel classification precision, and with it the compensation accuracy and coding efficiency of the reconstructed image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained without inventive efforts, wherein:
FIG. 1 is a flowchart illustrating a first embodiment of an encoding method according to the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of a pixel A and a first-level neighborhood pixel and a second-level neighborhood pixel;
FIG. 3 is a schematic structural diagram of a first embodiment of a pixel A, a first-level neighborhood pixel and a part of a second-level neighborhood pixel;
FIG. 4 is a schematic structural diagram of a second embodiment of a pixel A, a first-level neighborhood pixel and a part of a second-level neighborhood pixel;
FIG. 5 is a schematic structural diagram of a third embodiment of a pixel A, a first-level neighborhood pixel and a part of a second-level neighborhood pixel;
FIG. 6 is a flowchart illustrating the first embodiment of step S11;
FIG. 7 is a flowchart illustrating a second embodiment of step S11;
FIG. 8 is a block diagram of an encoding apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of an electronic device of the present invention;
fig. 10 is a schematic structural diagram of the computer-readable storage medium of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without any creative effort belong to the protection scope of the present application.
The present invention will be described in detail below with reference to the accompanying drawings and examples.
Please refer to fig. 1, which is a flowchart of a first embodiment of the encoding method of the present invention. The method specifically includes:
step S11: classifying the reconstructed image by using the pixel value of each pixel point and the pixel value of the neighborhood in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset interval in which the pixel values of the pixel points are located to obtain a second category combination; the number of the pixel values contained in at least part of the preset intervals in the plurality of preset intervals is different; and/or the neighborhood pixel points comprise primary neighborhood pixel points and secondary neighborhood pixel points of the pixel points.
Specifically, in this embodiment, the reconstructed image is classified using the pixel value of each pixel point and the pixel values of its neighborhood pixel points to obtain the first category combination. In one embodiment, only the first-level neighborhood pixel points are considered when classifying; that is, pixel points are classified by the relationship between each pixel value and its corresponding first-level neighborhood pixel values, which yields at most 17 or 9 classes. The present application adds the second-level neighborhood pixel points as a further classification basis: the reconstructed image is classified using the pixel value of each pixel point together with its first-level and second-level neighborhood pixel points to obtain the first category combination.
Referring to fig. 2, the pixels B1, B2, B3, B4, B5, B6, B7 and B8 are first-level neighborhood pixels of the pixel a; the pixel points C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12, C13, C14, C15 and C16 are secondary neighborhood pixel points of the pixel point A. In this embodiment, the reconstructed image is classified by using the pixel values of each pixel point a in the reconstructed image and all the pixel points (B1-B8) of the primary neighborhood pixel point of the pixel point a and all the pixel points (C1-C16) of the secondary neighborhood pixel point, so as to obtain the first class combination. By the method, the pixel points can be classified into 49 classes or 25 classes at most, so that the classification of the pixel points in the reconstructed image is more accurate.
In an embodiment, the reconstructed image may instead be classified using the pixel values of each pixel point, its first-level neighborhood pixel points, and only some of its second-level neighborhood pixel points, to obtain the first category combination. Please refer to fig. 3, fig. 4 and fig. 5. As shown in fig. 3, the pixel values of pixel point A, all first-level neighborhood pixel points (B1-B8), and the second-level neighborhood pixel points C1, C2, C5, C7, C10, C12, C15 and C16 are used to classify the pixel points of the reconstructed image. As shown in fig. 4, the pixel values of pixel point A, all first-level neighborhood pixel points (B1-B8), and the second-level neighborhood pixel points C1, C5, C12 and C16 may be used; here the selected second-level neighborhood pixel points lie in a first direction L1 and a second direction L2 of pixel point A, where L1 and L2 are the diagonal directions of pixel point A and L1 is perpendicular to L2. As shown in fig. 5, the pixel values of pixel point A, all first-level neighborhood pixel points (B1-B8), and the second-level neighborhood pixel points C3, C8, C9 and C14 may be used to classify the pixel points of the reconstructed image and obtain the first category combination.
In this embodiment, the at least some second-level neighborhood pixel points are the pixel point C3 directly above pixel point A, C14 directly below, C8 directly to the left, and C9 directly to the right. With this arrangement the pixel points of the reconstructed image can be classified into at most 25 or 13 classes.
Specifically, as shown in fig. 2, classifying pixel point A using its own pixel value together with all first-level neighborhood pixel points (B1-B8) and all second-level neighborhood pixel points (C1-C16) allows the pixel points in the reconstructed image to be classified into at most 49 or 25 classes. This extends the first-level-only scheme, which yields at most 17 or 9 classes: keeping that scheme otherwise unchanged, if the pixel values of the two pixel points to the horizontal right of A (B5 and C9) are both greater than A's pixel value, the current class index is increased by 1. In the 49-class case, if both are smaller than A's pixel value, the class index is decreased by 1; in the 25-class case no operation is performed. The other directions are handled in the same way.
As shown in fig. 5, classifying pixel point A using its own pixel value together with all first-level neighborhood pixel points (B1-B8) and the partial second-level neighborhood pixel points C3, C8, C9 and C14 allows the pixel points in the reconstructed image to be classified into at most 25 or 13 classes. Again starting from the first-level-only scheme of at most 17 or 9 classes: if the pixel values of the two pixel points to the horizontal right of A (B5 and C9) are both greater than A's pixel value, the current class index is increased by 1. In the 25-class case, if both are smaller than A's pixel value, the class index is decreased by 1; in the 13-class case no operation is performed. The other directions are handled in the same way.
By expanding the neighborhood used to classify each pixel point in this way, spatial correlation is exploited more fully, improving classification accuracy.
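As an illustrative sketch of the directional comparison described above (the function shape and the exact class-index mapping are assumptions, not the patent's normative procedure), the fig. 5 pattern can be written as:

```python
def classify_by_neighborhood(img, i, j):
    """Classify pixel (i, j) by comparing it with its four first-level
    neighbors (B) and the four second-level neighbors (C) directly above,
    below, left and right of it -- the fig. 5 pattern.  Illustrative only."""
    center = img[i][j]
    # (first-level row/col offsets, second-level row/col offsets)
    # for the directions up, down, left, right:
    directions = [(-1, 0, -2, 0), (1, 0, 2, 0), (0, -1, 0, -2), (0, 1, 0, 2)]
    cls = 0
    for di1, dj1, di2, dj2 in directions:
        b = img[i + di1][j + dj1]  # first-level neighbor
        c = img[i + di2][j + dj2]  # second-level neighbor
        if b > center and c > center:
            cls += 1   # both neighbors larger: raise the class index
        elif b < center and c < center:
            cls -= 1   # both neighbors smaller: lower the class index
    return cls  # ranges over [-4, 4]; shift to [0, 8] if a nonnegative index is wanted
```

A usage example: on a flat region the class stays 0, while a local minimum surrounded by larger values in all four directions lands in the highest class.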
In this embodiment, the value range of pixel values is further divided into a plurality of preset intervals, and the reconstructed image is classified according to the preset interval in which each pixel value falls, to obtain a second category combination; at least some of the preset intervals contain different numbers of pixel values. If instead every preset interval contained the same number of pixel values, the division would not match the uneven distribution of pixels in real images. Setting at least some interval sizes to differ makes the classification result better conform to real image statistics and improves its accuracy.
For example, with a coding bit depth of 8 bits the pixel values range over [0, 255]; uniformly dividing this range into 16 preset intervals gives [0,15], [16,31], [32,47], …, [240,255], with 16 pixel values in each interval. This cannot reflect the uneven distribution of pixels in a real image, so the present application divides the value range unevenly into a plurality of preset intervals, such that at least some of them contain different numbers of pixel values.
Specifically, referring to fig. 6, step S11 specifically includes:
step S61: and acquiring the value range of the corresponding pixel value based on the coded bit depth.
Specifically, taking a coding bit depth of 8 bits as an example, the value range of the corresponding pixel values is [0, 255].
Step S62: and carrying out uneven division on the value range of the pixel value so as to obtain a plurality of preset intervals.
In this embodiment, the value range [0, 255] is unevenly divided into 16 preset intervals, for example [0,15], [16,32], [33,50], …, [233,255]: the first preset interval [0,15] contains 16 pixel values, the second [16,32] contains 17, and the third [33,50] contains 18. Note that the pixel values within each preset interval are consecutive.
In an embodiment, instead of uniform division, the value range [0, 255] may be divided into 16 preset intervals as follows: the first preset interval contains 16-N consecutive pixel values, the second contains 16+N, and so on, where N is less than or equal to 16. Alternatively, the first preset interval may contain 16-N consecutive pixel values, the second 16-N+1, the third 16-N+2, and so on, with the eighth containing 15, the ninth 17, the tenth 16+N-6, the eleventh 16+N-5, and so on, where N is 8.
The pixel points are then classified according to the preset interval in which each pixel value of the reconstructed image falls, yielding the second category combination.
In this embodiment, a value range of the pixel values is divided by using an uneven division manner to obtain a plurality of preset intervals, the reconstructed images are classified based on the preset intervals in which the pixel values of the pixel points are located to obtain a second category combination, and then the number of the pixel values contained in at least some of the preset intervals in the plurality of preset intervals is different. By the method, the characteristic of uneven distribution of pixels in the real image can be fully reflected by the classification of the pixels, and the accuracy of pixel classification is improved.
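The uneven division can be sketched as follows. The specific interval boundaries below are hypothetical, chosen only so that the first few intervals match the example above and the interval widths differ, as the scheme requires:

```python
import bisect

# Hypothetical non-uniform partition of the 8-bit range [0, 255] into 16
# preset intervals (upper bound of each interval); the first three match
# the [0,15], [16,32], [33,50] example, the rest are illustrative.
UPPER_BOUNDS = [15, 32, 50, 69, 89, 110, 126, 141, 156, 171,
                186, 201, 216, 229, 242, 255]

def second_category(pixel):
    """Index (0..15) of the preset interval containing `pixel`.
    bisect_left finds the first upper bound >= pixel."""
    return bisect.bisect_left(UPPER_BOUNDS, pixel)
```

For instance, pixel value 16 falls in the second interval (index 1), while 33 falls in the third (index 2); because the boundaries are non-uniform, the interval widths (16, 17, 18, …) differ.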
In one embodiment, for coding bit depths of 8 bits and 10 bits, the value range of pixel values is conventionally divided into 16 classes, which reduces coding cost. However, 16 classes cannot adequately describe fine texture. The present application therefore proposes dividing the value range of pixel values at a coding bit depth of 10 bits into more than 16 preset intervals. Referring to fig. 7, step S11 includes:
step S71: and acquiring a value range of a corresponding pixel value based on a coded bit depth, wherein the coded bit depth is 10 bits.
Taking a coding bit depth of 10 bits as an example, the value range of the corresponding pixel values is [0, 1023].
Step S72: and uniformly dividing the value range of the pixel values to obtain preset intervals with the number larger than 16.
In an embodiment, the value range [0, 1023] of the pixel values may be uniformly divided to obtain more than 16 preset intervals: for example, into 17 preset intervals, or into 32 preset intervals. Each preset interval then contains the same number of pixel values.
By the method, the fineness of the texture can be fully described, and the classification accuracy can be further improved.
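A minimal sketch of the uniform division of the 10-bit range into more than 16 preset intervals; the index formula is an assumption consistent with equal-width intervals, not taken from the patent text:

```python
def uniform_interval_index(pixel, bitdepth=10, num_intervals=32):
    """Uniformly divide the range [0, 2**bitdepth - 1] into
    `num_intervals` equal preset intervals (e.g. 32 intervals of
    32 pixel values each for 10-bit data) and return the index of
    the interval that `pixel` falls in."""
    return (pixel * num_intervals) >> bitdepth
```

With 32 intervals, pixel values 0..31 map to interval 0, 32..63 to interval 1, and so on up to interval 31 for 992..1023.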
In another embodiment, the value range [0, 1023] of the pixel values may be divided unevenly, so as to obtain preset intervals with a number greater than 16. For example, the value range [0, 1023] of the pixel values is divided unevenly, and 17 preset intervals are obtained; for another example, the value range [0, 1023] of the pixel values is divided unevenly, and then 32 preset intervals are obtained. Wherein, the number of the pixel values of at least part of the preset intervals is different.
By the method, on one hand, the fineness of the texture can be fully described, on the other hand, the characteristic of uneven pixel distribution in a real image can be fully embodied, and the classification accuracy can be further improved.
In an embodiment, the reconstructed image includes any one of a luminance reconstructed image and a chrominance reconstructed image. That is, the method of step S11 may be applied to both the luminance reconstructed image and the chrominance reconstructed image.
Step S12: obtaining the optimal compensation value according to the first category combination and the second category combination.
After the first category combination and the second category combination are obtained, they may be combined to obtain a third category combination and a compensation value corresponding to each third category in it. The optimal compensation value is then calculated from these compensation values using a rate-distortion cost calculation: for example, the compensation values of all third categories are traversed to compensate the reconstructed image, the costs of the compensation results are compared, and the compensation value with the minimum cost is selected as the optimal compensation value.
For example, if the obtained first category combination contains 49 first categories and the obtained second category combination contains 16 second categories, combining the 49 first categories with the 16 second categories yields 49 × 16 = 784 third categories and their corresponding compensation values, and these 784 third categories constitute the third category combination.
In an embodiment, if the reconstructed image is a chrominance reconstructed image, the first category combination and the second category combination may be combined by a Cartesian product, so as to obtain a third category combination and the compensation value corresponding to each third category in the third category combination; the optimal compensation value corresponding to the chrominance reconstructed image is then obtained by table lookup based on the third category combination and the compensation value corresponding to each third category in it. The manner of classifying the reconstructed image by the pixel value of each pixel point and its neighborhood pixel points to obtain the first category combination is denoted C1, and the manner of classifying the reconstructed image by the preset interval in which the pixel value of the pixel point lies to obtain the second category combination is denoted C2. The specific formulas for obtaining the optimal compensation value are as follows:
Cy=(Y(i,j)*C2)>>bitdepth;
CC=tableUV[C1*Cy/15]。
where Y(i,j) is the pixel value of the pixel point (i, j); Cy is the result of classifying the pixel point (i, j) in the C2 manner, that is, its second category; bitdepth is the coding bit depth; CC is the optimal compensation value; and tableUV is the color classification table.
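Read as integer operations, the two formulas above amount to the following minimal Python sketch. The contents of tableUV are dummy values — the actual color classification table is not disclosed here — and the division is assumed to be integer division:

```python
def chroma_category(y_val, c1, c2, bitdepth, table_uv):
    """Map a pixel's luma value and its two category labels to a chroma
    compensation category via the color classification table."""
    cy = (y_val * c2) >> bitdepth      # Cy = (Y(i,j) * C2) >> bitdepth
    return table_uv[c1 * cy // 15]     # CC = tableUV[C1 * Cy / 15]
```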
In another embodiment, the first category combination and the second category combination may likewise be combined by a Cartesian product, so as to obtain a third category combination and the compensation value corresponding to each third category in the third category combination; the optimal compensation value corresponding to the chrominance reconstructed image is then obtained without any table lookup, based on the third category combination and the compensation value corresponding to each third category in it. The specific formulas are as follows:
Cy=(Y(i,j)*C2)>>bitdepth;
CC=C1*(Cy+4*15)/15。
For example, Cy is the result of classifying the pixel point (i, j) in the C2 manner, that is, its second category. Assuming that the C2 manner yields at most 16 categories, Cy ranges over 0–15; and assuming that the C1 manner yields at most 17 categories, the chrominance components can be classified into at most 85 categories by the method of this embodiment. The optimal compensation value is then calculated from these 85 categories.
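Under the same assumptions (integer arithmetic, Cy derived from the luma value), the table-free variant reads:

```python
def chroma_category_no_table(y_val, c1, c2, bitdepth):
    """Table-free variant: CC = C1 * (Cy + 4*15) / 15, with
    Cy = (Y(i,j) * C2) >> bitdepth and integer division assumed."""
    cy = (y_val * c2) >> bitdepth
    return c1 * (cy + 4 * 15) // 15
```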
Step S13: and compensating the reconstructed image by using the optimal compensation value.
After the optimal compensation value is obtained through calculation, the reconstructed image is compensated with it.
Step S14: and coding the compensated reconstructed image to further obtain a code stream.
After the reconstructed image is compensated, the compensated reconstructed image is further encoded to obtain a code stream, where the code stream includes a filtering flag and a syntax element: the filtering flag indicates the coding units in the reconstructed image that need to be compensated, and the syntax element includes the optimal compensation value. Specifically, when the reconstructed image is compensated, the on/off state of each coding unit in the reconstructed image is further judged to determine the coding units that need to be compensated, and those coding units are compensated with the optimal compensation value.
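The per-coding-unit switch described above can be sketched as follows (Python; the flat pixel and category lists are an illustrative data layout, not the codec's actual buffers):

```python
def compensate_cu(pixels, pixel_categories, offsets, flag_on):
    """Return the compensated pixels of one coding unit.

    offsets maps each third-category index to its optimal compensation value;
    if the unit's filtering flag is off, the pixels are returned unchanged.
    """
    if not flag_on:
        return list(pixels)
    return [p + offsets[c] for p, c in zip(pixels, pixel_categories)]
```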
By the method, the spatial correlation can be fully utilized to classify the pixel points so as to improve the classification accuracy, the nonuniformity of the real image pixels can be reflected, the classification accuracy is further improved, the fineness of the texture can be fully described, and the compensation accuracy and the coding efficiency are improved.
Please refer to fig. 8, which is a schematic structural diagram of an embodiment of the encoding apparatus of the present invention, specifically including: a classification module 801, an acquisition module 802, a compensation module 803 and an encoding module 804.
The classification module 801 is configured to classify the reconstructed image by using the pixel values of each pixel point and the neighborhood pixel points in the reconstructed image, so as to obtain a first class combination. And classifying the reconstructed image based on the preset interval in which the pixel value of the pixel point is located to obtain a second category combination. The number of pixel values contained in at least part of the preset intervals in the plurality of preset intervals is different; and/or the neighborhood pixel points comprise primary neighborhood pixel points and secondary neighborhood pixel points of the pixel points.
Specifically, in this embodiment, the reconstructed image is classified by using the pixel value of each pixel point and the pixel values of its neighborhood pixel points, so as to obtain the first category combination. In one approach, only the first-level neighborhood pixel points are considered when classifying a pixel point; that is, each pixel point is classified by the relationship between its pixel value and those of its corresponding first-level neighborhood pixel points in the reconstructed image, which can yield at most 17 or 9 classes. In the present application, the second-level neighborhood pixel points of the pixel point are added to the classification basis. Specifically, the reconstructed image is classified by using the pixel value of each pixel point together with its first-level and second-level neighborhood pixel points, so as to obtain the first category combination.
Referring to fig. 2, the pixels B1, B2, B3, B4, B5, B6, B7 and B8 are first-level neighborhood pixels of the pixel a; the pixel points C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12, C13, C14, C15 and C16 are secondary neighborhood pixel points of the pixel point A. In this embodiment, the reconstructed image is classified by using the pixel values of each pixel point a in the reconstructed image and all the pixel points (B1-B8) of the primary neighborhood pixel point of the pixel point a and all the pixel points (C1-C16) of the secondary neighborhood pixel point, so as to obtain the first class combination. By the method, the pixel points can be classified into 49 classes or 25 classes at most, so that the classification of the pixel points in the reconstructed image is more accurate.
In an embodiment, the reconstructed image may be further classified by using pixel values of the pixel points and the first-level neighborhood pixel points of the pixel points and pixel values of at least part of pixel points in the second-level neighborhood pixel points, so as to obtain a first-class combination. Please refer to fig. 3, fig. 4 and fig. 5. As shown in fig. 3, in this embodiment, pixel values of a pixel point a in the reconstructed image and all pixel points (B1-B8) of the primary neighborhood pixel point of the pixel point a and partial pixel points (C1, C2, C5, C7, C10, C12, C15, and C16) of the secondary neighborhood pixel points are used to classify the pixel points of the reconstructed image, so as to obtain a first class combination. As shown in fig. 4, in this embodiment, the pixel values of the pixel point a in the reconstructed image, all the pixel points (B1-B8) of the first-level neighborhood pixel point of the pixel point a, and the pixel values of the partial pixel points (C1, C5, C12, C16) of the second-level neighborhood pixel points may be used to classify the pixel points of the reconstructed image, so as to obtain the first-class combination. In this embodiment, some of the pixels in the second-level neighborhood pixels are located in the second-level neighborhood pixels in the first direction L1 and the second direction L2 of the pixel a, as shown in fig. 4, the first direction L1 and the second direction L2 are diagonal directions of the pixel a, and the first direction L1 is perpendicular to the second direction L2. As shown in fig. 5, the pixel values of the pixel point a in the reconstructed image, all the pixel points (B1-B8) of the first-level neighborhood pixel point of the pixel point a, and the pixel values of the partial pixel points (C3, C8, C9, C14) of the second-level neighborhood pixel point may be used to classify the pixel points of the reconstructed image, so as to obtain the first-class combination. 
In this embodiment, at least some of the second-level neighborhood pixel points are the pixel point C3 located directly above the pixel point A, the pixel point C14 directly below it, the pixel point C8 directly to its left, and the pixel point C9 directly to its right. The method of this embodiment can classify the pixel points of the reconstructed image into at most 25 or 13 classes.
Specifically, as shown in fig. 2, the pixel point A is classified by using the pixel values of the pixel point A and all of its first-level neighborhood pixel points (B1 to B8) and all of its second-level neighborhood pixel points (C1 to C16), so that the pixel points in the reconstructed image can be classified into at most 49 or 25 classes. Specifically, in an embodiment, when only the pixel values of the pixel point A and all of its first-level neighborhood pixel points (B1 to B8) are used, the pixel points in the reconstructed image can be classified into at most 17 or 9 classes. On that basis, if the pixel values of the two pixel points on the horizontal right side of the pixel point A (B5 and C9) are both greater than the pixel value of the pixel point A, the current class number is increased by 1; in the 49-class case, if both pixel values are smaller than the pixel value of the pixel point A, the current class number is decreased by 1, while in the 25-class case no operation is performed. The other directions are handled in the same way.
As shown in fig. 5, the pixel point A is classified by using the pixel values of the pixel point A and all of its first-level neighborhood pixel points (B1 to B8) and some of its second-level neighborhood pixel points (C3, C8, C9 and C14), so that the pixel points in the reconstructed image can be classified into at most 25 or 13 classes. Specifically, in an embodiment, when only the pixel values of the pixel point A and all of its first-level neighborhood pixel points (B1 to B8) are used, the pixel points in the reconstructed image can be classified into at most 17 or 9 classes. On that basis, if the pixel values of the two pixel points on the horizontal right side of the pixel point A (B5 and C9) are both greater than the pixel value of the pixel point A, the current class number is increased by 1; in the 25-class case, if both pixel values are smaller than the pixel value of the pixel point A, the current class number is decreased by 1, while in the 13-class case no operation is performed. The other directions are handled in the same way.
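One way to read the counting rule above is as a signed vote over the chosen neighbourhood: each neighbour larger than the centre pixel raises the class index by 1 and, in the extended-range variant, each smaller one lowers it by 1. The sketch below (Python) is an interpretation under that assumption, not the patent's exact procedure:

```python
# First-level 8-neighbourhood offsets around the centre pixel A.
FIRST_LEVEL = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0)]
# The four axis-aligned second-level pixels (C3, C8, C9, C14 in fig. 5).
SECOND_LEVEL_AXIS = [(-2, 0), (0, -2), (0, 2), (2, 0)]

def classify_by_neighbours(img, i, j, offsets):
    """Signed class index of pixel (i, j): +1 per larger neighbour,
    -1 per smaller neighbour (extended-range variant)."""
    centre, cls = img[i][j], 0
    for di, dj in offsets:
        ni, nj = i + di, j + dj
        if 0 <= ni < len(img) and 0 <= nj < len(img[0]):
            if img[ni][nj] > centre:
                cls += 1
            elif img[ni][nj] < centre:
                cls -= 1
    return cls
```

With FIRST_LEVEL plus SECOND_LEVEL_AXIS the index ranges over [-12, 12], i.e. 25 classes, matching the 25-class case above.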
By means of the method, neighborhood pixels of the pixel points are expanded, the pixel points are classified, spatial correlation can be fully utilized, and therefore the classification accuracy of the pixels is improved.
In this embodiment, the value range of the pixel values is further divided to obtain a plurality of preset intervals, and the reconstructed image is classified based on the preset interval in which the pixel value of each pixel point lies, so as to obtain a second category combination. The number of pixel values contained in at least some of the preset intervals differs. If all the preset intervals contained the same number of pixel values, the division would not match the uneven distribution of pixels in a real image. In this embodiment, therefore, at least some of the preset intervals are set to contain different numbers of pixel values, so that the classification result better reflects the uneven pixel distribution of real images, improving its accuracy.
For example, when the coding bit depth is 8 bits, the corresponding pixel value range is [0, 255], and uniformly dividing it into 16 preset intervals gives [0, 15], [16, 31], [32, 47] … [240, 255], each containing 16 pixel values. Such a division cannot fully reflect the uneven distribution of pixels in a real image, so the scheme of the present application divides the value range into a plurality of preset intervals unevenly, with at least some preset intervals containing different numbers of pixel values.
The classification module 801 is further configured to obtain a value range of a corresponding pixel value based on the coded bit depth, and perform non-uniform division on the value range of the pixel value to obtain a plurality of preset intervals.
The classification module 801 is further configured to obtain a value range of a corresponding pixel value based on a coded bit depth, where the coded bit depth is 10 bits, and uniformly divide the value range of the pixel value, so as to obtain preset intervals with a number greater than 16.
The obtaining module 802 is configured to obtain an optimal compensation value according to the first category combination and the second category combination. In an embodiment, the obtaining module 802 is further configured to combine the first category combination and the second category combination by a Cartesian product, so as to obtain a third category combination and the compensation value corresponding to each third category in the third category combination, and to calculate the optimal compensation value with a rate-distortion cost method based on those compensation values. In an embodiment, the obtaining module 802 is configured to traverse the compensation values corresponding to all the third categories in the third category combination to compensate the reconstructed image, and to compare the costs of the compensation results, so as to select the compensation value with the minimum cost as the optimal compensation value.
In an embodiment, the obtaining module 802 is further configured to combine the first category combination and the second category combination by a Cartesian product, so as to obtain a third category combination and the compensation value corresponding to each third category in the third category combination, and to obtain the optimal compensation value corresponding to the chrominance reconstructed image by table lookup based on the third category combination and those compensation values. The manner of classifying the reconstructed image by the pixel value of each pixel point and its neighborhood pixel points to obtain the first category combination is denoted C1, and the manner of classifying the reconstructed image by the preset interval in which the pixel value of the pixel point lies to obtain the second category combination is denoted C2. The specific formulas for obtaining the optimal compensation value are as follows:
Cy=(Y(i,j)*C2)>>bitdepth;
CC=tableUV[C1*Cy/15]。
where Y(i,j) is the pixel value of the pixel point (i, j); Cy is the result of classifying the pixel point (i, j) in the C2 manner, that is, its second category; bitdepth is the coding bit depth; CC is the optimal compensation value; and tableUV is the color classification table.
In an embodiment, the obtaining module 802 is further configured to combine the first category combination and the second category combination by a Cartesian product, so as to obtain a third category combination and the compensation value corresponding to each third category in the third category combination, and to obtain the optimal compensation value corresponding to the chrominance reconstructed image without any table lookup, based on the third category combination and those compensation values. The specific formulas are as follows:
Cy=(Y(i,j)*C2)>>bitdepth;
CC=C1*(Cy+4*15)/15。
the compensation module 803 is used for compensating the reconstructed image with the optimal compensation value.
The encoding module 804 is configured to encode the compensated reconstructed image, and then obtain a code stream, where the code stream includes a filtering flag and a syntax element, the filtering flag indicates an encoding unit that needs to be compensated in the reconstructed image, and the syntax element includes an optimal compensation value.
Through the device, the spatial correlation can be fully utilized to classify the pixel points so as to improve the classification accuracy, the nonuniformity of real image pixels can be reflected, the classification accuracy is further improved, the fineness of textures can be fully described, and the compensation accuracy and the coding efficiency are improved.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device comprises a memory 82 and a processor 81 connected to each other.
The memory 82 is used to store program instructions implementing the method of any one of the above.
Processor 81 is operative to execute program instructions stored in memory 82.
The processor 81 may also be referred to as a CPU (Central Processing Unit). The processor 81 may be an integrated circuit chip having signal processing capabilities. Processor 81 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 82 may be a memory bank, a TF card, or the like, and can store all information in the electronic device, including input raw data, computer programs, intermediate results, and final results. It stores and retrieves information at the locations specified by the controller; with the memory, the electronic device can retain data and operate normally. By usage, the storage of an electronic device can be divided into main storage (internal memory) and auxiliary storage (external memory). External memory is usually a magnetic medium, an optical disc, or the like, and can store information for a long time. Internal memory refers to the storage component on the main board that holds the data and programs currently being executed; it only stores them temporarily, and its contents are lost when the power is turned off.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in practice; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment of the method.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the present application.
Please refer to fig. 10, which is a schematic structural diagram of a computer-readable storage medium according to the present invention. The storage medium of the present application stores a program file 91 capable of implementing all the methods, wherein the program file 91 may be stored in the storage medium in the form of a software product, and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute all or part of the steps of each implementation method of the present application. The aforementioned storage device includes: various media capable of storing program codes, such as a usb disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or terminal devices, such as a computer, a server, a mobile phone, and a tablet.
The above description is only an implementation method of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow transformations performed by the present specification and drawings, or directly or indirectly applied to other related technologies, are included in the scope of the present invention.

Claims (13)

1. A method of encoding, comprising:
classifying the reconstructed image by using the pixel value of each pixel point and the pixel value of the neighborhood in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset interval in which the pixel values of the pixel points are located to obtain a second category combination; the number of the pixel values contained in at least part of the preset intervals in the plurality of preset intervals is different; and/or the neighborhood pixel points comprise primary neighborhood pixel points and secondary neighborhood pixel points of the pixel points;
obtaining an optimal compensation value according to the first category combination and the second category combination;
compensating the reconstructed image by using the optimal compensation value;
and coding the compensated reconstructed image to further obtain a code stream.
2. The encoding method according to claim 1, wherein the classifying the reconstructed image by using the pixel values of each pixel point and the neighborhood pixel points in the reconstructed image to obtain a first class combination comprises:
and classifying the reconstructed image by using the pixel points, the primary neighborhood pixel points of the pixel points and the pixel values of at least part of the pixel points in the secondary neighborhood pixel points to obtain the first class combination.
3. The encoding method according to claim 2,
at least part of the pixels in the secondary neighborhood pixels are pixels in the first direction and the second direction of the pixels in the secondary neighborhood pixels.
4. The encoding method according to claim 2,
at least part of the pixels in the secondary neighborhood pixels are pixels which are positioned right above, right below, right left and right of the pixels in the secondary neighborhood pixels.
5. The encoding method according to claim 1, wherein the dividing the value range of the pixel value into a plurality of preset intervals comprises:
acquiring a value range of a corresponding pixel value based on the coded bit depth;
and carrying out uneven division on the value range of the pixel values to obtain a plurality of preset intervals.
6. The encoding method according to claim 1, wherein the dividing the value range of the pixel value into a plurality of preset intervals comprises:
acquiring a value range of a corresponding pixel value based on a coded bit depth, wherein the coded bit depth is 10 bits;
and uniformly dividing the value range of the pixel values to obtain preset intervals with the number larger than 16.
7. The encoding method according to claim 1, wherein the obtaining the optimal compensation value according to the first class combination and the second class combination comprises:
combining the first category combination and the second category combination by utilizing a Cartesian algorithm to further obtain a third category combination and a compensation value corresponding to each third category in the third category combination;
and calculating the optimal compensation value with a rate-distortion cost method based on the compensation value corresponding to each third category in the third category combination.
8. The encoding method according to claim 1, wherein the reconstructed image includes any one of a luminance reconstructed image and a chrominance reconstructed image.
9. The encoding method according to claim 8,
in response to the reconstructed image being the chrominance reconstructed image;
the obtaining an optimal compensation value according to the first category combination and the second category combination includes:
combining the first category combination and the second category combination by utilizing a Cartesian algorithm to further obtain a third category combination and a compensation value corresponding to each third category in the third category combination;
obtaining the optimal compensation value corresponding to the chrominance reconstruction image in a table look-up manner based on the third category combination and the compensation value corresponding to each third category in the third category combination; or
And obtaining the optimal compensation value corresponding to the chrominance reconstruction image in a table-look-up free mode based on the third category combination and the compensation value corresponding to each third category in the third category combination.
10. The encoding method according to any one of claims 1 to 9,
the code stream comprises a filtering mark and a syntax element, the filtering mark represents a coding unit which needs to be compensated in the reconstructed image, and the syntax element comprises the optimal compensation value.
11. An encoding apparatus, comprising:
the classification module is used for classifying the reconstructed image by using the pixel values of each pixel point and the neighborhood pixel points in the reconstructed image to obtain a first class combination; dividing the value range of the pixel values to obtain a plurality of preset intervals, and classifying the reconstructed image based on the preset interval in which the pixel values of the pixel points are located to obtain a second category combination; the number of the pixel values contained in at least part of the preset intervals in the plurality of preset intervals is different; and/or the neighborhood pixel points comprise primary neighborhood pixel points and secondary neighborhood pixel points of the pixel points;
the acquisition module is used for obtaining an optimal compensation value according to the first category combination and the second category combination;
the compensation module is used for compensating the reconstructed image by using the optimal compensation value;
and the coding module is used for coding the compensated reconstructed image so as to obtain a code stream.
12. An electronic device comprising a processor and a memory coupled to each other, wherein,
the memory is for storing program instructions for implementing the encoding method of any one of claims 1-10;
the processor is configured to execute the program instructions stored by the memory.
13. A computer-readable storage medium, characterized in that a program file is stored, which can be executed to implement the encoding method according to any one of claims 1 to 10.
CN202110427021.1A 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium Active CN113382246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110427021.1A CN113382246B (en) 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110427021.1A CN113382246B (en) 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113382246A true CN113382246A (en) 2021-09-10
CN113382246B CN113382246B (en) 2024-03-01

Family

ID=77569927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110427021.1A Active CN113382246B (en) 2021-04-20 2021-04-20 Encoding method, encoding device, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113382246B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134600A (en) * 2022-09-01 2022-09-30 浙江大华技术股份有限公司 Encoding method, encoder, and computer-readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102473280A (en) * 2009-07-31 2012-05-23 富士胶片株式会社 Image processing device and method, data processing device and method, program, and recording medium
US20120269458A1 (en) * 2007-12-11 2012-10-25 Graziosi Danillo B Method for Generating High Resolution Depth Images from Low Resolution Depth Images Using Edge Layers
CN104012095A (en) * 2011-12-22 2014-08-27 三星电子株式会社 Video Encoding Method Using Offset Adjustment According To Classification Of Pixels By Maximum Encoding Units And Apparatus Thereof, And Video Decoding Method And Apparatus Thereof
KR20150035943A (en) * 2015-03-12 2015-04-07 삼성전자주식회사 Method and apparatus for video encoding for compensating pixel value of pixel group, method and apparatus for video decoding for the same
US20180124408A1 (en) * 2015-05-12 2018-05-03 Samsung Electronics Co., Ltd. Image encoding method and device for sample value compensation and image decoding method and device for sample value compensation
CN109416749A (en) * 2017-11-30 2019-03-01 深圳配天智能技术研究院有限公司 Image gradient classification method, apparatus, and readable storage medium
US20190320170A1 (en) * 2016-10-11 2019-10-17 Lg Electronics Inc. Image encoding and decoding method and apparatus therefor
CN110383837A (en) * 2018-04-02 2019-10-25 北京大学 Method for video processing and equipment
CN111866507A (en) * 2020-06-07 2020-10-30 咪咕文化科技有限公司 Image filtering method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HONG HUANG: "Local Linear Spatial–Spectral Probabilistic Distribution for Hyperspectral Image Classification", IEEE Transactions on Geoscience and Remote Sensing, 23 October 2019 (2019-10-23) *
YU Kai: "Research on HDR Video Compression Based on Inter-frame Correlation", China Masters' Theses Full-text Database, 15 July 2019 (2019-07-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134600A (en) * 2022-09-01 2022-09-30 浙江大华技术股份有限公司 Encoding method, encoder, and computer-readable storage medium
CN115134600B (en) * 2022-09-01 2022-12-20 浙江大华技术股份有限公司 Encoding method, encoder, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113382246B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
US20220132127A1 (en) Image encoding device, image decoding device, and the programs thereof
CN110087083B (en) Method for selecting intra chroma prediction mode, image processing apparatus, and storage apparatus
CN108737875B (en) Image processing method and device
CN111310727B (en) Object detection method and device, storage medium and electronic device
CN113099230B (en) Encoding method, encoding device, electronic equipment and computer readable storage medium
CN115496668A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113382246A (en) Encoding method, encoding device, electronic equipment and computer readable storage medium
US7873226B2 (en) Image encoding apparatus
CN114998122A (en) Low-illumination image enhancement method
CN111179370A (en) Picture generation method and device, electronic equipment and storage medium
CN113781334A (en) Method, device, terminal and storage medium for comparing difference between images based on colors
US20120008861A1 (en) Image processing apparatus and compression method therefor
CN110213595B (en) Intra-frame prediction based encoding method, image processing apparatus, and storage device
CN108765503B (en) Skin color detection method, device and terminal
CN115499632A (en) Image signal conversion processing method and device and terminal equipment
CN109308690B (en) Image brightness balancing method and terminal
CN113613024B (en) Video preprocessing method and device
CN113691811B (en) Coding block dividing method, device, system and storage medium
CN115278225A (en) Method and device for selecting chroma coding mode and computer equipment
CN112437307B (en) Video coding method, video coding device, electronic equipment and video coding medium
CN109640086B (en) Image compression method, image compression device, electronic equipment and computer-readable storage medium
CN113422955B (en) HEIF image encoding method, HEIF image decoding method, HEIF image encoding device, HEIF image decoding device, HEIF image encoding program, and HEIF image decoding program
CN113382257A (en) Encoding method, encoding device, electronic equipment and computer readable storage medium
CN114339305B (en) Virtual desktop image processing method and related device
CN117097900A (en) Image frame encoding method, image frame encoding device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant