CN115278251A - Encoding method based on motion prediction encoding, display module and storage medium - Google Patents

Encoding method based on motion prediction encoding, display module and storage medium

Info

Publication number
CN115278251A
CN115278251A (application CN202210826306.7A)
Authority
CN
China
Prior art keywords
value
pixel
quantization
target pixel
reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210826306.7A
Other languages
Chinese (zh)
Inventor
孙林举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aoxian Technology Co ltd
Original Assignee
Shanghai Aoxian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Aoxian Technology Co ltd filed Critical Shanghai Aoxian Technology Co ltd
Priority to CN202210826306.7A
Publication of CN115278251A
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/88Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides an encoding method based on motion prediction coding, a display module and a storage medium. The encoding method comprises the following steps: acquiring a first pixel reconstruction value adjacent to the target pixel in a first dimension direction, a second pixel reconstruction value adjacent to the target pixel in a second dimension direction, and a third pixel reconstruction value adjacent to the target pixel in the direction of the included angle between the first dimension and the second dimension; and carrying out quantization coding on the error value of the target pixel according to a preset rule so as to obtain a quantization value of the target pixel. The encoding method, display module and storage medium based on motion prediction coding enable fast storage of heavily compressed compensation data and facilitate compensation of display pixels.

Description

Encoding method based on motion prediction encoding, display module and storage medium
Technical Field
The application relates to the technical field of display panels, in particular to a coding method based on motion prediction coding, a display module and a storage medium.
Background
Due to limitations of the crystallization process, LTPS TFTs fabricated on large-area glass substrates often show non-uniform electrical parameters, such as threshold voltage and mobility, at different positions. These differences translate into current and brightness differences of the OLED display devices that are perceptible to the human eye, i.e., the Mura phenomenon. The term Mura originates from Japanese and originally meant uneven brightness; it has since been extended to any color difference on a panel that the human eye can recognize. In the production of AMOLED display screens, materials, processes and other factors cause uneven picture brightness in some products. Such Mura, i.e., spot traces of uneven brightness, causes visual discomfort, and products with these traces generally cannot meet the specification requirements of terminal customers and can only be scrapped or downgraded.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: the external compensation system used in the AMOLED production process eliminates the Mura stripes of defective display screens through advanced sub-pixel-level optical imaging and software algorithms, so that the display quality of the screens meets the shipment specification of the panel factory. However, in the process of improving the mass-production yield of display screens, when a region containing many pixels is used as the basic block for data compression, the data volume grows rapidly and consumes a large amount of storage space.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The application provides an encoding method based on motion prediction encoding, a display module and a storage medium, which are used to alleviate the problem of uneven brightness in a displayed picture.
In one aspect, the present application provides a coding method based on motion prediction coding, and in particular, the coding method includes:
acquiring a first pixel reconstruction value adjacent to a first dimension direction of a target pixel, a second pixel reconstruction value adjacent to a second dimension direction of the target pixel, and a third pixel reconstruction value adjacent to an included angle direction of the first dimension and the second dimension of the target pixel;
calculating the sum of the first pixel reconstruction value and the second pixel reconstruction value, and subtracting the third pixel reconstruction value from the sum, to obtain a predicted value of the target pixel;
acquiring an original value of the target pixel in a display image of preset display data, and calculating the difference between the original value and the predicted value as an error value of the target pixel;
and carrying out quantization coding on the error value of the target pixel according to a preset rule so as to obtain a quantization value of the target pixel.
Optionally, the preset display data in the encoding method includes a gray scale picture or an RGBW picture.
Optionally, the original values of the target pixels in the encoding method comprise luminance values and/or chrominance values.
Optionally, the step of performing quantization coding on the error value of the target pixel according to the preset rule in the coding method includes:
and dividing the error value of the target pixel by a quantization parameter so as to quantize the error value to a preset value interval.
Optionally, the encoding method, when performing the step of dividing the error value of the target pixel by the quantization parameter, includes:
selecting the quantization parameter in a parameter table according to a parameter selection rule;
and taking the quotient of the error value of the target pixel divided by the quantization parameter as the quantized value of the target pixel.
Optionally, the step of acquiring a first pixel reconstruction value adjacent to the target pixel in the first dimension direction, a second pixel reconstruction value adjacent to the target pixel in the second dimension direction, and a third pixel reconstruction value adjacent to the target pixel in the first dimension and second dimension included angle direction includes:
reading the first pixel quantization value, the second pixel quantization value, and the third pixel quantization value;
and performing inverse quantization on the first pixel quantized value, the second pixel quantized value and the third pixel quantized value according to the reverse direction of the preset rule, and respectively and correspondingly acquiring a first pixel reconstruction value, a second pixel reconstruction value and a third pixel reconstruction value.
Optionally, the step of performing inverse quantization according to the preset rule in the encoding method includes:
reading a quantization value of a reference pixel and the quantization parameter;
and calculating the product of the quantization value and the quantization parameter as the reconstruction value of the reference pixel.
Optionally, before the step of obtaining a first pixel reconstruction value adjacent to the target pixel in the first dimension direction, a second pixel reconstruction value adjacent to the target pixel in the second dimension direction, and a third pixel reconstruction value adjacent to the target pixel in the first dimension and second dimension included angle directions, the encoding method includes:
in response to the acquisition of a preset initial value of a first row of pixels, acquiring an original value of a first pixel in the first row of pixels;
taking the difference between the original value of the first pixel and the preset initial value as an error value of the first pixel, and obtaining a quantization value and a reconstruction value of the first pixel according to the error value of the first pixel and the preset rule;
obtaining an original value of a second pixel of the first row of pixels, taking a difference between a reconstructed value of the first pixel and the original value of the second pixel as an error value of the second pixel, and obtaining a quantized value and a reconstructed value of the second pixel according to the error value of the second pixel and the preset rule.
On the other hand, the application also provides a display module, which specifically comprises a processor and a memory connected with the processor;
the memory having stored thereon a computer program;
the processor is used for executing the computer program read from the memory to realize the coding method.
In another aspect, the present application further provides a storage medium, in particular, the storage medium stores thereon a computer program, and the computer program, when executed by a processor, implements the encoding method as described above.
As described above, the encoding method, the display module and the storage medium based on motion prediction encoding provided by the present application enable fast storage of heavily compressed compensation data and facilitate compensation of display pixels.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly described below, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart illustrating an encoding method based on motion prediction coding according to an embodiment of the present application.
Fig. 2 is a flowchart of an encoding method based on motion prediction coding according to another embodiment of the present application.
FIG. 3 is a diagram of a target pixel according to an embodiment of the present application.
FIG. 4 is a diagram of a target pixel according to another embodiment of the present application.
The implementation, functional features and advantages of the object of the present application will be further explained with reference to the embodiments, and with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that incorporates the element. Furthermore, similarly named components, features or elements in different embodiments of the application may have the same meaning or different meanings; the specific meaning should be determined by their interpretation in the specific embodiment or by further combination with the context of that embodiment.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
First embodiment
In one aspect, the present application provides a method for coding based on motion prediction coding, and fig. 1 is a flowchart of a method for coding based on motion prediction coding according to an embodiment of the present application.
Referring to fig. 1, in an embodiment, an encoding method includes:
s10: and acquiring a first pixel reconstruction value adjacent to the first dimension direction of the target pixel, a second pixel reconstruction value adjacent to the second dimension direction of the target pixel, and a third pixel reconstruction value adjacent to the first dimension direction and the second dimension direction of the target pixel.
For example, the first dimension direction may be the horizontal direction and the second dimension direction may be the vertical direction, which is not limited in this application; the third pixel lies in the included-angle (diagonal) direction between the horizontal and vertical directions and is adjacent to the target pixel. Generally, the compensation values of neighboring pixels change approximately linearly, so they provide a useful reference when predicting and compressing the display compensation value of the target pixel. The value concerned may be a luminance value or a gray-scale value of each color level.
S20: And calculating the sum of the first pixel reconstruction value and the second pixel reconstruction value, and subtracting the third pixel reconstruction value from that sum, to obtain the predicted value of the target pixel.
Optionally, the predicted value of the target pixel is the sum of any two of the first pixel reconstruction value, the second pixel reconstruction value and the third pixel reconstruction value minus the remaining one. For example, the predicted value of the target pixel may be the sum of the first pixel reconstruction value and the third pixel reconstruction value minus the second pixel reconstruction value.
S30: and acquiring an original value of the target pixel in a display image of preset display data, and calculating the difference between the original value and the predicted value as an error value of the target pixel.
By calculating the deviation between the original value of the target pixel and the value predicted from the surrounding reference pixels, the amount of data needed to quantize the display compensation value can be reduced, which facilitates storage and transmission.
S40: and carrying out quantization coding on the error value of the target pixel according to a preset rule so as to obtain a quantization value of the target pixel.
In this embodiment, the encoding method predicts the value of the target pixel to be compensated from the compensation values of the surrounding pixels, takes the difference between the original compensation value and the predicted value as an error value, and then performs quantization encoding on that error value. By analogy, motion prediction is performed pixel by pixel and the error value of each pixel is encoded, which achieves faster and more efficient data compression and decompression and facilitates storage or transmission of the display compensation values of the target pixels. During operation of the display panel, when compensation is needed, the quantized values are read and reconstructed to obtain the reconstruction value of each pixel, and each pixel is then compensated with its corresponding reconstruction value.
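For illustration only, the following Python sketch walks through S10 to S40 under stated assumptions: the compensation map is a single-channel 2D array, the first dimension neighbor is the pixel to the left, the second dimension neighbor is the pixel above, the diagonal neighbor is the pixel to the upper left, the first row and first column fall back to the previous reconstruction (in the spirit of S15 to S17), a single fixed quantization parameter is used, and quotients are rounded toward zero. The names encode_plane, orig, qp and init are hypothetical and do not come from the patent.

```python
# Minimal sketch of the per-pixel motion-prediction encoding described above.
# Assumptions (not from the patent): left/top/top-left neighbors, fixed qp,
# rounding toward zero, and prediction added back to form the reconstruction.

def encode_plane(orig, qp=2, init=128):
    h, w = len(orig), len(orig[0])
    quant = [[0] * w for _ in range(h)]   # quantized error codes to store
    recon = [[0] * w for _ in range(h)]   # decoder-visible reconstructions

    for y in range(h):
        for x in range(w):
            if y == 0 and x == 0:
                pred = init                              # preset initial value
            elif y == 0:
                pred = recon[y][x - 1]                   # first row: previous pixel
            elif x == 0:
                pred = recon[y - 1][x]                   # first column: pixel above
            else:
                # S20: sum of the left and top reconstructions minus the diagonal one
                pred = recon[y][x - 1] + recon[y - 1][x] - recon[y - 1][x - 1]
            err = orig[y][x] - pred                      # S30: prediction error
            quant[y][x] = int(err / qp)                  # S40: quantize, remainder dropped
            recon[y][x] = pred + quant[y][x] * qp        # inverse quantization + prediction
    return quant, recon
```

A decoder that knows init and qp can repeat the same prediction loop and rebuild recon from quant alone, which is what makes the compressed codes sufficient for later pixel compensation.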
In an embodiment, the predetermined display data in the encoding method includes a gray scale image or an RGBW image.
For example, depending on the technology and requirements of different users, if only the luminance difference is compensated and the chrominance difference is not, only gray-scale pictures of the displayed image need to be detected. If the chrominance difference is to be compensated in addition to the luminance difference, RGBW pictures of the displayed image need to be detected.
In an embodiment, the original values of the target pixels in the encoding method comprise luminance values and/or chrominance values.
For example, if a gray-scale picture of the displayed image is detected, the original value to be compensated for the target pixel includes a luminance value. If an RGBW picture of the displayed image is detected, the original value of the target pixel also includes a chrominance value.
Exemplarily, the general De-Mura procedure is as follows:
a. A driver IC lights up the panel (TV/mobile/tablet) and displays several pictures (usually gray-scale or RGB).
b. The displayed pictures are photographed with a high-resolution, high-precision CCD camera.
c. Pixel color-distribution characteristics are analyzed from the camera data, and Mura is identified by a related algorithm.
d. Demura data is generated from the Mura data by a corresponding Demura compensation algorithm.
e. The Demura data is burned into a flash ROM, the compensated pictures are re-shot, and it is confirmed that the Mura is eliminated.
Illustratively, the AMOLED Demura includes the following detailed steps:
1. Picture capture: light up the AMOLED screen, import different pictures, collect the pictures with a CCD camera on the compensation equipment, and automatically identify the arrangement of the sub-pixels.
The pictures to be detected after lighting the panel generally differ according to the requirements of different panel factories; commonly used are RGB pictures at gray levels 32, 64, 96, 160, 192 and 224, 18 pictures in total.
The Demura of some panel factories compensates only the luminance difference and not the color difference. Such luminance Demura generally only requires detecting gray-scale pictures; because the Mura presented at different gray levels differs, Mura is usually detected at both high and low gray levels and the final Demura data is averaged, and each panel factory can of course choose the specific settings according to its own actual needs. Some panel factories perform a more comprehensive color Demura, i.e., not only the luminance but also the chrominance differences are compensated. Some color-Demura detection pictures are gray-scale pictures and some are RGBW pictures; different panel factories select different pictures according to their technology and requirements.
2. The collected pictures are imported into a high-performance PC (Demura tool software is installed on the PC).
3. Using the Demura tool software on the PC, the original data is extracted, the Mura region is calculated, the Mura boundary is detected, and compensation data is generated.
4. When the Demura function is enabled, the complete compensation data is extracted and superimposed on the original display data sent by the application side to generate new data, which is transmitted to the panel for display, and the Demura compensation effect is confirmed.
In one embodiment, the encoding method performs S40: the step of carrying out quantization coding on the error value of the target pixel according to a preset rule comprises the following steps:
s41: the error value of the target pixel is divided by the quantization parameter so that the error value is quantized to a preset value interval.
Optionally, the size of the quantization parameter is not limited, and a proper quantization parameter is selected according to a preset numerical interval and the quantization accuracy. Illustratively, the quantization parameter may be selected within a given data interval according to the degree of compression required.
In an embodiment, the encoding method performs S41: the step of dividing the error value of the target pixel by the quantization parameter comprises:
s42: selecting a quantization parameter from the parameter table according to a parameter selection rule;
s43: and dividing the error value of the target pixel by the quotient of the quantization parameter to obtain the integer of the quantization value of the target pixel.
For example, assuming that the error value of the target pixel is 9 and the error value is to be quantized into [-3, 4], the quantization parameter can be chosen as 2, and the quantization value of the target pixel is 4, obtained as the integer quotient of the error value divided by the quantization parameter. In the quantization calculation, the remainder is discarded as a compression error.
Illustratively, considering the number of storage bits, the quantization parameter may be selected from 0, 2, 4, 8, etc., so that after each compensation fluctuation value is divided by the quantization parameter and the quotient is rounded, all encoded values fall within the range -4, -3, -2, -1, 0, 1, 2, 3, 4. Illustratively, the original compensation values of eight pixels within a pixel block of one object are 32, 31, 32, 31, 32, 31, 32, 34, and the corresponding compensation error values are 0, -1, 0, -1, 0, -1, 0, 2. When the quantization parameter is chosen as 2, the quantization results of the error values are 0, 0, 0, 0, 0, 0, 0, 1, respectively. Alternatively, multi-level quantization may be used: data with relatively small fluctuation is quantized to 0, while data with relatively large fluctuation is kept and quantized for storage.
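As a quick numerical check of the rule just described, the tiny sketch below reproduces the worked numbers; the function name quantize and the rounding-toward-zero behaviour are assumptions consistent with the text, not a prescribed implementation.

```python
# Quantization only: integer quotient of the error value by the quantization
# parameter; the remainder is discarded as a compression error.

def quantize(error, qp):
    return int(error / qp)  # int() truncates toward zero

print(quantize(9, 2))  # -> 4, the remainder 1 is lost

block_errors = [0, -1, 0, -1, 0, -1, 0, 2]      # error values of the example block
print([quantize(e, 2) for e in block_errors])   # -> [0, 0, 0, 0, 0, 0, 0, 1]
```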
In one embodiment, the encoding method performs S10: the step of obtaining a first pixel reconstruction value adjacent to the target pixel in the first dimension direction, a second pixel reconstruction value adjacent to the target pixel in the second dimension direction, and a third pixel reconstruction value adjacent to the target pixel in the first dimension and second dimension included angle direction comprises:
s11: reading the first pixel quantized value, the second pixel quantized value and the third pixel re-quantized value;
s12: and performing inverse quantization on the first pixel quantized value, the second pixel quantized value and the third pixel quantized value according to the reverse direction of a preset rule, and respectively and correspondingly acquiring a first pixel reconstruction value, a second pixel reconstruction value and a third pixel reconstruction value.
Optionally, the encoding method performs inverse quantization on the first pixel quantized value, the second pixel quantized value and the third pixel quantized value according to the reverse of the preset rule to obtain the compensation values reconstructed after decoding, so as to compensate the pixels of the displayed picture.
In one embodiment, the encoding method performs S12: the inverse quantization step according to the reverse direction of the preset rule comprises the following steps:
s13: reading a quantization value and a quantization parameter of a reference pixel;
s14: and calculating the product of the quantization value and the quantization parameter as the reconstruction value of the reference pixel.
When the reconstruction value of a reference pixel (such as the first pixel, the second pixel or the third pixel) needs to be calculated, assume for example that the quantization value of the reference pixel is 4 and the quantization parameter is 2. The product of the quantization value and the quantization parameter, 8, is the reconstruction value of the reference pixel. During quantization, the remainder 1 of the original compensation value 9 was discarded as a compression error, so the reconstructed value differs from the original value by that remainder of 1.
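The decoding-side counterpart is equally small; dequantize and the sample numbers below simply restate the example above and are illustrative only.

```python
# Inverse quantization (S13/S14): reconstruction of the stored error is the
# product of the quantization value and the quantization parameter.

def dequantize(quant_value, qp):
    return quant_value * qp

print(dequantize(4, 2))  # -> 8; the original error 9 differs by the discarded remainder 1
```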
Fig. 2 is a flowchart of an encoding method based on motion prediction coding according to another embodiment of the present application.
Referring to fig. 2, in an embodiment, the encoding method performs S10: the steps of obtaining a first pixel reconstruction value adjacent to the target pixel in the first dimension direction, a second pixel reconstruction value adjacent to the target pixel in the second dimension direction, and a third pixel reconstruction value adjacent to the target pixel in the first dimension and second dimension included angle direction comprise:
s15: in response to acquiring the preset initial value of the first row of pixels, acquiring an original value of a first pixel in the first row of pixels.
Optionally, the preset initial value of the first row of pixels is not limited, and a suitable preset initial value is selected according to the required quantization precision. For example, referring to fig. 4, it is assumed that the RGB compensation value of the pixel at position PP is 128, so that the compensation value of the pixel at position P1 can be predicted. Optionally, during encoding the first pixel of the first row may serve as a reference for the entire motion prediction, and during decoding the original value of the first pixel of the first row may likewise be used as a decoding reference, so that the compensation error values of the other pixels can be decoded.
S16: and taking the difference between the original value of the first pixel and a preset initial value as an error value of the first pixel, and acquiring a quantization value and a reconstruction value of the first pixel according to the error value of the first pixel and a preset rule.
Illustratively, the quotient of the error value of the first pixel divided by the quantization parameter is the quantized value of the first pixel, and the product of the quantized value of the first pixel and the quantization parameter gives the reconstructed value of the first pixel. With reference to fig. 4, assuming that the RGB compensation value of the pixel at position PP is 128, an error value error1 of the compensation of the pixel at position P1 is obtained when its compensation value is predicted; error1 is then quantization-encoded and later decoded by inverse quantization, and the reconstructed compensation value of the pixel at position P1 is obtained as 128 + error1.
S17: and acquiring an original value of a second pixel of the first row of pixels, taking the difference between the reconstructed value of the first pixel and the original value of the second pixel as an error value of the second pixel, and acquiring a quantized value and a reconstructed value of the second pixel according to a preset rule according to the error value of the second pixel.
Optionally, the quantized value and the reconstructed value of the third pixel are obtained from the original value of the second pixel, and so on until the quantized values and reconstructed values of all pixels in the first row are obtained. Illustratively, continuing with reference to FIG. 4, the compensation value of the pixel at position P2 can be predicted from the reconstructed compensation value of the pixel at position P1. When the compensation values at positions P1 and P2 are relatively flat, a good prediction is obtained, the compensation error value error2 of the pixel at position P2 is relatively small, and the number of bits required to quantization-encode error2 is correspondingly small, which facilitates storage or transmission of the compensation data. Illustratively, assuming that the error value error1 of the pixel at position P1 is not quantized, error1 is not lost to quantization compression in the data stream; in that case the reconstructed compensation value of the pixel at position P1 equals P1.
Alternatively, the first column of pixels may be processed in the manner referred to above for the first row.
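The first-row (or first-column) bootstrap of S15 to S17 can be sketched as follows. The preset initial value 128 follows Fig. 4; the function name, the default qp=1 (meaning the first-row errors are stored without loss, so P1Recon equals P1), and the sample values are assumptions for illustration.

```python
# Sketch of the first-row bootstrap: each pixel is predicted from the previous
# reconstruction, starting from the preset initial value (PP = 128 in Fig. 4).

def bootstrap_first_row(row, init=128, qp=1):
    codes, recons = [], []
    prev_recon = init                      # PP: preset initial value
    for orig in row:                       # P1, P2, ... of the first row
        err = orig - prev_recon            # error against the previous reconstruction
        code = int(err / qp)               # quantize (qp = 1 keeps error1 intact)
        prev_recon = prev_recon + code * qp
        codes.append(code)
        recons.append(prev_recon)
    return codes, recons

codes, recons = bootstrap_first_row([130, 131, 131])
# codes -> [2, 1, 0]; recons -> [130, 131, 131] (lossless because qp = 1)
```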
Second embodiment
On the other hand, the present application further provides a display module, which specifically includes a processor and a memory connected to the processor.
The memory has stored thereon a computer program. The processor is used to execute the computer program read from the memory to implement the encoding method as described above.
FIG. 3 is a diagram of a target pixel according to an embodiment of the present application.
Referring to fig. 3, the exemplary steps of the display module executing the encoding method include:
the predicted value of the target pixel d position needing compensation is equal to reconD = reconA + (reconB-reconC); the reconstructed values of the pixels a, b and c are respectively represented by reconA, reconB and reconC, an error value error is obtained by subtracting a predicted value from an original value of a position d, the error interpolation is quantized, codes are stored in a RAM, the error value is quantized to a value between [ -3 and 4], then inverse quantization is performed to obtain a reconstructed value of the position d, if the error value is 9 and the quantization factor is equal to 2, the quantization value is 4 (discarded 1), the inverse quantization is 4 x 2=8, namely 8 is the reconstructed value, similarly, the reconstructed value of a region with the size of 4 x 8 can be obtained, and the quantized value and the quantization factor of the region are coded and stored in the RAM. Alternatively, the quantized value of the region may be retrieved, sharing the largest quantization factor. By analogy, the compensation value of each pixel of the whole area can be subjected to quantization coding with large-scale compression, so that the storage or transmission of compensation data is facilitated.
FIG. 4 is a diagram of a target pixel according to another embodiment of the present application.
Referring to fig. 4, the exemplary steps of the display module executing the encoding method include:
the prediction mode of the top row or the top column can be given an initial value, assuming that the first column pixel of the top row in the image is P1 and the second column pixel is P2, assuming that the RGB value of the preset PP position is 128, P1 is predicted by 128, error errr 1 can be obtained, P1Recon can be obtained by 128+ err 1 after the error is encoded and decoded, assuming that no quantization is performed on error r1, then error1 is not lost in the data stream, P1Recon = P1, then P1Recon is used to predict P2, when P1 and P2 are relatively flat, a good prediction is obtained, error2 of the obtained P2 pixel is relatively small, the number of bits required for coding error2 is relatively small, and other subsequent pixels can be analogized.
In the working process of the display panel, when the display panel needs to be compensated, the quantized values are read for reconstruction, the reconstruction value of each pixel is obtained, and therefore each reconstruction value is correspondingly used for compensating each corresponding pixel.
Third embodiment
On the other hand, the present application also provides a storage medium, in particular, a storage medium having a computer program stored thereon, wherein the computer program realizes the above encoding method when being executed by a processor.
In the embodiments of the display module and the storage medium provided in the present application, all technical features of any one of the above method embodiments may be included, and the expanding and explaining contents of the specification are substantially the same as those of each embodiment of the above method, and are not described herein again.
As described above, the encoding method, the display module and the storage medium based on motion prediction coding provided by the present application enable fast storage of heavily compressed data and facilitate compensation of display pixels.
It should be noted that, step numbers such as S10 and S20 are used in the present application for the purpose of more clearly and briefly describing corresponding contents, and no substantial limitation on the sequence is made, and those skilled in the art may perform S20 first and then S10, etc. in the specific implementation, but these should be within the protection scope of the present application.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions will be generally described only in detail at the first occurrence, and when the description is repeated later, the detailed description will not be repeated in general for brevity, and when understanding the technical solutions and the like of the present application, reference may be made to the related detailed description before the description for the same or similar term concepts, technical solutions and/or application scenario descriptions and the like which are not described in detail later.
In the present application, each embodiment is described with an emphasis on the description, and reference may be made to the description of other embodiments for parts that are not described or recited in any embodiment.
All possible combinations of the technical features in the embodiments are not described in the present application for the sake of brevity, but should be considered as the scope of the present application as long as there is no contradiction between the combinations of the technical features.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A coding method based on motion prediction coding, the coding method comprising:
acquiring a first pixel reconstruction value adjacent to a first dimension direction of a target pixel, a second pixel reconstruction value adjacent to a second dimension direction of the target pixel, and a third pixel reconstruction value adjacent to a first dimension and a second dimension included angle direction of the target pixel;
calculating the sum of the first pixel reconstruction value and the second pixel reconstruction value, and subtracting the third pixel reconstruction value from the sum, to obtain a predicted value of the target pixel;
acquiring an original value of the target pixel in a display image of preset display data, and calculating the difference between the original value and the predicted value as an error value of the target pixel;
and carrying out quantization coding on the error value of the target pixel according to a preset rule so as to obtain a quantization value of the target pixel.
2. The encoding method as claimed in claim 1, wherein the predetermined display data comprises a gray scale picture or an RGBW picture.
3. The encoding method of claim 1, wherein the original value of the target pixel comprises a luminance value and/or a chrominance value.
4. The encoding method as claimed in claim 1, wherein the step of quantization-encoding the error value of the target pixel according to the preset rule comprises:
and dividing the error value of the target pixel by a quantization parameter so as to quantize the error value to a preset value interval.
5. The encoding method of claim 4, wherein the step of dividing the error value of the target pixel by a quantization parameter comprises:
selecting the quantization parameter in a parameter table according to a parameter selection rule;
and taking the quotient of the error value of the target pixel divided by the quantization parameter as the quantized value of the target pixel.
6. The encoding method according to any one of claims 1 to 4, wherein the step of obtaining a first pixel reconstruction value adjacent to the target pixel in a first dimension direction, a second pixel reconstruction value adjacent to the target pixel in a second dimension direction, and a third pixel reconstruction value adjacent to the target pixel in the first dimension direction and the second dimension direction at an included angle comprises:
reading the first pixel quantization value, the second pixel quantization value, and the third pixel quantization value;
and performing inverse quantization on the first pixel quantization value, the second pixel quantization value and the third pixel quantization value according to the reverse direction of the preset rule, and respectively and correspondingly obtaining a first pixel reconstruction value, a second pixel reconstruction value and a third pixel reconstruction value.
7. The encoding method of claim 6, wherein the inverse quantization step according to the inverse of the preset rule comprises:
reading a quantization value of a reference pixel and the quantization parameter;
and calculating the product of the quantization value and the quantization parameter as the reconstruction value of the reference pixel.
8. The encoding method according to any one of claims 1 to 7, wherein the step of obtaining a first pixel reconstruction value of the target pixel adjacent to the first dimension direction, a second pixel reconstruction value of the target pixel adjacent to the second dimension direction, and a third pixel reconstruction value of the target pixel adjacent to the first dimension direction and the second dimension direction at an angle comprises:
in response to the acquisition of a preset initial value of a first row of pixels, acquiring an original value of a first pixel in the first row of pixels;
taking the difference between the original value of the first pixel and the preset initial value as an error value of the first pixel, and obtaining a quantization value and a reconstruction value of the first pixel according to the error value of the first pixel and the preset rule;
obtaining an original value of a second pixel of the first row of pixels, taking a difference between a reconstructed value of the first pixel and the original value of the second pixel as an error value of the second pixel, and obtaining a quantized value and a reconstructed value of the second pixel according to the error value of the second pixel and the preset rule.
9. A display module is characterized by comprising a processor and a memory connected with the processor;
the memory having stored thereon a computer program;
the processor is configured to execute the computer program read from the memory to implement the encoding method according to any one of claims 1 to 8.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, implements the encoding method according to any one of claims 1 to 8.
CN202210826306.7A 2022-07-14 2022-07-14 Encoding method based on motion prediction encoding, display module and storage medium Pending CN115278251A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210826306.7A CN115278251A (en) 2022-07-14 2022-07-14 Encoding method based on motion prediction encoding, display module and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210826306.7A CN115278251A (en) 2022-07-14 2022-07-14 Encoding method based on motion prediction encoding, display module and storage medium

Publications (1)

Publication Number Publication Date
CN115278251A (en) 2022-11-01

Family

ID=83764765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210826306.7A Pending CN115278251A (en) 2022-07-14 2022-07-14 Encoding method based on motion prediction encoding, display module and storage medium

Country Status (1)

Country Link
CN (1) CN115278251A (en)

Similar Documents

Publication Publication Date Title
US8244049B2 (en) Method, medium and apparatus effectively compressing and restoring edge position images
US8526730B2 (en) Image processing apparatus and method of processing color image data that perform overdrive
CN106898286A (en) Mura defect-restoration method therefors and device based on specified location
CN109479151B (en) Pixel processing with color components
JP2010011386A (en) Image processing circuit and display panel driver packaging the same, and display
CN110349537B (en) Display compensation method, device, computer equipment and storage medium
KR102103730B1 (en) Display driving device and display device including the same
EP3252748B1 (en) Method for compressing data and display device using the same
KR102393724B1 (en) Display device, method of detecting and compensating a mura thereof
EP3594933B1 (en) Device and method of color transform for rgbg subpixel format
CN115206234A (en) Display panel compensation data coding method, display module and storage medium
CN111833795B (en) Display device and mura compensation method of display device
EP1784810A2 (en) Method, device and system of response time compensation
US9076408B2 (en) Frame data shrinking method used in over-driving technology
US9123090B2 (en) Image data compression device, image data decompression device, display device, image processing system, image data compression method, and image data decompression method
EP3255628B1 (en) Method for compressing data and organic light emitting diode display device using the same
US8983215B2 (en) Image processing device and image processing method
CN115278251A (en) Encoding method based on motion prediction encoding, display module and storage medium
CN112527223A (en) Method and system for stress compensation in a display device
CN115394249B (en) OLED display panel driving method, OLED display panel driving device, electronic equipment and computer storage medium
CN113870768B (en) Display compensation method and device
KR20140043673A (en) Compressor, driving device, and display device
US10504414B2 (en) Image processing apparatus and method for generating display data of display panel
KR20150028716A (en) Image encoding apparatus and image encoding method
US8576246B2 (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination