CN116170686A - Video stream dithering processing method and device - Google Patents

Video stream dithering processing method and device

Publication number: CN116170686A
Application number: CN202111382331.2A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 黄龙, 杨锟
Current Assignee / Original Assignee: Glenfly Tech Co Ltd
Priority and filing date: 2021-11-22
Publication date: 2023-05-26
Legal status: Pending
Prior art keywords: carry, frame, dithering, lookup table
Abstract

The invention provides a video stream dithering processing method and device. The method comprises the following steps: acquiring image data for video stream dithering processing; determining a lookup table according to the acquired image data; judging, according to the carry flags of the lookup table, whether the color channel data of the video stream need to be carried; completing the dithering according to the carry flags, and outputting the dithered color channel data. The video stream performs dithering on the color channel data of each frame of image with the frame refresh interval as a period. The video-stream-based dithering processing method and device can effectively resolve the distortion introduced when image data are mapped to a lower bit depth, so that the mapping is more random.

Description

Video stream dithering processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a video stream dithering method and apparatus for a flat panel display.
Background
In the field of LED screen display, high-bit image data is required to obtain finer color information. For example, the common three primary color (RED, GREEN, BLUE) channels are 8 bits each (0-255 gray levels); to obtain richer color information, high-bit 14-bit image data (0-16383 gray levels) is required to display more vivid color gradations. However, due to hardware limitations, the color gray scale that an LED display screen can show (0-255 gray levels) is far smaller than the gray scale (0-16383 gray levels) represented by the high-bit data. When the LED screen displays, the high-bit data must be mapped (also called "converted" or "quantized") to low-bit data, so color information is lost, color banding appears, and image quality suffers.
To mitigate color banding, conventional methods include random-number dithering and error-diffusion dithering. However, random-number dithering can introduce image noise, and error-diffusion dithering, because errors are propagated between neighboring pixels, breaks the independence of the individual sub-pixels of the LED screen, while its fixed matrix-search pattern cannot produce randomly distributed noise errors. In view of this, the present invention provides a video-stream-based dithering method and apparatus that effectively resolve the distortion after image data mapping, so that the mapping is more random.
Disclosure of Invention
The invention provides a dithering processing method and apparatus based on a video stream, which better resolve the distortion after color quantization, so that the color quantization (compression) is more random and dynamic.
According to an embodiment of the invention, a video stream dithering processing method is provided. The method comprises the following steps: acquiring image data for video stream dithering processing; determining a lookup table according to the acquired image data; judging, according to the carry flags of the lookup table, whether the color channel data of the video stream need to be carried; completing the dithering according to the carry flags, and outputting the dithered color channel data; the video stream performs dithering on the color channel data of each frame of image with the frame refresh interval as a period.
In some embodiments, the number of lookup tables is determined based on the binary data to be shifted out of the color channel data; the binary data to be shifted out are converted into a decimal value, which is then used to determine the number of lookup tables; the number of carry flags on each lookup table is determined based on the values within the decimal value range; and the plurality of lookup tables are generated correspondingly according to the number of lookup tables and the number of carry flags.
In some embodiments, the number of look-up tables is the maximum of the decimal values multiplied by the frame refresh interval.
In some embodiments, the number of carry flags corresponds to a ratio determined by the decimal value, and the number of carry flags on each lookup table is determined from that ratio.
In some embodiments, the ratio is equal to the decimal value divided by the size of the image tile.
In some embodiments, the number of carry flags in the lookup table satisfies: within the size range of the image block, the number of carry flags of the sub-pixels in any row of the lookup table is consistent with the number of carry flags of the sub-pixels in the corresponding column.
In some embodiments, the number of carry flags in the lookup table satisfies: dividing a plurality of cells in the size range of the image block, and determining the number of carry marks in the cells according to the ratio.
In some embodiments, the lookup table is determined based on the frame number of the image data: the frame number is taken modulo the frame refresh interval to obtain a remainder, and the lookup table corresponding to the frame number is determined based on the remainder and the plurality of lookup tables.
In some embodiments, the carry flag is determined, according to the lookup table corresponding to the frame to be processed, based on the ratio.
In some embodiments, the carry means that, after the binary data of the color channel data are shifted, whether a carry is required is determined based on the lookup table.
In some embodiments, the lookup table is a stepped (gear-stage) carry flag lookup table.
In some embodiments, the lookup table is a carry data amount lookup table.
According to an embodiment of the present invention, a video stream dithering processing apparatus is provided, comprising: an image processing interface unit module for acquiring image data for video stream dithering processing; a lookup table unit module for determining a lookup table according to the acquired image data; and a dithering processing module for judging, according to the carry flags of the lookup table, whether the color channel data of the video stream need to be carried, completing the dithering based on the carry flags, and outputting the dithered color channel data; the video stream performs dithering on the color channel data of each frame of image with the frame refresh interval as a period.
Drawings
FIG. 1 is a diagram illustrating a dithering processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a video stream dithering method according to an embodiment of the present invention;
FIG. 3A is a flow chart illustrating a video stream dithering method according to another embodiment of the present invention;
FIG. 3B is a flowchart illustrating a video stream dithering method according to another embodiment of the present invention;
FIG. 4A is a schematic diagram of a front half of an image BLOCK according to an embodiment of the present invention;
FIG. 4B is a diagram of a second half of a BLOCK of images according to an embodiment of the present invention;
FIG. 5A is a diagram of a first half of a lookup table corresponding to an image BLOCK according to an embodiment of the present invention;
FIG. 5B is a diagram of a second half of a lookup table corresponding to an image BLOCK according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a video stream dithering method according to a generated lookup table according to another embodiment of the present invention;
FIG. 7A is a schematic diagram illustrating a hardware module of an apparatus for dithering video stream according to an embodiment of the present invention;
FIG. 7B is a schematic diagram of a hardware module of an apparatus for dithering a video stream according to another embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to."
Certain terms are used throughout the description and claims to refer to particular components or modules. Those skilled in the art will appreciate that hardware manufacturers may refer to the same component or module by different names. The description and claims do not distinguish components or modules by name, but by function.
Fig. 1 is a schematic diagram of a dithering processing method according to an embodiment of the present invention. As shown in fig. 1, 14-bit image data is input in step S102. It should be noted that 14-bit image data is used for illustration throughout the specification and claims; those skilled in the art will understand that bit depths grow as display technology develops, and the present invention is not limited in this respect. Throughout the description and claims, the stated 6 bits, 8 bits, 10 bits and 14 bits refer to 6, 8, 10 or 14 bits for each color channel of the three primary colors (red, green and blue). For example, inputting 14-bit image data means inputting image data with a 14-bit red channel, a 14-bit green channel and a 14-bit blue channel, i.e., 42 bits in total. The input 14-bit image data may be received by an image processing interface unit of an intelligent electronic device, for example the image processing interface unit of a smartphone, computer, VR device or other electronic device; the present invention is not limited in this respect.
Next, in step S104, the input 14-bit image data is processed by the dithering method, and 10-bit image data is then output. It should be noted that the most straightforward way to map (also called "convert" or "quantize") 14-bit image data to 10-bit image data is to directly delete the four least significant bits of each color channel. However, directly deleting the four least significant bits means sacrificing that part of the color resolution, which causes distortion perceptible to the human eye and reduces the display quality. To compensate for this distortion, a dithering processing method can be used to improve the visual quality of the image. For a detailed description of the dithering processing method of the present invention, please refer to figs. 2 to 6; it is not described here.
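As an illustration only (not part of the patent text), a minimal sketch in Python of the plain truncation that the dithering method is designed to improve on, assuming 14-bit per-channel input and 10-bit output:

def truncate_14_to_10(value_14bit: int) -> int:
    """Plain truncation: drop the 4 least significant bits of a channel value.

    This is the naive mapping that produces visible banding; the dithering
    method described below instead decides per sub-pixel whether to add a carry.
    """
    return value_14bit >> 4

# Two 14-bit values that differ only in the discarded low 4 bits collapse
# to the same 10-bit value, which is what causes banding.
assert truncate_14_to_10(0b01100010011010) == 0b0110001001
assert truncate_14_to_10(0b01100010010001) == 0b0110001001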
Finally, in step S106: the image data format accepted by electronic devices such as smartphones and notebook computers is typically 8 bits (i.e., 8 bits for each of the red, green and blue channels), whereas the image data output after dithering here is 10 bits. How to convert the dithered 10-bit image data so that it can be shown on an 8-bit LED display belongs to the knowledge of those skilled in the art, is outside the technical scope of the present invention, and is not described here. It should be emphasized, however, that without departing from the technical idea of the video stream dithering method described in the present invention, the dithered data may also be used directly by the display screen of an intelligent electronic device. For example: if input 14-bit image data becomes 10-bit, 8-bit or 6-bit data after dithering, it can be displayed directly on a 10-bit, 8-bit or 6-bit display screen; if input 10-bit image data becomes 8-bit or 6-bit data after dithering, it can be displayed directly on an 8-bit or 6-bit display screen; if input 8-bit image data becomes 6-bit data after dithering, it can be displayed directly on a 6-bit display screen. In other words, after high-bit image data has been dithered, it can be displayed directly, without further conversion, as long as it conforms to the data format/standard requirements of the corresponding electronic display screen.
Fig. 2 is a flowchart of a video stream dithering processing method according to an embodiment of the invention. As shown in fig. 2, in step S202, image data for video stream dithering processing is acquired. The image data includes data used for image processing, such as the current frame number FRAME_CUR of the video stream, color channel data (also called color channel pixel values), and pixel position data (i.e., the sub-pixel row number CUR_ROW and column number CUR_COL, or other coordinates that can define the pixel position).
Next, in step S204, a lookup table is determined from the acquired image data.
Next, in step S206, it is determined whether the color channel data of the video stream needs to be carried according to the carry flag of the lookup table.
Next, in step S208, dithering is completed according to the carry flag, and dithering-processed color channel data is output; the video stream carries out dithering processing on color channel data of each frame of image by taking a frame refreshing interval as a period.
Details of steps S204, S206 and S208 are described in fig. 3 to 6, and refer to the following description.
Fig. 3A is a flowchart of a video stream dithering processing method according to another embodiment of the present invention. As shown in fig. 3A, in step S302A, high-bit R/G/B channel data is input. For example, in the present embodiment, the high-bit R/G/B channel data is binary data (i.e., 14-bit binary data).
Next, in step S304, the current frame number FRAME_CUR, the preset frame refresh interval FRAME_N, and the sub-pixel row number CUR_ROW and column number CUR_COL of the channel data are acquired. In one embodiment of the present invention, the frame number actually used is FRAME_CUR taken modulo the frame refresh interval FRAME_N. For example, assuming the frame refresh interval FRAME_N equals 16 frames, when the current frame number FRAME_CUR of the channel data is the 34th frame, taking 34 modulo 16 yields frame 2; that is, the 34th frame falls within a dithering period of FRAME_N = 16 frames and is treated as the 2nd frame of a new dithering period. In other words, in this embodiment, the video stream uses one frame refresh interval FRAME_N as a period, and each frame of image data within that period is dithered differently; within a frame refresh interval FRAME_N taken as the dithering period, the dithering applied to each frame of image data is random.
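A minimal sketch of this frame-selection step (illustrative only; FRAME_N = 16 is the example value used in the text):

FRAME_N = 16  # frame refresh interval: frames per dithering period in the examples

def frame_index_in_period(frame_cur: int, frame_n: int = FRAME_N) -> int:
    """Map the absolute frame number onto its position inside the dithering period."""
    return frame_cur % frame_n

# The 34th frame is treated as frame 2 of a new dithering period, so the
# set of 15 lookup tables belonging to frame 2 will be used for it.
assert frame_index_in_period(34) == 2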
It should be noted that, within the scope of the technical idea of the present invention, a person skilled in the art may apply different dithering processing to each frame of image data in the video stream in other manners. For example: different dithering modes may be used for odd and even frames; or different dithering modes may be selected according to the last digit of the frame number (for example, one dithering mode when the last digit is 3, 5, 7 or 9, and another dithering mode otherwise); or a special flag may be added to each frame when the video stream is encoded, with different dithering modes applied according to that flag, and so on. It should be understood by those skilled in the art that, besides the three manners above, applying a random dithering process to each frame of image data in other ways also falls within the scope of the technical idea of the present invention.
In step S304, the sub-pixel row number CUR_ROW and column number CUR_COL of the channel data are acquired so that the position of the sub-pixel in the lookup table, and its carry flag, can be obtained in steps S306 and S308.
Next, in step S306, the position of the R/G/B channel data in the lookup table is acquired. How the look-up table is generated and the positions of the sub-pixels in the look-up table and their carry flags are obtained will be described in detail next.
Assume that the width and height of a frame of image in the video stream are FRAME_W × FRAME_H, and that each frame of image is divided into a plurality of blocks (BLOCK) of 16 × 16 pixels. A BLOCK of 16 × 16 pixels contains 16 × 48 sub-pixels (each of the 16 × 16 pixels consists of 3 sub-pixels R, G and B, so each row holds 16 × 3 = 48 sub-pixels), i.e., 16 rows by 48 columns, as shown by the combination of fig. 4A and fig. 4B.
Each BLOCK is further subdivided into a plurality of M × M cells, as indicated by the dotted frame Line0-Line3 × Pixel1-Pixel4 in fig. 4A.
Fig. 4A is a schematic diagram of a front half of an image BLOCK according to an embodiment of the invention.
Fig. 4B is a schematic diagram of a second half of a BLOCK according to an embodiment of the invention.
Three points need to be explained for the illustrated portions of fig. 4A and 4B. First, the top of fig. 4A and 4B is labeled Pixel1-Pixel16, and the BLOCK has 16 × 3 = 48 columns when counted by column; the leftmost Line0-Line15 in fig. 4A indicates that the BLOCK has 16 rows when counted by row, so the BLOCK is 16 rows by 48 columns. Second, the second column in fig. 4A (i.e., the first column of Pixel1, shown by the dotted line in fig. 4A) uses R1 to collectively denote the sub-pixel R of Pixel1. It will be appreciated by those skilled in the art that the values of all the R sub-pixels in the first column of Pixel1 in the BLOCK are not necessarily the same; a single symbol is used only for ease of illustration. In other words, once R1 in the first column of Pixel1 is known, only the corresponding row number is needed to identify that pixel's sub-pixel R1 in the BLOCK. Equivalently, in the 16-row by 48-column BLOCK, only the sub-pixel row number and column number are needed to locate the value of a sub-pixel. Finally, in the illustrations of fig. 4A, 4B, 5A and 5B, individual sub-pixels are marked with a background color; the meaning of these background-color marks is explained below in the description of the lookup table carry flags and is not repeated here.
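As an illustration of locating a sub-pixel inside a 16 × 48 BLOCK (a sketch assuming 0-based row/column counting and R, G, B sub-pixel order; these conventions are assumptions, not stated in the patent):

def subpixel_position_in_block(cur_row: int, cur_col: int, channel: int) -> tuple[int, int]:
    """Return the (row, column) of a sub-pixel inside its 16x48 BLOCK.

    cur_row, cur_col : pixel coordinates within the frame
    channel          : 0 for R, 1 for G, 2 for B
    """
    block_row = cur_row % 16                  # row inside the BLOCK (Line0..Line15)
    block_col = (cur_col % 16) * 3 + channel  # one of the 48 sub-pixel columns
    return block_row, block_col

# The G sub-pixel of the pixel at frame coordinates (row 19, column 33)
# lands in row 3, sub-pixel column 4 of its BLOCK.
assert subpixel_position_in_block(19, 33, 1) == (3, 4)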
Next, the generation of a lookup table for a BLOCK of 16 x 16 pixel size will be described with emphasis.
In the embodiment of the invention, the 14-bit data are shifted right by 4 bits after dithering, yielding 10-bit image data. For example, if the image in the video stream is 14-bit data, then when mapping it to 10-bit data, 4 bits of binary data need to be discarded; however, the discarded 4 bits are not simply deleted, but must be handled according to the dithering processing method provided by the present invention.
For example, the 4-bit binary data to be discarded range from 0000 to 1111, and the corresponding decimal value is {X | 0 ≤ X ≤ 15, X ∈ ℕ}.
When the decimal value corresponding to the discarded 4-bit binary data is 1, the "lighting ratio" is 1/16. That is, in each row of fig. 4A and 4B (combined), the "lit" R sub-pixels account for 1/16 of the R sub-pixels in the row, the "lit" G sub-pixels account for 1/16 of the G sub-pixels in the row, and the "lit" B sub-pixels account for 1/16 of the B sub-pixels in the row (e.g., in the first-row dot-dash frame formed by combining fig. 4A and 4B, one R1 sub-pixel, one G1 sub-pixel and one B1 sub-pixel carry the background-color mark). Similarly, per column of fig. 4A and 4B, the "lit" R sub-pixels account for 1/16 of the R sub-pixels in the column, the "lit" G sub-pixels for 1/16 of the G sub-pixels, and the "lit" B sub-pixels for 1/16 of the B sub-pixels (e.g., in the first-column dotted frame of fig. 4A, one R1 sub-pixel carries the background-color mark). Meanwhile, within each M × M cell of the BLOCK, the "lit" R sub-pixels account for 1/16 of the R sub-pixels in the cell, the "lit" G sub-pixels for 1/16 of the G sub-pixels, and the "lit" B sub-pixels for 1/16 of the B sub-pixels (e.g., in the Line0-Line3 × Pixel1-Pixel4 dotted frame of fig. 4A, one R1, one G1 and one B1 sub-pixel carry the background-color mark).
When the decimal value corresponding to the discarded 4-bit binary data is 2, the "lighting ratio" is 2/16: in each row, the "lit" R, G and B sub-pixels each account for 2/16 of the corresponding sub-pixels in the row (not shown in fig. 4A and 4B); likewise, in each column the "lit" R, G and B sub-pixels each account for 2/16 of the corresponding sub-pixels in the column (not shown); and within each M × M cell of the BLOCK, the "lit" R, G and B sub-pixels each account for 2/16 of the corresponding sub-pixels in the cell (not shown).
Similarly, when the decimal value corresponding to the discarded 4-bit binary data is 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 or 15, the sub-pixels are "lit" in the same manner, giving 2 + 13 = 15 "lighting" patterns in total.
To sum up, in this embodiment, the "lighting ratio" = the decimal value corresponding to the discarded 4-bit binary data divided by the size of the BLOCK (i.e., 16).
Fig. 5A is a schematic diagram of a first half of a lookup table corresponding to an image BLOCK according to an embodiment of the invention.
Fig. 5B is a schematic diagram of a second half of a lookup table corresponding to an image BLOCK according to an embodiment of the invention.
According to the above method of "lighting" the sub-pixels, when the decimal value corresponding to the discarded 4-bit binary data is 1, the lookup table of the sub-pixels requiring "lighting" in the BLOCK is shown by the combination of fig. 5A and 5B; that is, figs. 5A and 5B combined form the lookup table of the BLOCK generated when the decimal value corresponding to the discarded 4-bit binary data is 1. In the lookup table, "not lit" and "lit" are indicated by binary data 0 and 1; however, the sub-pixel "lit" and carry flags may also be represented with letters such as A or B, or with other characters or identifiers. In this embodiment, each sub-pixel position in the lookup table is marked with the numeral 1 as a "lit" (or "carry") flag and the numeral 0 as a "not lit" (or "no carry") flag.
In some embodiments of the invention, each lookup table may be stored in other forms such as TXT text, an EXCEL table, and the like.
Based on the above "lighting" processing of the sub-pixels, once the "lighting" of the other BLOCKs in the frame image is completed, there are 15 "lighting" patterns for one frame of image in the video stream. That is, according to the above method of "lighting" the sub-pixels, one frame of video stream image data generates 15 lookup tables (i.e., one for each decimal value 1, 2, 3, ..., 15 of the discarded 4-bit binary data, corresponding to "lighting ratios" 1/16, 2/16, ..., 15/16). Within one frame refresh interval FRAME_N, each frame has its own set of 15 lookup tables (for example, with a frame refresh interval FRAME_N of 16 frames, there are 15 × 16 = 240 lookup tables in total).
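As a simplified sketch of how one such table could be generated for one color channel of one BLOCK (an illustration under stated assumptions, not the patent's own construction: a circulant pattern with random row and column shuffles guarantees the per-row and per-column ratios described above, but does not enforce the per-cell constraint):

import random

def make_channel_table(k: int, size: int = 16) -> list[list[int]]:
    """Build a size x size carry-flag table with exactly k ones per row and per column.

    k is the decimal value of the discarded 4 bits, so the 'lighting ratio' is k/size.
    """
    base = [1] * k + [0] * (size - k)
    # A circulant matrix (each row is the previous row rotated by one position)
    # has exactly k ones in every row and in every column.
    table = [base[-r:] + base[:-r] if r else list(base) for r in range(size)]
    # Random row and column permutations keep the row/column sums
    # while randomizing which sub-pixels are 'lit'.
    random.shuffle(table)
    cols = list(range(size))
    random.shuffle(cols)
    return [[row[c] for c in cols] for row in table]

# Lookup table for lighting ratio 5/16: every row and every column has 5 carry flags.
table = make_channel_table(5)
assert all(sum(row) == 5 for row in table)
assert all(sum(row[c] for row in table) == 5 for c in range(16))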
The term "lighting" is used for explanation in this embodiment, and is described with reference to fig. 4A, 4B, 5A and 5B, that is, when the decimal value corresponding to the discarded 4-bit binary data is 1 in the portion of fig. 4A, 4B, 5A and 5B, the sub-pixel that needs to be "lit" in the BLOCK is indicated by adding the ground color (for example, the first row R1 sub-pixel in fig. 4A and 5A is indicated by adding the ground color).
It should be noted that, in figs. 4A, 4B, 5A and 5B, choosing to "light" the R1 sub-pixel in the first row and first column when the decimal value corresponding to the discarded 4-bit binary data is 1 is only for illustration; it does not mean that the first-row, first-column R1 sub-pixel is necessarily "lit" in that case. When the decimal value is 1, it is only required that the "lit" R sub-pixels in a row account for 1/16 of the R sub-pixels in that row. For example, the "lit" R sub-pixel of the second row could be R9 (e.g., the background-color mark at R9 in the second row of fig. 4B). In other words, the "lighting" in this embodiment is generated randomly and requires no manual setting.
It should also be noted that when the decimal value corresponding to the discarded 4-bit binary data is 0, the above processing is not applied: the 4 bits may simply be discarded directly, or, alternatively, a carry of 1 may still be applied after discarding them. For example, if a sub-pixel is 0110001001 after discarding its last four bits and the decimal value of the discarded 4 bits is 0, then under carry processing 1 is added and the value becomes 0110001010; under no-carry processing the value remains 0110001001.
In this embodiment, a single BLOCK with a discarded-4-bit decimal value of 1 is taken as an example; the processing of the other BLOCKs into which each frame of the video stream is divided follows the same technical idea and is not repeated here. In this embodiment, when the decimal value corresponding to the discarded 4-bit binary data is 1, the BLOCK generates the lookup table shown by the combination of fig. 5A and 5B.
Next, in step S308, according to the generated lookup tables and the current frame number FRAME_CUR, the row number CUR_ROW and column number CUR_COL of the R/G/B sub-pixel, and the frame refresh interval FRAME_N, the position of the sub-pixel in the lookup table can be determined, and whether carry processing is required when discarding the 4-bit binary data can be determined from the carry flag in the lookup table.
Next, in step S310, a dithering method is performed according to the above-described lookup table, and dithering is completed.
The process in steps S306, S308 and S310 of obtaining the position of the R/G/B channel data in the lookup table and its carry flag in order to complete the dithering is further described with reference to fig. 6.
Fig. 3B is a flowchart illustrating a video stream dithering processing method according to another embodiment of the present invention. As shown in fig. 3B, in step S302B, Degamma-processed high-bit R/G/B channel data is input. In an embodiment of the present invention, the input high-bit R/G/B channel data may already be linear R/G/B channel data, in which case no Degamma processing is needed. In this embodiment, if the input high-bit R/G/B channel data has been Gamma-processed, Degamma processing is required to obtain linear R/G/B channel data before applying the dithering processing method. How to perform Gamma and Degamma processing is common knowledge for a person skilled in the art and is not described here. The other steps shown in fig. 3B are the same as those of fig. 3A, which have been described above and are not repeated here.
Fig. 6 is a flowchart of a video stream dithering method according to a generated lookup table according to another embodiment of the present invention. In step S602, R/G/B channel data for video stream dithering processing, a current FRAME number frame_cur of the R/G/B channel data, a FRAME refresh interval frame_n, a sub-pixel ROW number cur_row, and a column number cur_col are acquired.
Next, in step S604, a lookup table is determined according to the obtained R/G/B channel data and the current FRAME number frame_cur, the FRAME refresh interval frame_n, the subpixel ROW number cur_row, and the column number cur_col.
For example, as described in the sections of fig. 4A, 4B, 5A, and 5B, 15×16=240 look-up tables have been generated; wherein each frame of video stream corresponds to 15 kinds of lookup tables.
After the current frame number FRAME_CUR and the frame refresh interval FRAME_N for the video stream dithering process have been acquired, assume the acquired current frame number FRAME_CUR is the 34th frame and the frame refresh interval FRAME_N equals 16 frames. Taking 34 modulo 16 gives frame 2, so the 15 lookup tables belonging to the 2nd frame are found.
The position of the sub-pixel in each frame of image is acquired according to the sub-pixel row number CUR_ROW and column number CUR_COL. In this embodiment, only this way of acquiring the sub-pixel position is described. However, it should be understood by those skilled in the art that, after a frame of image is divided into a plurality of BLOCKs, the position of a sub-pixel may be defined by various coordinate schemes; how a sub-pixel can be defined and looked up in a frame of image according to its coordinates is within the knowledge of those skilled in the art and is not explained further here.
One of the 15 lookup tables belonging to the 2nd frame is then selected according to the decimal value corresponding to the discarded last 4 bits of binary data of the sub-pixel. Assuming the ratio corresponding to the discarded last 4 bits is 5/16, the lookup table whose "lighting ratio" is 5/16 among the 15 lookup tables belonging to the 2nd frame is selected.
Next, in step S606, it is determined whether the sub-pixel needs a carry according to the lookup table. That is, according to the sub-pixel row number CUR_ROW and column number CUR_COL, the carry flag at the same position in the 5/16 "lighting ratio" lookup table among the 15 lookup tables belonging to the 2nd frame is read to obtain the carry flag of the sub-pixel. For example, if the obtained sub-pixel row number CUR_ROW and column number CUR_COL are row 3 and column 5, the carry flag of the sub-pixel is obtained by looking up row 3, column 5 of that 5/16 lookup table.
Next, in step S608, when the carry flag of the sub-pixel in the lookup table is found to be "1" (i.e., the judgment is yes), the carry operation is performed. For example, if a sub-pixel is 0110001001 after discarding its last four bits, the decimal value of the discarded 4 bits is 5, and the carry flag of the sub-pixel in the 5/16 lookup table of the 2nd frame is "1", then according to the carry processing 1 is added, and the value becomes 0110001010 (0110001001 + 1).
In contrast, in step S610, when the carry flag of the sub-pixel in the lookup table is found to be "0" (i.e., the judgment is no), the carry operation is not performed. For example, if a sub-pixel is 0110001001 after discarding its last four bits, the decimal value of the discarded 4 bits is 5, and the carry flag of the sub-pixel in the 5/16 lookup table of the 2nd frame is "0", the value remains 0110001001.
Finally, in step S612, the dithering process is completed, and 10bits of image data is output.
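Putting steps S602-S612 together, a minimal sketch of the per-sub-pixel decision (illustrative only: the 14-bit input, the table layout and the handling of a discarded value of 0 follow the examples above; tables[f][k] is a hypothetical container for the 15 lookup tables of each frame, and the final clamp to the 10-bit range is an added safeguard, not a step stated in the patent):

def dither_subpixel(value_14bit: int, frame_cur: int, row: int, col: int,
                    channel: int, tables, frame_n: int = 16) -> int:
    """Dither one sub-pixel from 14 bits down to 10 bits using carry-flag lookup tables.

    channel      : 0 for R, 1 for G, 2 for B
    tables[f][k] : assumed to be the 16x48 carry-flag table of frame f
                   (f = frame number modulo frame_n) for discarded value k (1..15)
    """
    high_10 = value_14bit >> 4          # the 10 bits that are kept
    low_4 = value_14bit & 0xF           # the discarded 4 bits; selects the table

    if low_4 == 0:                      # value 0: discard directly, no carry needed
        return high_10

    frame = frame_cur % frame_n         # position within the dithering period
    block_row = row % 16                # sub-pixel position inside its BLOCK
    block_col = (col % 16) * 3 + channel
    if tables[frame][low_4][block_row][block_col]:   # carry flag '1' -> carry
        high_10 = min(high_10 + 1, 0x3FF)            # clamp to the 10-bit range
    return high_10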
It should be noted that, in the video-stream-based dithering processing method provided by the invention, each frame of video stream image data is dithered according to the lookup table corresponding to that frame within a frame refresh interval period. Meanwhile, within a frame of video stream image data, sub-pixels at different positions are dithered differently depending on the decimal value of their discarded last 4 bits. Even when, over several frame refresh interval periods, the current frame number FRAME_CUR maps to the same frame after taking the remainder with the frame refresh interval, one frame still contains 15 different "lighting" patterns, so the dithering of the video stream image data pixels remains random and dynamic.
For example, assume the current frame numbers FRAME_CUR of an existing video stream are frame 2 and frame 18. With a frame refresh interval of 16 frames, both the 2nd and the 18th frame select among the 15 lookup tables corresponding to frame 2. However, because the decimal values of the last 4 bits at the same sub-pixel positions differ between the 2nd and the 18th frame, different tables among the 15 lookup tables are selected (for example, if the decimal value of the discarded 4 bits of the first-row, first-column R sub-pixel is 2 in the 2nd frame and 8 in the 18th frame, the two lookup tables with "lighting ratios" 2/16 and 8/16 are selected respectively).
In the embodiments described in figs. 2 to 6, the dithering decides whether an add-1 (carry) operation is required when discarding 4 bits. For example, the decimal values corresponding to the last 4 bits of binary data may be 1, 2, 3, ..., 15, giving 15 cases. According to the 15 × 16 = 240 lookup tables, it is determined whether the retained high 10-bit binary value of each sub-pixel in each frame of image needs an add-1 operation after its 4 bits are discarded.
In another embodiment of the present invention, the above determination of whether a carry operation is required may use a stepped (gear-stage) carry according to the value of the discarded last 4 bits. For example, the decimal values corresponding to the last 4 bits of binary data may be 1, 2, ..., 15, giving 15 cases. An add-1 operation may be performed for the "lighting ratios" 1/16, 2/16 and 3/16; 4/16, 5/16 and 6/16 use a step of 2; and so on, up to 13/16, 14/16 and 15/16 using a step of 5. More specifically, assume a sub-pixel whose value after discarding 4 bits is 100111001101 and whose discarded last 4 bits correspond to the ratio 6/16. If the sub-pixel's flag in the lookup table is "1", then according to the embodiments described in figs. 2 to 5 the sub-pixel becomes 100111001110 after carry (i.e., 100111001101 + 1). Under the stepped (gear-stage) carry, likewise assuming the sub-pixel has a carry flag in the lookup table, the sub-pixel becomes 100111001111 after carry (i.e., 100111001101 + 10 in binary, a step of 2).
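A minimal sketch of one step mapping consistent with the grouping above (1/16-3/16 → step 1, 4/16-6/16 → step 2, ..., 13/16-15/16 → step 5); the ceiling formula is an assumption for illustration and is not stated in the patent:

def gear_stage_step(low_4: int) -> int:
    """Carry step for the stepped (gear-stage) variant.

    low_4 is the decimal value (1..15) of the discarded 4 bits; values are
    grouped three at a time, so the step equals ceil(low_4 / 3).
    """
    return (low_4 + 2) // 3

# Ratio 6/16 falls in the second group, so a flagged sub-pixel is carried by 2:
assert gear_stage_step(6) == 2
assert 0b100111001101 + gear_stage_step(6) == 0b100111001111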
In another embodiment of the present invention, the lookup table may be a carry-data-amount lookup table. For example, the decimal values corresponding to the last 4 bits of binary data may be 1, 2, ..., 15, giving 15 cases. Every 16 frames form one dithering period, and there are 15 × 16 = 240 lookup tables in a dithering period. The lookup table in this embodiment is not a carry-flag lookup table but a lookup table of the amount of data to carry. Specifically, assume a sub-pixel whose value after discarding 4 bits is 100111001101 and whose discarded last 4 bits correspond to the ratio 6/16. If the carry data amount of the sub-pixel in the lookup table is "3", then according to this embodiment the sub-pixel becomes 100111010000 after carry (i.e., 100111001101 + 11 in binary).
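A short sketch of the carry-data-amount variant (illustrative only; amount_tables[f][k] is a hypothetical container holding a per-position carry amount instead of a 0/1 flag):

def carry_by_amount(value_after_discard: int, frame: int, k: int,
                    block_row: int, block_col: int, amount_tables) -> int:
    """Carry-data-amount variant: add the table entry instead of a fixed 1."""
    amount = amount_tables[frame][k][block_row][block_col]  # e.g. 3 in the example above
    return value_after_discard + amount

# With a carry amount of 3: 100111001101 + 11 (binary) = 100111010000.
assert 0b100111001101 + 3 == 0b100111010000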
With the above method, the lookup table for each sub-pixel can be determined and the carry flag judged from it. In another embodiment of the present invention, however, the 15 × 16 = 240 lookup tables may be stored in a compacted form to simplify data storage, in which case the manner of determining the lookup table changes accordingly with the chosen data storage management. Such data storage management of the lookup tables, and the corresponding changes to how a lookup table is determined, belong to common knowledge of a person skilled in the art, and any such variant that does not depart from the technical spirit of the present invention falls within the protection scope of the present invention.
The carry flag lookup table, the stepped (gear-stage) carry flag lookup table, and the carry-data-amount lookup table may be generated and stored in advance or at the time the dithering operation is executed.
Fig. 7A is a schematic diagram of a hardware module of an apparatus for dithering a video stream according to an embodiment of the invention. As shown in fig. 7A, the dithering apparatus includes an image processing interface unit module 700, a dithering processing module 702, a lookup table unit module 704, and a display module 708.
The image processing interface unit module 700 is coupled to the dithering processing module 702, and the image processing interface unit module 700 acquires the image data for video stream dithering. The image data includes data used for image processing, such as the current frame number FRAME_CUR of the video stream, color channel data (also called color channel pixel values), and pixel position data (i.e., the sub-pixel row number CUR_ROW and column number CUR_COL, or other coordinates that can define the pixel position).
The lookup table unit module 704 may generate and store the lookup tables corresponding to the video stream, randomly generated based on the image data, either in advance or while the dithering operation is being executed.
The lookup table unit module 704 obtains the binary data corresponding to each R/G/B channel of the frame to be processed. For example, if the image in the video stream is 14-bit data, then when the data is mapped to 10-bit data and 4 bits of binary data must be discarded, the lookup table unit module 704 randomly generates the lookup tables corresponding to the video stream based on the discarded 4-bit binary data. Specifically, when the 14-bit data of the frame to be processed is 01100010011010, the lookup table unit module 704 obtains the last 4 bits of binary data in the frame data, namely 1010.
The lookup table unit module 704 determines the number of lookup tables based on the binary data to be shifted out of the color channel data; the binary data to be shifted out are converted into a decimal value, which is then used to determine the number of lookup tables; the number of carry flags on each lookup table is determined based on the values within the decimal value range; and the plurality of lookup tables are generated correspondingly according to the number of lookup tables and the number of carry flags.
The dithering processing module 702 judges, according to the carry flags of the lookup table, whether the color channel data of the video stream need to be carried, completes the dithering according to the carry flags, and outputs the dithered color channel data; the video stream performs dithering on the color channel data of each frame of image with the frame refresh interval as a period.
For how the lookup table unit module 704 generates the lookup tables, and how the dithering processing module 702 obtains the corresponding sub-pixel carry flags and performs the corresponding dithering according to the lookup tables, please refer to the method portions of figs. 2-6, which are not repeated here.
In some embodiments, the dithering processing module 702 further includes a bit separation module 7022 and a carry module 7024. The bit separation module 7022 is configured to separate, for each R/G/B channel, the high bits of the data from the low bits to be shifted out. The carry module 7024 is configured to perform the carry operation on the R/G/B channel data. Specifically, the carry module 7024 can be an adder.
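As an illustration of the data path through these two modules, expressed as plain functions (a sketch only, not a hardware description):

def bit_separation(value_14bit: int) -> tuple[int, int]:
    """Bit separation module 7022: split a 14-bit channel value into the
    10 high bits to keep and the 4 low bits to be shifted out."""
    return value_14bit >> 4, value_14bit & 0xF

def carry_add(high_10: int, carry: int) -> int:
    """Carry module 7024: an adder that applies the carry decided by the lookup table."""
    return high_10 + carry

high, low = bit_separation(0b01100010011010)   # -> 0b0110001001 and 0b1010
assert carry_add(high, 1) == 0b0110001010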
The display module 708 is configured to display the dithered image data. The display module 708 may be an electronic display such as an LCD, LED or AMOLED display.
Fig. 7B is a schematic diagram of a hardware module of an apparatus for dithering a video stream according to another embodiment of the present invention. Compared to fig. 7A, fig. 7B adds a conversion processing module 706. The conversion processing module 706 is configured to convert the image data after the dithering process into image data suitable for the display module 708, and other modules are the same as those in fig. 7A, and will not be described again.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; the order of these sub-steps or stages is not necessarily sequential, and they may be executed in turn or alternately with at least some of the other steps, sub-steps or stages.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combined technical features, such combinations should be considered within the scope of this description.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail, but they should not be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the protection scope of the invention. Accordingly, the protection scope of the invention shall be subject to the appended claims.

Claims (24)

1. A method for video stream dithering processing, the method comprising:
acquiring image data for video stream dithering processing;
determining a lookup table according to the acquired image data;
judging, according to the carry flag of the lookup table, whether the color channel data of the video stream need to be carried;
completing the dithering according to the carry flag, and outputting the dithered color channel data;
wherein the video stream performs dithering on the color channel data of each frame of image with a frame refresh interval as a period.
2. The method of claim 1, wherein the number of lookup tables is determined based on binary data to be shifted out of the color channel data; the binary data to be shifted out of the color channel data are converted into a decimal value, which is then used to determine the number of lookup tables;
the number of carry flags on each lookup table is determined based on values within the decimal value range;
and the plurality of lookup tables are generated correspondingly according to the number of lookup tables and the number of carry flags.
3. The method of claim 2, wherein the number of look-up tables is the maximum of the decimal values multiplied by a frame refresh interval.
4. The method according to claim 2, wherein the number of carry flags corresponds to a ratio determined by the decimal value, and the number of carry flags on each lookup table is determined from the ratio.
5. The method of claim 4, wherein the ratio is equal to the decimal value divided by the size of the image block.
6. The method of claim 5, wherein the number of carry flags in the lookup table satisfies: within the size range of the image block, the number of carry flags of the sub-pixels in any row of the lookup table is consistent with the number of carry flags of the sub-pixels in the corresponding column.
7. The method of claim 6, wherein the number of carry flags in the lookup table satisfies: the image block is divided into a plurality of cells, and the number of carry flags in each cell is determined according to the ratio.
8. The method of claim 1, wherein the lookup table is determined based on a frame number of the image data, the frame number being taken modulo a frame refresh interval to obtain a remainder;
and the lookup table corresponding to the frame number is determined based on the remainder and the plurality of lookup tables.
9. The method of claim 5, wherein the carry flag is determined, according to the lookup table corresponding to the frame to be processed, based on the ratio.
10. The method of claim 1, wherein the carry means that, after the binary data of the color channel data are shifted, whether a carry is required is determined based on the lookup table.
11. The method of claim 1, wherein the lookup table is a stepped (gear-stage) carry flag lookup table.
12. The method of claim 1, wherein the look-up table is a carry data amount look-up table.
13. A video stream dithering processing apparatus, the apparatus comprising:
an image processing interface unit module, configured to acquire image data for video stream dithering processing;
a lookup table unit module, configured to determine a lookup table according to the acquired image data; and
a dithering processing module, configured to judge, according to the carry flag of the lookup table, whether the color channel data of the video stream need to be carried, complete the dithering based on the carry flag, and output the dithered color channel data;
wherein the video stream performs dithering on the color channel data of each frame of image with a frame refresh interval as a period.
14. The apparatus of claim 13, wherein the lookup table unit module determines the number of lookup tables based on binary data to be shifted out of the color channel data; the binary data to be shifted out of the color channel data are converted into a decimal value, which is then used to determine the number of lookup tables;
the number of carry flags on each lookup table is determined based on values within the decimal value range;
and the plurality of lookup tables are generated correspondingly according to the number of lookup tables and the number of carry flags.
15. The apparatus of claim 14, wherein the number of look-up tables is the maximum of the decimal values multiplied by a frame refresh interval.
16. The apparatus of claim 14, wherein the number of carry flags corresponds to a ratio determined by the decimal value, and the number of carry flags on each lookup table is determined based on the ratio.
17. The apparatus of claim 16, wherein the ratio is equal to the decimal value divided by the size of the image block.
18. The apparatus of claim 17, wherein the number of carry flags in the lookup table satisfies: within the size range of the image block, the number of carry flags of the sub-pixels in any row of the lookup table is consistent with the number of carry flags of the sub-pixels in the corresponding column.
19. The apparatus of claim 18, wherein the number of carry flags in the lookup table satisfies: the image block is divided into a plurality of cells, and the number of carry flags in each cell is determined according to the ratio.
20. The apparatus of claim 13, wherein the lookup table is determined based on a frame number of the image data, the frame number being taken modulo a frame refresh interval to obtain a remainder;
and the lookup table corresponding to the frame number is determined based on the remainder and the plurality of lookup tables.
21. The apparatus of claim 17, wherein the carry flag is determined, according to the lookup table corresponding to the frame to be processed, based on the ratio.
22. The apparatus of claim 13, wherein the carry means that, after the binary data of the color channel data are shifted, whether a carry is required is determined based on the lookup table.
23. The apparatus of claim 13, wherein the lookup table is a stepped (gear-stage) carry flag lookup table.
24. The apparatus of claim 13 wherein the look-up table is a carry data amount look-up table.

