CN113409719B - Video source display method, system, micro display chip and storage medium - Google Patents

Video source display method, system, micro display chip and storage medium

Info

Publication number
CN113409719B
CN113409719B
Authority
CN
China
Prior art keywords
pixel
sub
video source
data stream
data
Prior art date
Legal status
Active
Application number
CN202110954210.4A
Other languages
Chinese (zh)
Other versions
CN113409719A (en)
Inventor
孟雄飞
刘元开
何军
Current Assignee
Nanjing Xinshiyuan Electronics Co ltd
Original Assignee
Nanjing Xinshiyuan Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Xinshiyuan Electronics Co ltd
Priority to CN202110954210.4A
Publication of CN113409719A
Application granted
Publication of CN113409719B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/2007 - Display of intermediate tones
    • G09G3/2074 - Display of intermediate tones using sub-pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The application relates to a video source display method, a video source display system, a micro display chip and a storage medium. The method comprises the following steps: performing format conversion on an input video source data stream according to a display mode to obtain a converted data stream; performing pixel sampling on the converted data stream with a pre-configured sampling mode to obtain sub data frames; processing the sub data frames with a data recombination and fusion algorithm, in which part of the pixel information of a pixel point is fused onto adjacent or nearby physical pixel points for recombination, to obtain a processed data stream; and, after the processed data stream is buffered and parsed, outputting the pixel values of the physical pixel points for display. When displaying a video source, the micro display chip is thus not limited by its physical pixels: a low-resolution micro display chip can display a video source whose resolution is higher than its own, and can also be used in applications with special resolutions, which improves the universality of the micro display chip.

Description

Video source display method, system, micro display chip and storage medium
Technical Field
The present application relates to the field of micro display technologies, and in particular, to a method and a system for displaying a video source, a micro display chip, and a storage medium.
Background
With the rapid development of AR, VR and MR, micro display has become one of the key technologies. In the micro display field there are two display modes: spatial color and time-sequential color. Compared with traditional displays, micro displays have advantages such as small size and high resolution; however, as video source resolutions keep increasing, and considering the process, the physical pixel size, the cost and other factors, it is difficult to make the area of a micro display chip any smaller.
The most common display mode of an LCoS chip is RGB spatial-color display, in which any resolution pixel point in the RGB color space can represent one color. Time-sequential color instead exploits the persistence of human vision and rapidly displays the three primary colors in a time-shared manner, so that the human eye perceives a stable color image. However, current micro display chips basically only support displaying a video source of the corresponding resolution. Some methods can support displaying video sources of multiple resolutions: for example, the video display system for improving display resolution disclosed in patent application CN110136644A on 2019-08-16 retains the pixel information of the video source, but only performs 4-sample-mode display on sets of physical pixel points with certain spatial-color pixel structures; patent application CN111402781A, published on 2020-07-10, discloses a display system that reduces the display screen area by pixel space sampling, but it loses part of the pixel information of the video source during sampling and the effect is poor. For time-sequential color, pixel points correspond to physical pixel points one to one, with one pixel point per physical pixel point, so displaying beyond this one-to-one mapping requires breaking through the physical pixel limitation, and no related method exists.
Therefore, the video source resolutions supported by current micro display chips are relatively fixed, resulting in low universality.
Disclosure of Invention
In view of the above, it is desirable to provide a video source display method, system, micro display chip and storage medium capable of improving the versatility of the micro display chip.
A video source display method, the method comprising:
carrying out format conversion on an input video source data stream according to a display mode to obtain a converted data stream;
adopting a preset sampling mode to carry out pixel sampling on the converted data stream to obtain each sub data frame;
processing each sub data frame through a data recombination and fusion algorithm, fusing part of the pixel information of a pixel point onto physical pixel points adjacent or close to the pixel point for recombination, and obtaining a processed data stream;
and after the processed data stream is cached and analyzed, outputting the pixel value of the physical pixel point for displaying.
In one embodiment, the data recombination and fusion algorithm performs overlapping in the following manner:
the actual physical pixel point F (i, j) is corresponding to the pixel value V of the pixel point in each sub data framenAnd (i, j) performing recombination fusion operation on the overlapped area according to a mode that except the first sub data frame is completely overlapped, the rest sub data frames are partially overlapped, and obtaining the pixel value of the actual physical pixel point F (i, j).
In one embodiment, the remaining sub-data frames are partially overlapped in the following manner:
the overlap areas of the sub data frames in the transverse direction are (f-1)/f, (f-2)/f, (f-3)/f, …, 1/f, where f is the transverse value of the sampling mode;
the overlap areas of the sub data frames in the longitudinal direction are (g-1)/g, (g-2)/g, (g-3)/g, …, 1/g, g-1 sub data frames in total, where g is the longitudinal value of the sampling mode;
the overlap area in the oblique direction, at the intersection of the transverse and longitudinal directions, is the product of the transverse and longitudinal overlap areas, namely ((f-1)/f)*((g-1)/g), with f×g-f-g+1 sub data frames in total.
In one embodiment, the resolution of the video source data stream is greater than or equal to the resolution of the processed data stream.
In one embodiment, the converted data stream is any one of time sequential color and spatial color.
In one embodiment, the displayed pixel structure is any one of a '田' (field) shape, a delta shape, an L shape and a stripe shape.
In one embodiment, the sampling pattern is represented as f × g, where f and g are positive integers, f ≧ r/a, g ≧ s/b, where r × s represents the resolution of the video source data stream, and a × b represents the resolution of the processed data stream.
In one embodiment, the display format corresponding to the converted data stream is any one of spatial color and time-sequential color.
A video source display system, the system comprising:
the register configuration module is used for configuring a display mode and a sampling mode;
the signal format conversion module is used for carrying out format conversion on an input video source data stream according to a display mode to obtain a converted data stream;
the sampling control module is used for carrying out pixel sampling on the converted data stream by adopting a preset sampling mode to obtain each sub data frame;
the data recombination and fusion module is used for processing each sub data frame through the data recombination and fusion algorithm, fusing part of the pixel information of a pixel point onto physical pixel points adjacent or close to the pixel point for recombination, and obtaining a processed data stream;
the storage module is used for caching the processed data stream;
the analysis module is used for analyzing the processed data stream and outputting the pixel value of the physical pixel point;
and the micro display chip module is used for displaying according to the pixel values of the physical pixel points.
A micro display chip comprises a storage module, a signal format conversion module, a sampling control module, a data recombination and fusion module and a display module, and, when operating, implements the steps of the above video source display method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video source display method.
According to the video source display method, system, micro display chip and storage medium, format conversion is performed on an input video source data stream according to a display mode to obtain a converted data stream; pixel sampling is performed on the converted data stream with a pre-configured sampling mode to obtain sub data frames; the sub data frames are processed with a data recombination and fusion algorithm, in which part of the pixel information of a pixel point is fused onto adjacent or nearby physical pixel points for recombination, to obtain a processed data stream; and, after the processed data stream is buffered and parsed, the pixel values of the physical pixel points are output for display. When displaying a video source, the micro display chip is thus not limited by its physical pixels: a low-resolution micro display chip can display a video source whose resolution is higher than its own, and can also be used in applications with special resolutions, which improves the universality of the micro display chip.
Drawings
FIG. 1 is a flow chart illustrating a method for displaying a video source according to an embodiment;
FIG. 2 is a diagram illustrating the location of pixels in a 1920 × 1080 frame resolution video source, according to an embodiment;
FIG. 3 is a schematic diagram illustrating an arrangement position of sub-data frame 1 processed by a data reassembly and fusion algorithm after 2 × 2 sampling is performed in one embodiment;
FIG. 4 is a schematic diagram illustrating an arrangement position of sub-data frame 2 processed by a data reassembly fusion algorithm after 2 × 2 sampling is performed in one embodiment;
FIG. 5 is a schematic diagram illustrating an arrangement position of sub-data frame 3 processed by a data reassembly and fusion algorithm after 2 × 2 sampling is performed in one embodiment;
FIG. 6 is a schematic diagram illustrating an arrangement position of sub-data frames 4 processed by a data reassembly fusion algorithm after 2 × 2 sampling is performed in one embodiment;
FIG. 7 is a diagram illustrating the location of pixels in a 1920 × 1080 frame resolution video source, according to an embodiment;
FIG. 8 is a schematic diagram illustrating an arrangement position of sub-data frame 1 processed by a data reassembly fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 9 is a schematic diagram illustrating an arrangement position of sub-data frame 2 processed by a data reassembly fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 10 is a schematic diagram illustrating an arrangement position of sub-data frame 3 processed by a data reassembly and fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 11 is a schematic diagram illustrating an arrangement position of sub-data frames 4 processed by a data reassembly fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 12 is a schematic diagram illustrating an arrangement position of sub-data frames 5 processed by a data reassembly fusion algorithm after 3 × 3 sampling is performed according to an embodiment;
FIG. 13 is a schematic diagram illustrating an arrangement position of sub-data frames 6 processed by a data reassembly fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 14 is a schematic diagram illustrating an arrangement position of sub-data frames 7 processed by a data reassembly and fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 15 is a schematic diagram illustrating an arrangement position of sub-data frames 8 processed by a data reassembly fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 16 is a schematic diagram illustrating an arrangement position of sub-data frames 9 processed by a data reassembly and fusion algorithm after 3 × 3 sampling is performed in one embodiment;
FIG. 17 is a block diagram of a video source display system in accordance with one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The video source display method can be embedded in a driving module and implemented in cooperation with the micro display chip, or it can be embedded in the micro display chip itself and executed by the chip as part of it. The video source display method can be used for LCoS, OLED and similar micro displays.
In one embodiment, as shown in fig. 1, there is provided a video source display method comprising the steps of:
step S220, performing format conversion on the input video source data stream according to the display mode, and obtaining a converted data stream.
The display format corresponding to the converted data stream is either spatial color or time-sequential color. Format conversion includes conversion of signal types, such as HDMI to RGB, conversion of the input video source data stream into the required data format, and so on. The format conversion may be as follows: if the input video source data stream is a spatial-color stream and the micro display chip adopts a time-sequential-color display mode, the input stream is converted into a time-sequential-color data stream; if the input video source data stream is a time-sequential-color stream and the micro display chip adopts a spatial-color display mode, the input stream is converted into a spatial-color data stream; if the display mode adopted by the micro display chip already matches the format of the input video source data stream, no format conversion is needed.
A spatial-color pixel (resolution pixel) refers to a set of physical pixels with a certain pixel structure: one pixel corresponds to several physical pixels, and each physical pixel corresponds to an actual display circuit, which is the minimum unit of display. For time-sequential color, pixels correspond to physical pixels one to one.
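As a loose illustration of the spatial-color versus time-sequential-color distinction (not taken from the patent itself), the sketch below splits a spatial-color RGB frame into the three single-color fields that a time-sequential-color display would show in succession; the function name, the NumPy representation and the array shapes are assumptions made purely for illustration.

```python
import numpy as np

def spatial_to_time_sequential(frame_rgb: np.ndarray) -> list[np.ndarray]:
    """Split an H x W x 3 spatial-color frame into R, G and B fields.

    A time-sequential-color display shows the three fields one after
    another within a frame period; each field is a single-channel H x W
    image driven onto the same physical pixels.
    """
    assert frame_rgb.ndim == 3 and frame_rgb.shape[2] == 3
    return [frame_rgb[:, :, c] for c in range(3)]  # [R field, G field, B field]

# Example: one 1080 x 1920 RGB frame becomes three 1080 x 1920 fields.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
r_field, g_field, b_field = spatial_to_time_sequential(frame)
```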
Step S240, performing pixel sampling on the converted data stream by using a preconfigured sampling mode, to obtain each sub data frame.
The sampling mode used to collect the sub data frames is expressed as f × g and may be configured according to actual needs; various sampling modes can be selected, for example 2 × 2, 3 × 3 and so on, where f and g are positive integers, f ≧ r/a, g ≧ s/b, r × s is the resolution of the video source data stream, and a × b is the resolution of the processed data stream.
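The patent does not spell out the exact sampling rule, but an f × g pixel sampling of this kind is commonly realized as a polyphase split, in which sub data frame (p, q) keeps every g-th row starting at row p and every f-th column starting at column q. The sketch below records that assumed reading; the function name and the NumPy slicing are illustrative only.

```python
import numpy as np

def sample_sub_frames(frame: np.ndarray, f: int, g: int) -> list[np.ndarray]:
    """Split one frame into f*g sub data frames by f x g pixel sampling.

    Assumed rule: sub frame (p, q) collects rows p, p+g, p+2g, ... and
    columns q, q+f, q+2f, ...  For a 1080 x 1920 frame and f = g = 2 this
    yields four 540 x 960 sub data frames, matching the 2 x 2 example
    (960 x 540 expressed as width x height) in the description below.
    """
    h, w = frame.shape[:2]
    assert h % g == 0 and w % f == 0
    return [frame[p::g, q::f] for p in range(g) for q in range(f)]

frame = np.arange(1080 * 1920, dtype=np.float32).reshape(1080, 1920)
subs = sample_sub_frames(frame, f=2, g=2)  # 4 sub data frames, 540 rows x 960 columns each
```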
Step S260, each sub-data frame is processed through the data recombination and fusion algorithm, and partial pixel information of one pixel point is fused to a physical pixel point adjacent or close to the pixel point for recombination, so as to obtain a processed data stream.
For time-sequential color, because of the limitation of physical pixels, one piece of pixel information can only be displayed on one corresponding physical pixel point. In order to display more pixel information without making the information of adjacent pixel points overlap completely, the present application proposes a data recombination and fusion algorithm based on data recombination: part of the pixel information of a pixel point is fused onto adjacent or nearby physical pixel points for recombined display. The algorithm can be used for both time-sequential-color and spatial-color display, is not limited by the pixel structure, allows various sampling modes, and is not limited by the physical pixels, so the purpose of displaying a video source with a resolution higher than the chip's own resolution is achieved; to a certain extent the chip area and the power consumption can also be reduced. The resolution of the video source data stream is greater than or equal to the resolution of the processed data stream.
In one embodiment, the data recombination and fusion algorithm performs overlapping in the following manner: the pixel values Vn(i, j) of the corresponding pixel points in the sub data frames are overlapped onto the actual physical pixel point F(i, j), with the first sub data frame overlapping completely and the remaining sub data frames overlapping partially; a recombination and fusion operation is performed on the overlapped area to obtain the pixel value of the actual physical pixel point F(i, j).
In an embodiment, the remaining sub data frames are partially overlapped in the following manner: the overlap areas of the sub data frames in the transverse direction are (f-1)/f, (f-2)/f, (f-3)/f, …, 1/f, where f is the transverse value of the sampling mode; the overlap areas of the sub data frames in the longitudinal direction are (g-1)/g, (g-2)/g, (g-3)/g, …, 1/g, g-1 sub data frames in total, where g is the longitudinal value of the sampling mode; the overlap area in the oblique direction, at the intersection of the transverse and longitudinal directions, is the product of the transverse and longitudinal overlap areas, namely ((f-1)/f)*((g-1)/g), with f×g-f-g+1 sub data frames in total.
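To make the overlap rule concrete, the short sketch below simply enumerates the overlap fractions implied by an f × g sampling mode; it restates the rule above, with function and variable names chosen only for illustration.

```python
def overlap_fractions(f: int, g: int):
    """Enumerate the per-sub-frame overlap fractions of an f x g sampling mode.

    Transverse sub frames:   (f-1)/f, (f-2)/f, ..., 1/f   -> f-1 frames
    Longitudinal sub frames: (g-1)/g, (g-2)/g, ..., 1/g   -> g-1 frames
    Oblique sub frames: products of one transverse and one longitudinal
    fraction -> (f-1)*(g-1) = f*g - f - g + 1 frames.
    Together with the fully overlapping first sub frame this accounts
    for all f*g sub data frames.
    """
    transverse = [(f - k) / f for k in range(1, f)]
    longitudinal = [(g - k) / g for k in range(1, g)]
    oblique = [t * l for t in transverse for l in longitudinal]
    return transverse, longitudinal, oblique

t, l, o = overlap_fractions(3, 3)
# t == [2/3, 1/3], l == [2/3, 1/3], o == [4/9, 2/9, 2/9, 1/9]
```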
Step S280, after the processed data stream is cached and analyzed, the pixel value of the physical pixel point is output for display.
The displayed pixel structure can be any one of a '田' (field) shape, a delta shape, an L shape and a stripe shape, or another pixel structure; the pixel structure can be chosen according to the micro display chip and actual needs.
According to the video source display method, format conversion is performed on an input video source data stream according to a display mode to obtain a converted data stream; pixel sampling is performed on the converted data stream with a pre-configured sampling mode to obtain sub data frames; the sub data frames are processed with a data recombination and fusion algorithm, in which part of the pixel information of a pixel point is fused onto adjacent or nearby physical pixel points for recombination, to obtain a processed data stream; and, after the processed data stream is buffered and parsed, the pixel values of the physical pixel points are output for display. The micro display chip can thus display a video source whose resolution is higher than its own and can be used in applications with special resolutions, which improves the universality of the micro display chip; at the same time, the area of the micro display chip used by a system can be reduced and the power consumption of the system lowered.
In an embodiment, a video source display method is described by taking an example of selecting a 2 × 2 sampling mode and displaying a 1920 × 1080 resolution video source in a time-sequential color mode with 960 × 540 resolution, and the specific contents are as follows:
as shown in fig. 2, a schematic position diagram of a pixel point of a video source with a resolution of 1920 × 1080 of a frame is input, a sampling mode of 2 × 2 is selected for sampling, and 4 sub data frames are obtained, which are respectively sub data frame 1, sub data frame 2, sub data frame 3, and sub data frame 4, and the resolution of each sub data frame is 960 × 540. The pixel information of each pixel point of the sub-data frame 1 is arranged according to the schematic of fig. 3, the pixel information of each pixel point of the sub-data frame 2 is arranged according to the schematic of fig. 4, the pixel information of each pixel point of the sub-data frame 3 is arranged according to the schematic of fig. 5, and the pixel information of each pixel point of the sub-data frame 4 is arranged according to the schematic of fig. 6.
A pixel point in a sub data frame is denoted Pn(i, j), where i is the row number, j is the column number, and n is the index of the sub data frame; the pixel value of pixel point Pn(i, j) is denoted Vn(i, j); the actual physical pixel points of the micro display chip are denoted F(i, j).
As shown in FIG. 3, the pixel values V1(i, j) of sub data frame 1 fall on the physical pixel points F(i, j) in one-to-one correspondence. As shown in FIG. 4, one piece of pixel information in sub data frame 2 spans two adjacent columns of physical pixel points; the overlapping part (denoted by the number 1/2) of an actual physical pixel point and a pixel point of the sub data frame in FIG. 4 means that 1/2 of the pixel value V2(i, j) of that pixel point of sub data frame 2 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P2(1,1) in row 1, column 1 of sub data frame 2 as an example, 1/2 of its pixel value V2(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, and the other 1/2 on F(1,2) in row 1, column 2. At the column tail, the pixel value displayed on F(1,960) is V2(1,959)/2 plus V2(1,960)/2, the other V2(1,960)/2 being discarded; the 1/2 missing at the column head F(1,1) can be filled with the value V2(1,1)/2, and so on.
As shown in fig. 5, one piece of pixel information in sub data frame 3 spans two adjacent rows of physical pixel points; the overlapping part (denoted by the number 1/2) means that 1/2 of the pixel value V3(i, j) of that pixel point of sub data frame 3 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P3(1,1) in row 1, column 1 of sub data frame 3 as an example, 1/2 of its pixel value V3(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, and the other 1/2 on F(2,1) in row 2, column 1. At the row tail, the pixel value displayed on F(540,1) is V3(539,1)/2 plus V3(540,1)/2, the other V3(540,1)/2 being discarded; the 1/2 missing at the row head F(1,1) can be filled with the value V3(1,1)/2, and so on.
As shown in fig. 6, one piece of pixel information in sub data frame 4 spans two adjacent rows and two adjacent columns of physical pixel points; the overlapping part (denoted by the number 1/4) means that 1/4 of the pixel value V4(i, j) of that pixel point of sub data frame 4 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P4(1,1) in row 1, column 1 of sub data frame 4 as an example, the first 1/4 of its pixel value V4(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, the second 1/4 on F(1,2) in row 1, column 2, the third 1/4 on F(2,1) in row 2, column 1, and the fourth 1/4 on F(2,2) in row 2, column 2. At the row tail, the pixel value displayed on F(540,1) is V4(539,1)/4 plus V4(540,1)/4, the other V4(540,1)/2 being discarded; the 3/4 missing at the row head F(1,1) can be filled with the value V4(1,1)/4 for each missing share, and so on.
in principle, the display of the data of the entry 1 frame can be completed by sequentially displaying 4 sub-data frames according to the arrangement mode of the embodiment, but one pixel point is used as a physical minimum display unit, the sub-data frames are not recombined and fused, and one pixel point cannot span two pixel points to be displayed. Therefore, the present application proposes a column data reorganization and fusion algorithm, which performs fusion processing on the data of each sub data frame falling on the actual physical pixel point with the resolution of 960 × 540 according to the schematic diagrams of fig. 3 to 6, to obtain a display effect equivalent to 4 sub data frames, where the pixel value calculation formula of the actual physical pixel point is:
F(i,j) = ( V1(i,j)
         + V2(i,j)/2 + V2(i,j-1)/2
         + V3(i,j)/2 + V3(i-1,j)/2
         + V4(i,j)/4 + V4(i,j-1)/4 + V4(i-1,j)/4 + V4(i-1,j-1)/4 ) / 4
Here the weight of each sub data frame is set to 1/4; the weight coefficients may be set according to actual conditions, and when the sub data frames are processed with the data recombination and fusion algorithm the output weight of each sub data frame may be set individually as needed.
Taking F (2,2) as an example, the actual pixel value of the physical pixel point is:
F(2,2) = ( V1(2,2)
         + V2(2,2)/2 + V2(2,1)/2
         + V3(2,2)/2 + V3(1,2)/2
         + V4(2,2)/4 + V4(2,1)/4 + V4(1,2)/4 + V4(1,1)/4 ) / 4
This completes the display of a 1920 × 1080 resolution video source at 960 × 540 resolution: a low-resolution micro display chip is used to display a high-resolution video source, and the display effect is better than what the chip's own resolution would normally allow.
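The 2 × 2 fusion above can be written compactly. The sketch below applies the pixel-value formula given above with equal weights of 1/4, and fills the missing contributions at the first row and first column with the pixel's own value, which is one reading of the "head" handling in the description; the helper names and the edge-replication detail are assumptions, not the patent's literal implementation.

```python
import numpy as np

def fuse_2x2(v1, v2, v3, v4):
    """Fuse four 540 x 960 sub data frames into one physical frame F.

    F(i,j) = ( V1(i,j)
             + V2(i,j)/2 + V2(i,j-1)/2
             + V3(i,j)/2 + V3(i-1,j)/2
             + V4(i,j)/4 + V4(i,j-1)/4 + V4(i-1,j)/4 + V4(i-1,j-1)/4 ) / 4

    Out-of-range neighbours at the first row/column are replaced by the
    pixel's own value (edge replication), an assumed reading of the
    "head" handling described above; shares that would fall past the
    last row/column are simply never gathered, i.e. they are discarded.
    """
    def left(v):  # V(i, j-1) with edge replication at the first column
        return np.concatenate([v[:, :1], v[:, :-1]], axis=1)

    def up(v):    # V(i-1, j) with edge replication at the first row
        return np.concatenate([v[:1, :], v[:-1, :]], axis=0)

    return (v1
            + v2 / 2 + left(v2) / 2
            + v3 / 2 + up(v3) / 2
            + v4 / 4 + left(v4) / 4 + up(v4) / 4 + up(left(v4)) / 4) / 4

subs = [np.random.rand(540, 960).astype(np.float32) for _ in range(4)]
physical = fuse_2x2(*subs)  # 540 x 960 array driven onto the 960 x 540 chip
```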
In an embodiment, a video source display method is described by taking an example of selecting a 3 × 3 sampling mode and displaying a 1920 × 1080 resolution video source in a time-sequential color mode with 640 × 360 resolution, and the specific contents are as follows:
as shown in fig. 7, a schematic position diagram of a pixel point of a video source with a resolution of 1920 × 1080 input one frame is shown, a 3 × 3 sampling mode is selected for sampling, and 9 sub data frames are obtained, which are respectively sub data frame 1, sub data frame 2, sub data frame 3, sub data frame 4, sub data frame 5, sub data frame 6, sub data frame 7, sub data frame 8, and sub data frame 9, and the resolution of each sub data frame is 640 × 360. The pixel information of each pixel point of the sub-data frame 1 is arranged according to the schematic of fig. 8, the pixel information of each pixel point of the sub-data frame 2 is arranged according to the schematic of fig. 9, the pixel information of each pixel point of the sub-data frame 3 is arranged according to the schematic of fig. 10, the pixel information of each pixel point of the sub-data frame 4 is arranged according to the schematic of fig. 11, the pixel information of each pixel point of the sub-data frame 5 is arranged according to the schematic of fig. 12, the pixel information of each pixel point of the sub-data frame 6 is arranged according to the schematic of fig. 13, the pixel information of each pixel point of the sub-data frame 7 is arranged according to the schematic of fig. 14, the pixel information of each pixel point of the sub-data frame 8 is arranged according to the schematic of fig. 15, and the pixel information of each pixel point of the sub-data frame 9 is arranged according to the schematic of fig. 16.
A pixel point in a sub data frame is denoted Pn(i, j), where i is the row number, j is the column number, and n is the index of the sub data frame; the pixel value of pixel point Pn(i, j) is denoted Vn(i, j); the actual physical pixel points of the micro display chip are denoted F(i, j).
As shown in FIG. 8, the pixel values V1(i, j) of sub data frame 1 fall on the physical pixel points F(i, j) in one-to-one correspondence. As shown in FIG. 9, one piece of pixel information in sub data frame 2 spans two adjacent columns of physical pixel points; the overlapping part (denoted by the number 2/3 or 1/3) of an actual physical pixel point and a pixel point of the sub data frame means that 2/3 or 1/3 of the pixel value V2(i, j) of that pixel point of sub data frame 2 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P2(1,1) in row 1, column 1 of sub data frame 2 as an example, 2/3 of its pixel value V2(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, and 1/3 on F(1,2) in row 1, column 2. At the column tail, the pixel value displayed on F(1,640) is V2(1,639)/3 plus V2(1,640)*2/3, the other V2(1,640)/3 being discarded; the 1/3 missing at the column head F(1,1) can be filled with the value V2(1,1)/3, and so on.
As shown in fig. 10, one piece of pixel information in sub data frame 3 spans two adjacent columns of physical pixel points; the overlapping part (denoted by the number 2/3 or 1/3) means that 2/3 or 1/3 of the pixel value V3(i, j) of that pixel point of sub data frame 3 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P3(1,1) in row 1, column 1 of sub data frame 3 as an example, 1/3 of its pixel value V3(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, and 2/3 on F(1,2) in row 1, column 2. At the column tail, the pixel value displayed on F(1,640) is V3(1,639)*2/3 plus V3(1,640)/3, the other V3(1,640)*2/3 being discarded; the 2/3 missing at the column head F(1,1) can be filled with the value V3(1,1)*2/3, and so on.
As shown in fig. 11, one piece of pixel information in sub data frame 4 spans two adjacent rows of physical pixel points; the overlapping part (denoted by the number 2/3 or 1/3) means that 2/3 or 1/3 of the pixel value V4(i, j) of that pixel point of sub data frame 4 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P4(1,1) in row 1, column 1 of sub data frame 4 as an example, 2/3 of its pixel value V4(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, and the other 1/3 on F(2,1) in row 2, column 1. At the row tail, the pixel value displayed on F(360,1) is V4(359,1)/3 plus V4(360,1)*2/3, the other V4(360,1)/3 being discarded; the 1/3 missing at the row head F(1,1) can be filled with the value V4(1,1)/3, and so on.
As shown in fig. 12, one piece of pixel information in sub data frame 5 spans two adjacent rows of physical pixel points; the overlapping part (denoted by the number 2/3 or 1/3) means that 2/3 or 1/3 of the pixel value V5(i, j) of that pixel point of sub data frame 5 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P5(1,1) in row 1, column 1 of sub data frame 5 as an example, 1/3 of its pixel value V5(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, and the other 2/3 on F(2,1) in row 2, column 1. At the row tail, the pixel value displayed on F(360,1) is V5(359,1)*2/3 plus V5(360,1)/3, the other V5(360,1)*2/3 being discarded; the 2/3 missing at the row head F(1,1) can be filled with the value V5(1,1)*2/3, and so on.
As shown in fig. 13, one piece of pixel information in sub data frame 6 spans two adjacent rows and two adjacent columns of physical pixel points; the overlapping part (denoted by the number 4/9, 2/9 or 1/9) means that 4/9, 2/9 or 1/9 of the pixel value V6(i, j) of that pixel point of sub data frame 6 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P6(1,1) in row 1, column 1 of sub data frame 6 as an example, the first share 4/9 of its pixel value V6(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, the second share 2/9 on F(1,2) in row 1, column 2, the third share 2/9 on F(2,1) in row 2, column 1, and the fourth share 1/9 on F(2,2) in row 2, column 2. At the row tail, the pixel value displayed on F(360,1) is V6(359,1)*2/9 plus V6(360,1)*4/9, the other V6(360,1)*3/9 being discarded; the 5/9 missing at the row head F(1,1) can be filled with the value V6(1,1)*5/9, and so on.
As shown in fig. 14, one piece of pixel information in sub data frame 7 spans two adjacent rows and two adjacent columns of physical pixel points; the overlapping part (denoted by the number 4/9, 2/9 or 1/9) means that 4/9, 2/9 or 1/9 of the pixel value V7(i, j) of that pixel point of sub data frame 7 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P7(1,1) in row 1, column 1 of sub data frame 7 as an example, the first share 2/9 of its pixel value V7(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, the second share 4/9 on F(1,2) in row 1, column 2, the third share 1/9 on F(2,1) in row 2, column 1, and the fourth share 2/9 on F(2,2) in row 2, column 2. At the row tail, the pixel value displayed on F(360,1) is V7(359,1)/9 plus V7(360,1)*2/9, the other V7(360,1)*3/9 being discarded; the 7/9 missing at the row head F(1,1) can be filled with the value V7(1,1)*7/9, and so on.
As shown in fig. 15, one piece of pixel information in sub data frame 8 spans two adjacent rows and two adjacent columns of physical pixel points; the overlapping part (denoted by the number 4/9, 2/9 or 1/9) of an actual physical pixel point and a pixel point of the sub data frame means that 4/9, 2/9 or 1/9 of the pixel value V8(i, j) of that pixel point of sub data frame 8 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P8(1,1) in row 1, column 1 of sub data frame 8 as an example, the first share 2/9 of its pixel value V8(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, the second share 1/9 on F(1,2) in row 1, column 2, the third share 4/9 on F(2,1) in row 2, column 1, and the fourth share 2/9 on F(2,2) in row 2, column 2. At the row tail, the pixel value displayed on F(360,1) is V8(359,1)*4/9 plus V8(360,1)*2/9, the other V8(360,1)*3/9 being discarded; the 7/9 missing at the row head F(1,1) can be filled with the value V8(1,1)*7/9, and so on.
As shown in FIG. 16, one piece of pixel information in sub data frame 9 spans two adjacent rows and two adjacent columns of physical pixel points; the actual physical pixel points and the sub data frame pixel points are distinguished by different markers in the figure, and the overlapping part (denoted by the number 4/9, 2/9 or 1/9) means that 4/9, 2/9 or 1/9 of the pixel value V9(i, j) of that pixel point of sub data frame 9 is displayed on the actual physical pixel point F(i, j). Taking the pixel point P9(1,1) in row 1, column 1 of sub data frame 9 as an example, the first share 1/9 of its pixel value V9(1,1) should be displayed on the actual physical pixel point F(1,1) in row 1, column 1, the second share 2/9 on F(1,2) in row 1, column 2, the third share 2/9 on F(2,1) in row 2, column 1, and the fourth share 4/9 on F(2,2) in row 2, column 2. At the row tail, the pixel value displayed on F(360,1) is V9(359,1)*2/9 plus V9(360,1)/9, the other V9(360,1)*3/9 being discarded; the 8/9 missing at the row head F(1,1) can be filled with the value V9(1,1)*8/9, and so on.
In principle, the display of one input frame of data could be completed by displaying the 9 sub data frames in sequence according to the arrangement of the above embodiment; however, a physical pixel point is the minimum physical display unit, the sub data frames are not recombined and fused, and one pixel point cannot be displayed across two physical pixel points. The present application therefore proposes a data recombination and fusion algorithm: the data of each sub data frame falling on the actual physical pixel points with a resolution of 640 × 360 are fused according to the schematic diagrams of fig. 8 to fig. 16, giving a display effect equivalent to the 9 sub data frames. The actually displayed pixel value (i.e., the pixel value of the physical pixel point) is calculated as:
F(i,j) = ( V1(i,j)
         + V2(i,j)*2/3 + V2(i,j-1)/3
         + V3(i,j)/3 + V3(i,j-1)*2/3
         + V4(i,j)*2/3 + V4(i-1,j)/3
         + V5(i,j)/3 + V5(i-1,j)*2/3
         + V6(i,j)*4/9 + V6(i,j-1)*2/9 + V6(i-1,j)*2/9 + V6(i-1,j-1)/9
         + V7(i,j)*2/9 + V7(i,j-1)*4/9 + V7(i-1,j)/9 + V7(i-1,j-1)*2/9
         + V8(i,j)*2/9 + V8(i,j-1)/9 + V8(i-1,j)*4/9 + V8(i-1,j-1)*2/9
         + V9(i,j)/9 + V9(i,j-1)*2/9 + V9(i-1,j)*2/9 + V9(i-1,j-1)*4/9 ) / 9
Here the weight of each sub data frame is set to 1/9; the weight coefficients may be set according to actual conditions, and when the sub data frames are processed with the data recombination and fusion algorithm the output weight of each sub data frame may be set individually as needed.
Taking F (2,2) as an example, the actually displayed pixel values are:
F(2,2) = ( V1(2,2)
         + V2(2,2)*2/3 + V2(2,1)/3
         + V3(2,2)/3 + V3(2,1)*2/3
         + V4(2,2)*2/3 + V4(1,2)/3
         + V5(2,2)/3 + V5(1,2)*2/3
         + V6(2,2)*4/9 + V6(2,1)*2/9 + V6(1,2)*2/9 + V6(1,1)/9
         + V7(2,2)*2/9 + V7(2,1)*4/9 + V7(1,2)/9 + V7(1,1)*2/9
         + V8(2,2)*2/9 + V8(2,1)/9 + V8(1,2)*4/9 + V8(1,1)*2/9
         + V9(2,2)/9 + V9(2,1)*2/9 + V9(1,2)*2/9 + V9(1,1)*4/9 ) / 9
This completes the display of a 1920 × 1080 resolution video source at 640 × 360 resolution: a low-resolution micro display chip is used to display a high-resolution video source, and the display effect is better than what the chip's own resolution would normally allow.
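More generally, the recombination and fusion for an arbitrary f × g sampling mode can be read as each sub data frame contributing to F(i, j) through a small kernel built from its transverse and longitudinal overlap fractions, with the contributions then averaged over all f·g sub data frames. The sketch below records that reading; the mapping of sub data frame numbers to offsets (p, q), the uniform 1/(f·g) weights and the function name are assumptions generalized from the two worked examples.

```python
import numpy as np

def gather_kernel(p: int, q: int, f: int, g: int) -> np.ndarray:
    """2 x 2 gather kernel for the sub data frame with longitudinal offset p
    and transverse offset q (0 <= p < g, 0 <= q < f; offset 0 = full overlap).

    In the contribution of this sub data frame to F(i, j):
      kernel[0, 0] multiplies V(i, j),    kernel[0, 1] multiplies V(i, j-1),
      kernel[1, 0] multiplies V(i-1, j),  kernel[1, 1] multiplies V(i-1, j-1).
    """
    hx = (f - q) / f  # transverse share kept at column j
    vy = (g - p) / g  # longitudinal share kept at row i
    return np.array([[vy * hx,       vy * (1 - hx)],
                     [(1 - vy) * hx, (1 - vy) * (1 - hx)]])

# For f = g = 3 and offsets (1, 1), corresponding to sub data frame 6 in the
# example, the kernel is [[4/9, 2/9], [2/9, 1/9]], matching the coefficients
# of the V6 terms in the formula above.
print(gather_kernel(1, 1, 3, 3))
```

Averaging all f·g such per-sub-frame contributions with weight 1/(f·g) reproduces the 1/4-weighted and 1/9-weighted formulas of the two embodiments above.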
It should be noted that the sampling control module may select from a plurality of sampling modes, including but not limited to 2 × 2, 3 × 3 and 2 × 3 sub-frame sampling. Correspondingly, when the video source resolution is m × n, a micro display chip with an a × b resolution can be selected for display, where m/a ≥ 1 and n/b ≥ 1.
The above embodiments are described using time-sequential-color display, but the present application can also be used for spatial-color display; the supported pixel structures are not limited to the '田' (field) shape, delta shape, L shape and stripe shape, so more different actual requirements can be met.
To improve the display effect and achieve a display effect better than the chip's own resolution, as much of the original pixel information as possible needs to be kept during display. For time-sequential color, however, because of the limitation of physical pixels, one piece of pixel information can only be displayed on one corresponding physical pixel point. In order to display more pixel information without making the information of adjacent pixel points overlap completely, the present application proposes the data recombination and fusion algorithm, which fuses part of the pixel information of a pixel point onto adjacent or nearby physical pixel points for recombined display. The method can be used for time-sequential-color and spatial-color display, allows various sampling modes, and is not limited by the physical pixels, achieving the purpose of displaying high resolution at low resolution; it is suitable for displaying a video source with a resolution higher than the chip's own on a low-resolution micro display chip, and can also be used in applications with special resolutions.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 17, there is provided a video source display system comprising: the device comprises a register configuration module, a signal format conversion module, a sampling control module, a data recombination and fusion module, a storage module, an analysis module and a micro display chip module.
And the register configuration module is used for configuring a display mode and a sampling mode.
And the signal format conversion module is used for carrying out format conversion on the input video source data stream according to the display mode to obtain a converted data stream.
And the sampling control module is used for carrying out pixel sampling on the converted data stream by adopting a preset sampling mode to obtain each sub data frame.
And the data recombination and fusion module is used for processing each sub data frame through the data recombination and fusion algorithm, fusing part of the pixel information of a pixel point onto physical pixel points adjacent or close to the pixel point for recombination, and obtaining a processed data stream.
And the storage module is used for caching the processed data stream.
And the analysis module is used for analyzing the processed data stream and outputting the pixel value of the physical pixel point.
And the micro display chip module is used for displaying according to the pixel values of the physical pixel points.
The driving module can support, but is not limited to, RGB888, MIPI, HDMI and VGA interfaces; through the signal conversion module the driving module converts the pixel values into the signals needed internally, and the required data are selected by calculation and output to the following module. Buffering the processed data stream in the storage module can raise the frame rate of the output port and reduce the area used by the storage module. The driving module and the micro display chip module support, but are not limited to, video sources of 3840 × 2160, 2560 × 1440, 1920 × 1080 and 1280 × 720 resolution.
For specific limitations of the video source display system, reference may be made to the above limitations of the video source display method, which are not described herein again. The various modules in the video source display system described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a micro display chip is provided, comprising a storage module, a signal format conversion module, a sampling control module, a data recombination and fusion module and a display module; when operating, the micro display chip implements the steps of the method described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the video source display method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method for displaying a video source, the method comprising:
carrying out format conversion on an input video source data stream according to a display mode to obtain a converted data stream;
adopting a preset sampling mode to carry out pixel sampling on the converted data stream to obtain each sub data frame;
processing each sub-data frame through a data recombination and fusion algorithm, performing recombination and fusion operation in a mode that other sub-data frames are partially overlapped except for the first sub-data frame, fusing the pixel information of one pixel point to the physical pixel points adjacent or close to the pixel point for recombination, and obtaining a processed data stream;
and after the processed data stream is cached and analyzed, outputting the pixel value of the physical pixel point for displaying.
2. The video source display method according to claim 1, wherein the data recombination and fusion algorithm performs overlapping in the following manner:
the pixel values Vn(i, j) of the corresponding pixel points in the sub data frames are overlapped onto the actual physical pixel point F(i, j) in a mode that the first sub data frame is completely overlapped and the remaining sub data frames are partially overlapped, and a recombination and fusion operation is performed on the overlapped area to obtain the pixel value of the actual physical pixel point F(i, j).
3. The video source display method of claim 2, wherein the remaining sub-data frames are partially overlapped in a manner of:
the overlapping area of each sub data frame in the horizontal direction is (f-1)/f, (f-2)/f, (f-3)/f … 1/f, and f is the value in the horizontal direction of the sampling mode;
the overlapping area of each sub data frame in the longitudinal direction is (g-1)/g, (g-2)/g, (g-3)/g … 1/g, g-1 sub data frames are totally, and g is a value in the longitudinal direction of the sampling mode;
the overlap area in the oblique direction, at the intersection point of the transverse and longitudinal directions, is the product of the transverse and longitudinal overlap areas, namely ((f-1)/f)*((g-1)/g), with f × g-f-g+1 sub data frames in total.
4. The video source display method according to claim 1, wherein the resolution of the video source data stream is greater than or equal to the resolution of the processed data stream.
5. The video source display method according to claim 1, wherein when the pixel values of the physical pixels are displayed, the pixel structure is any one of a field shape, a delta shape, an L shape and a strip shape.
6. Video source display method according to claim 1, wherein the sampling pattern is represented by f × g, where f and g are positive integers, f ≧ r/a, g ≧ s/b, r × s represents the resolution of the video source data stream, and a × b represents the resolution of the processed data stream.
7. The video source display method according to claim 1, wherein the display format corresponding to the converted data stream is any one of spatial color and temporal color.
8. A video source display system, said system comprising:
the register configuration module is used for configuring a display mode and a sampling mode;
the signal format conversion module is used for carrying out format conversion on an input video source data stream according to a display mode to obtain a converted data stream;
the sampling control module is used for carrying out pixel sampling on the converted data stream by adopting a preset sampling mode to obtain each sub data frame;
the data recombination and fusion module is used for processing each sub-data frame through a data recombination and fusion algorithm, performing recombination and fusion operation in a mode that the rest sub-data frames are partially overlapped except the first sub-data frame, fusing the pixel information of one pixel point to the physical pixel points adjacent or close to the pixel point, and recombining to obtain a processed data stream;
the storage module is used for caching the processed data stream;
the analysis module is used for analyzing the processed data stream and outputting the pixel value of the physical pixel point;
and the micro display chip module is used for displaying according to the pixel values of the physical pixel points.
9. A micro display chip, comprising a storage module, a signal format conversion module, a sampling control module, a data recombination and fusion module and a display module, wherein the micro display chip implements the steps of the video source display method according to any one of claims 1 to 7 when executed.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the video source display method according to any one of claims 1 to 7.
CN202110954210.4A 2021-08-19 2021-08-19 Video source display method, system, micro display chip and storage medium Active CN113409719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110954210.4A CN113409719B (en) 2021-08-19 2021-08-19 Video source display method, system, micro display chip and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110954210.4A CN113409719B (en) 2021-08-19 2021-08-19 Video source display method, system, micro display chip and storage medium

Publications (2)

Publication Number Publication Date
CN113409719A CN113409719A (en) 2021-09-17
CN113409719B true CN113409719B (en) 2021-11-16

Family

ID=77688903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110954210.4A Active CN113409719B (en) 2021-08-19 2021-08-19 Video source display method, system, micro display chip and storage medium

Country Status (1)

Country Link
CN (1) CN113409719B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500703B (en) * 2022-01-30 2023-03-14 上海傲显科技有限公司 Sub-pixel arrangement based downsampling method, mobile terminal and terminal readable storage medium
CN114387922B (en) * 2022-02-24 2023-04-07 硅谷数模(苏州)半导体股份有限公司 Driving chip
WO2023206428A1 (en) * 2022-04-29 2023-11-02 京东方科技集团股份有限公司 Display data processing method and apparatus, and display apparatus

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8723889B2 (en) * 2011-01-25 2014-05-13 Freescale Semiconductor, Inc. Method and apparatus for processing temporal and spatial overlapping updates for an electronic display
CN104363384B (en) * 2014-10-29 2017-06-06 复旦大学 Hardware sewing method based on row in a kind of video fusion
CN104793341B (en) * 2015-05-12 2018-01-12 京东方科技集团股份有限公司 A kind of display drive method and device
CN105049826B (en) * 2015-07-23 2017-05-31 南京大学 Real time tridimensional video fusion conversion method based on FPGA
CN105141876B (en) * 2015-09-24 2019-02-22 京东方科技集团股份有限公司 Video signal conversion method, video-signal converting apparatus and display system
CN107948544A (en) * 2017-11-28 2018-04-20 长沙全度影像科技有限公司 A kind of multi-channel video splicing system and method based on FPGA
CN111402833B (en) * 2020-06-05 2020-09-01 南京芯视元电子有限公司 Correction system for improving phase modulation precision of LCoS spatial light modulator

Also Published As

Publication number Publication date
CN113409719A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN113409719B (en) Video source display method, system, micro display chip and storage medium
US10923075B2 (en) Display method, display device, electronic device and computer readable storage medium
DE102017122391A1 (en) Low-resolution RGB rendering for efficient transmission
JP3715249B2 (en) Image processing circuit, image display device, and image processing method
KR101340427B1 (en) Improved memory structures for image processing
JP2009181097A (en) Multi-domain display device
US20150138218A1 (en) Display driver and display device including the same
JP4647280B2 (en) Display device
JP2009505125A (en) Matrix display with sequential color display and addressing method
US11238819B2 (en) Display-driving circuit, display apparatus, and display method based on time-division data output
US20080303758A1 (en) Display Device
CN111402781B (en) Display system for reducing area of display screen by pixel space sampling
CN106935213B (en) Low-delay display system and method
CN109643462B (en) Real-time image processing method based on rendering engine and display device
JP6418010B2 (en) Image processing apparatus, image processing method, and display apparatus
US20100033496A1 (en) Methods and Storing Colour Pixel Data and Driving a Display, Means for Preforming Such Methods, and Display Apparatus Using the Same
JPH11175037A (en) Liquid crystal display device
CN115938313B (en) Display driving method and device, liquid crystal controller, display system and projection device
JP2007010811A (en) Display device and display panel
US20170213492A1 (en) Color unevenness correction device and color unevenness correction method
CN110264976B (en) Improve the video display system and sequential colorization dynamic display method of display resolution
JPH04326323A (en) Display controller
JPH02100715A (en) Effective utilization system for displaying memory
CN117809591A (en) Driving method and driving system of display screen and display device
JPH10254385A (en) Led dot matrix display device and gradation display method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant