CN115002444A - Display module and display method thereof, display device and virtual display equipment - Google Patents

Display module and display method thereof, display device and virtual display equipment

Info

Publication number
CN115002444A
CN115002444A (application CN202210582350.8A)
Authority
CN
China
Prior art keywords
sub
picture
frame
subframe
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210582350.8A
Other languages
Chinese (zh)
Inventor
李治富
汪志强
苗京花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Display Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd, Beijing BOE Display Technology Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202210582350.8A priority Critical patent/CN115002444A/en
Publication of CN115002444A publication Critical patent/CN115002444A/en
Priority to PCT/CN2023/091507 priority patent/WO2023226693A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Abstract

An embodiment of the disclosure provides a display method, including: decomposing a frame of color picture into n sub-frame pictures and displaying the n sub-frame pictures in sequence, where n ≥ 3 and n is an integer; acquiring an eye image, processing the eye image, and determining the current pupil position corresponding to the first sub-frame picture, the first sub-frame picture being the sub-frame picture displayed first when the n sub-frame pictures are displayed in sequence; calculating the current pupil position corresponding to each sub-frame picture subsequent to the first sub-frame picture among the n sub-frame pictures according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, where m ≥ 2 and m is an integer; calculating the actual offset of the original to-be-displayed position of each subsequent sub-frame picture relative to the current position of the first sub-frame picture; calculating the actual display position of each subsequent sub-frame picture according to the actual offset; and displaying the subsequent sub-frame pictures in sequence at their actual display positions.

Description

Display module and display method thereof, display device and virtual display equipment
Technical Field
The disclosure relates to the technical field of display, in particular to a display module and a display method, a display device and virtual display equipment thereof.
Background
A Virtual Reality (VR) display device isolates the user's vision and hearing from the outside world and immerses the user in a virtual environment. Its display principle is to simulate a three-dimensional, highly realistic 3D space using computer technology, so that a user wearing a VR head-mounted display experiences the illusion of being in a real environment. Within this space, the user can move through or interact with the virtual environment using a controller or keyboard.
Disclosure of Invention
In one aspect, an embodiment of the present disclosure provides a display method, including:
decomposing a frame of color picture into n sub-frame pictures, the n sub-frame pictures being displayed in sequence, where n ≥ 3 and n is an integer;
acquiring an eye image, processing the eye image, and determining a current pupil position corresponding to a first sub-frame picture, the first sub-frame picture being the sub-frame picture displayed first when the n sub-frame pictures are displayed in sequence;
calculating a current pupil position corresponding to each sub-frame picture subsequent to the first sub-frame picture among the n sub-frame pictures according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, where m ≥ 2 and m is an integer;
calculating an actual offset of the original to-be-displayed position of each subsequent sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to each subsequent sub-frame picture among the n sub-frame pictures relative to the current pupil position corresponding to the first sub-frame picture; calculating the actual display position of each subsequent sub-frame picture according to its original to-be-displayed position and that actual offset; and
displaying the subsequent sub-frame pictures in sequence at their actual display positions.
In some embodiments, the acquiring an eye image, processing the eye image, and determining a current pupil position corresponding to the first sub-frame picture includes:
shooting an eye image while the first sub-frame picture is displayed;
converting the eye image into a gray-scale image;
detecting the left and right canthus position points of the eye according to the canthus feature points in the gray-scale image; taking the line connecting the left and right canthus position points as the X axis, taking the axis perpendicular to the X axis as the Y axis, and taking the midpoint of the connecting line of the left and right canthus position points as the origin where the X axis and Y axis intersect;
processing the gray-scale image in the coordinate plane formed by the X axis and Y axis to determine the pupil area of the eye; and determining the center of the pupil area as the current pupil position corresponding to the first sub-frame picture.
In some embodiments, the processing the gray-scale image in the coordinate plane formed by the X axis and Y axis to determine the pupil area of the eye, and determining the center of the pupil area as the current pupil position corresponding to the first sub-frame picture, includes:
performing binarization on the gray-scale image to obtain a binarized image of the eye;
detecting candidate pupil connected regions in the binarized image of the eye using a connected-region labeling method;
screening the pupil area of the eye out of the candidate pupil connected regions based on a geometric-constraint and distance-constraint algorithm;
covering the pupil area with the circle of smallest diameter that encloses it, and determining the center of that circle as the center of the pupil area.
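The binarization and connected-region steps above can be sketched in Python. This is an illustrative reconstruction rather than the patent's implementation: the threshold value, the minimum-area bound standing in for the geometric constraint, the BFS labeling, and the use of the region centroid to approximate the center of the smallest enclosing circle are all assumptions.

```python
import numpy as np
from collections import deque

def find_pupil_center(gray, threshold=60, min_area=20):
    """Binarize a gray-scale eye image, label dark connected regions,
    pick the largest candidate as the pupil, and return its center."""
    binary = gray < threshold          # binarization: pupil pixels are dark
    labels = np.zeros(gray.shape, dtype=int)
    regions = []
    next_label = 0
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                # BFS flood fill = connected-region labeling (4-connectivity).
                q = deque([(y, x)])
                labels[y, x] = next_label
                pixels = []
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                if len(pixels) >= min_area:   # crude geometric (area) constraint
                    regions.append(pixels)
    if not regions:
        return None
    pupil = max(regions, key=len)  # assume the largest dark region is the pupil
    ys, xs = zip(*pupil)
    # Center of the smallest enclosing circle, approximated by the centroid.
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Synthetic eye image: bright background with a dark disk as the pupil.
img = np.full((64, 64), 200, dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
img[(yy - 30) ** 2 + (xx - 40) ** 2 <= 8 ** 2] = 20
center = find_pupil_center(img)
```

On the synthetic image above, with the dark disk centered at row 30, column 40, the returned center is (40.0, 30.0) by symmetry.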
In some embodiments, n = 3 and m = 2;
the subsequent sub-frame pictures of the first sub-frame picture comprise a second sub-frame picture and a third sub-frame picture; the first sub-frame picture, the second sub-frame picture and the third sub-frame picture are sequentially displayed;
the calculating the current pupil position corresponding to each sub-frame picture subsequent to the first sub-frame picture among the n sub-frame pictures, according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, includes:
calculating the current pupil position corresponding to the second subframe picture according to a formula (1), and calculating the current pupil position corresponding to the third subframe picture according to a formula (2);
pos_curr_g = pos_curr + v_curr × delta + 1/2 × a_curr × delta²; (1)
pos_curr_b = pos_curr + v_curr × (2 × delta) + 1/2 × a_curr × (2 × delta)²; (2)
where:
v_curr = (pos_curr - pos_2) / delta;
a_curr = (pos_curr - 2 × pos_2 + pos_1) / delta²;
a_curr is the current acceleration of eyeball rotation; v_curr is the current velocity of eyeball rotation; pos_1 and pos_2 are the pupil positions corresponding to the first sub-frame picture obtained in the previous two measurements, respectively; pos_curr is the current pupil position corresponding to the first sub-frame picture; pos_curr_g is the current pupil position corresponding to the second sub-frame picture; pos_curr_b is the current pupil position corresponding to the third sub-frame picture; and delta is the refresh duration of one sub-frame picture, i.e., delta = 1 / (refresh rate of one sub-frame picture).
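Formulas (1) and (2) can be exercised with a short numerical sketch. The finite-difference estimates of v_curr and a_curr, and the assumption that pos_2 is the more recent of the two stored measurements, are reconstructions, since the patent presents these expressions only as an image:

```python
def predict_pupil_positions(pos_1, pos_2, pos_curr, delta):
    """Predict the pupil positions at the second and third sub-frame pictures
    from the last three measured positions of the first sub-frame picture."""
    # Assumed finite-difference estimates (pos_2 = most recent past measurement):
    v_curr = (pos_curr - pos_2) / delta                    # current velocity
    a_curr = (pos_curr - 2 * pos_2 + pos_1) / delta ** 2   # current acceleration
    # Formula (1): position one sub-frame refresh later (green sub-frame).
    pos_curr_g = pos_curr + v_curr * delta + 0.5 * a_curr * delta ** 2
    # Formula (2): position two sub-frame refreshes later (blue sub-frame).
    pos_curr_b = pos_curr + v_curr * (2 * delta) + 0.5 * a_curr * (2 * delta) ** 2
    return pos_curr_g, pos_curr_b

# Uniform motion example: measured positions 0, 1, 2 at successive refreshes
# (delta = 1) extrapolate linearly to 3 and 4, since the acceleration is zero.
g, b = predict_pupil_positions(0.0, 1.0, 2.0, 1.0)
```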
In some embodiments, calculating the actual offset of the original to-be-displayed position of each subsequent sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to each subsequent sub-frame picture among the n sub-frame pictures relative to the current pupil position corresponding to the first sub-frame picture, and calculating the actual display position of each subsequent sub-frame picture according to its original to-be-displayed position and that actual offset, includes:
calculating a first offset matrix of the original to-be-displayed position of the second sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the second sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture;
calculating a second offset matrix of the original position to be displayed of the third sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the third sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture;
multiplying the original position to be displayed of the second sub-frame picture by the first offset matrix to obtain the actual display position of the second sub-frame picture;
and multiplying the original position to be displayed of the third sub-frame picture by the second offset matrix to obtain the actual display position of the third sub-frame picture.
In some embodiments,
the first offset matrix is:
[offset matrix shown as image BDA0003664526500000041: a transformation determined by the rotation amplitudes Rx1, Ry1, Rz1 and the translation amplitudes Tx1, Ty1, Tz1]
the second offset matrix is:
[offset matrix shown as image BDA0003664526500000042: a transformation determined by the rotation amplitudes Rx2, Ry2, Rz2 and the translation amplitudes Tx2, Ty2, Tz2]
where Rx1, Ry1 and Rz1 respectively represent the rotation amplitudes of the original to-be-displayed position of the second sub-frame picture about the X axis, Y axis and Z axis relative to the current position of the first sub-frame picture; Tx1, Ty1 and Tz1 respectively represent its translation amplitudes along the X axis, Y axis and Z axis relative to the current position of the first sub-frame picture;
Rx2, Ry2 and Rz2 respectively represent the rotation amplitudes of the original to-be-displayed position of the third sub-frame picture about the X axis, Y axis and Z axis relative to the current position of the first sub-frame picture; Tx2, Ty2 and Tz2 respectively represent its translation amplitudes along the X axis, Y axis and Z axis relative to the current position of the first sub-frame picture;
the Z axis is perpendicular to a two-dimensional coordinate plane formed by the intersection of the X axis and the Y axis, and the Z axis intersects the X axis and the Y axis at the origin.
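A common concrete form for such offset matrices is a 4 × 4 homogeneous transform that combines the three axis rotations with the translation. Since the patent shows the matrices only as images, the composition order (Rz · Ry · Rx), the use of radians, and the function name below are assumptions:

```python
import math
import numpy as np

def offset_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous offset matrix from rotation amplitudes
    (radians, about the X/Y/Z axes) and translation amplitudes along X/Y/Z."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx   # assumed composition order
    M[:3, 3] = (tx, ty, tz)
    return M

# Multiply an original to-be-displayed position (as a homogeneous point) by the
# offset matrix: a pure translation of (2, -1, 0) applied to the point (5, 5, 0).
M1 = offset_matrix(0, 0, 0, 2, -1, 0)
p = M1 @ np.array([5.0, 5.0, 0.0, 1.0])
```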
In some embodiments, further comprising: and storing the current pupil position corresponding to the first sub-frame picture and the current pupil positions corresponding to the subsequent sub-frame pictures.
In a second aspect, an embodiment of the present disclosure further provides a display module, including:
a decomposition module configured to decompose a frame of color picture into n sub-frame pictures, where n ≥ 3 and n is an integer;
a display module configured to display the n sub-frame pictures in sequence;
a processing module configured to acquire an eye image, process the eye image, and determine a current pupil position corresponding to a first sub-frame picture, the first sub-frame picture being the sub-frame picture displayed first among the n sub-frame pictures;
a first prediction module configured to calculate the current pupil position corresponding to each sub-frame picture subsequent to the first sub-frame picture among the n sub-frame pictures according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, where m ≥ 2 and m is an integer; and
a second prediction module configured to calculate the actual offset of the original to-be-displayed position of each subsequent sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to each subsequent sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture, and to calculate the actual display position of each subsequent sub-frame picture according to its original to-be-displayed position and that actual offset;
the display module being further configured to display the subsequent sub-frame pictures in sequence at their actual display positions.
In some embodiments, the display module includes a display panel and a lens, the lens being located on a display side of the display panel;
the processing module comprises an infrared emitter and an infrared camera,
the infrared emitters are located on the side of the lens facing away from the display panel, distributed along the peripheral edge of the lens, and are used to emit infrared light toward the eyes;
the infrared camera is located on the side of the lens facing away from the display panel, at the edge of the lens, and is used to capture eye images.
In some embodiments, the processing module further comprises an image processing unit configured to convert the eye image into a gray-scale image; detect the left and right canthus position points of the eye according to the canthus feature points in the gray-scale image; and take the line connecting the left and right canthus position points as the X axis, the axis perpendicular to the X axis as the Y axis, and the midpoint of that connecting line as the origin where the X axis and Y axis intersect;
the image processing unit is further configured to process the gray-scale image of the eye to determine the pupil area of the eye, and to determine the center of the pupil area as the current pupil position corresponding to the first sub-frame picture.
In some embodiments, the second prediction module is configured to calculate a first offset matrix of an original to-be-displayed position of the second sub-frame relative to a current position of the first sub-frame according to an offset of a current pupil position corresponding to the second sub-frame relative to a current pupil position corresponding to the first sub-frame;
the second prediction module is further configured to calculate a second offset matrix of the original to-be-displayed position of the third sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the third sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture;
the second prediction module is further configured to multiply the original to-be-displayed position of the second sub-frame picture by the first offset matrix to obtain the actual display position of the second sub-frame picture, and to multiply the original to-be-displayed position of the third sub-frame picture by the second offset matrix to obtain the actual display position of the third sub-frame picture.
In some embodiments, the display module further includes a storage module configured to store the current pupil position corresponding to the first sub-frame picture and the current pupil positions corresponding to the subsequent sub-frame pictures.
In a third aspect, an embodiment of the present disclosure further provides a display device, which includes the display module.
In a fourth aspect, an embodiment of the present disclosure further provides a virtual display device, where the virtual display device includes the display apparatus.
Drawings
The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. The above and other features and advantages will become more apparent to those skilled in the art by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
fig. 1 is a schematic diagram illustrating the principle of color separation when a VR device displays a color picture in the prior art.
Fig. 2 is a scene picture in which color separation occurs.
Fig. 3 is a schematic block diagram of a display module according to an embodiment of the disclosure.
Fig. 4 is a schematic top view illustrating the arrangement of pixels and light sources in the display module according to the embodiment of the disclosure.
Fig. 5 is a schematic diagram of decomposing one frame of color picture into three sub-frame pictures according to an embodiment of the present disclosure.
Fig. 6 is another schematic diagram of decomposing one frame of color picture into three sub-frame pictures according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram illustrating sequential refreshing of sub-frame pictures and sequential lighting of light sources with different colors in the backlight module in the display process of the display module according to the embodiment of the disclosure.
Fig. 8 is a structure disassembly schematic diagram of a display module in the embodiment of the present disclosure.
Fig. 9a is a schematic diagram of a gray-scale image of an eye.
Fig. 9b is a schematic diagram of a binarized image of an eye obtained by processing with an image processing unit in the embodiment of the present disclosure.
Fig. 10 is a schematic diagram illustrating a detection frequency of a pupil position corresponding to a first sub-frame in the embodiment of the disclosure.
Fig. 11 is a schematic diagram illustrating a process of predicting actual display positions of the second sub-frame picture and the third sub-frame picture in the embodiment of the disclosure.
Fig. 12 is a flowchart of a display method of a display module according to an embodiment of the disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the embodiments of the present disclosure, the display module, the display method thereof, the display apparatus, and the virtual display device provided in the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings and the detailed description.
The disclosed embodiments will be described more fully hereinafter with reference to the accompanying drawings, but the illustrated embodiments may be embodied in different forms and should not be construed as limited to the embodiments set forth in the disclosure. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The disclosed embodiments are not limited to the embodiments shown in the drawings, but include modifications of configurations formed based on a manufacturing process. Thus, the regions illustrated in the figures have schematic properties, and the shapes of the regions shown in the figures illustrate specific shapes of regions, but are not intended to be limiting.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, "a plurality" means two or more unless otherwise specified.
In the prior art, a VR display screen needs a picture refresh rate of 90 Hz for the user to perceive smooth, flicker-free pictures. When a field-sequential screen is used as the VR screen, its refresh rate must therefore reach 270 Hz to guarantee the same effect. A field-sequential screen divides one frame of color picture (an RGB picture) into three sub-frame pictures (i.e., pictures backlit by red, green, and blue light sources, respectively) that are displayed in sequence, with the corresponding backlights also lit in sequence.
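The rate relationship above reduces to one line of arithmetic: three sequential sub-frame pictures per color frame at a perceived 90 Hz require a 270 Hz panel, and the sub-frame refresh duration delta is the reciprocal of that rate:

```python
FRAME_RATE_HZ = 90   # perceived color-frame rate for comfortable VR viewing
N_SUBFRAMES = 3      # one sub-frame picture per R/G/B backlight color

panel_rate_hz = FRAME_RATE_HZ * N_SUBFRAMES   # required field-sequential rate
delta_s = 1.0 / panel_rate_hz                 # refresh duration of one sub-frame
```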
Fig. 1 is a schematic diagram of color separation when a VR device displays a color picture in the prior art. As the eyeballs rotate, a pixel at the same position on the screen is backlit in turn by the red, green, and blue light sources, and the displayed red, green, and blue pictures fall on different positions of the retina, so the human eye perceives the color separation phenomenon. Fig. 2 shows a scene picture in which color separation occurs; as can be seen from Fig. 2, there are stripes of three colors, red, green, and blue, on the window jamb. Analyzing the scenes in which color separation occurs, the smaller the difference between the three sub-frame pictures (i.e., when the display gray levels of the three sub-frame pictures are the same and are not 0), the more obvious the color separation. For example, when the three sub-frame pictures (i.e., the sub-frame backlit by the red light source, the sub-frame backlit by the green light source, and the sub-frame backlit by the blue light source) are all displayed at gray level 255, with the backlights of the sub-frame pictures lit in sequence, color separation occurs most easily during saccades of the eyes.
To solve the above problems in the prior art, in a first aspect, an embodiment of the present disclosure provides a display method, including: decomposing a frame of color picture into n sub-frame pictures and displaying them in sequence, where n ≥ 3 and n is an integer; acquiring an eye image, processing the eye image, and determining the current pupil position corresponding to the first sub-frame picture, i.e., the sub-frame picture displayed first among the n sub-frame pictures; calculating the current pupil position corresponding to each subsequent sub-frame picture according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, where m ≥ 2 and m is an integer; calculating the actual offset of the original to-be-displayed position of each subsequent sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to each subsequent sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture; calculating the actual display position of each subsequent sub-frame picture according to its original to-be-displayed position and that actual offset; and displaying the subsequent sub-frame pictures in sequence at their actual display positions.
In the display method provided in the embodiment of the disclosure, the current pupil position corresponding to the first sub-frame picture, displayed first among the n sub-frame pictures, is determined by image processing; the current pupil position corresponding to each subsequent sub-frame picture is then calculated, and from those results the actual display position of each subsequent sub-frame picture is obtained. In this way the rotation trajectory of the eyeball is acquired, so that eyeball rotation can be captured and tracked. By shifting each subsequent sub-frame picture from its original to-be-displayed position to its actual display position, the displayed picture catches up with the rotating eyeball, so that the pictures displayed by pixels at the same position in each sub-frame picture are projected onto the same position of the pupil area (i.e., onto the retina), and the three pictures backlit by light sources of different colors are fused. The color separation phenomenon is thereby eliminated, and the picture perceived by the human eye is a complete picture.
In a second aspect, an embodiment of the present disclosure provides a display module; fig. 3 is a schematic block diagram of the display module in the embodiment of the present disclosure. The display module includes: a decomposition module configured to decompose a frame of color picture into n sub-frame pictures, where n ≥ 3 and n is an integer; a display module configured to display the n sub-frame pictures in sequence; a processing module configured to acquire an eye image, process the eye image, and determine a current pupil position corresponding to a first sub-frame picture, the first sub-frame picture being the sub-frame picture displayed first when the n sub-frame pictures are displayed in sequence; a first prediction module configured to calculate the current pupil position corresponding to each sub-frame picture subsequent to the first sub-frame picture among the n sub-frame pictures according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, where m ≥ 2 and m is an integer; and a second prediction module configured to calculate the actual offset of the original to-be-displayed position of each subsequent sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to each subsequent sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture, and to calculate the actual display position of each subsequent sub-frame picture according to its original to-be-displayed position and that actual offset. The display module is further configured to display the subsequent sub-frame pictures in sequence at their actual display positions.
In some embodiments, referring to fig. 4, a schematic top view of the arrangement of pixels and light sources in a display module according to an embodiment of the disclosure, the display module includes a display panel 1, where the display panel 1 includes a plurality of pixels 10 arranged in an array. In this embodiment, referring to fig. 5 and fig. 6, which show two schemes for decomposing one frame of color picture into three sub-frame pictures according to embodiments of the present disclosure, one frame of color picture is decomposed into three sub-frame pictures that are displayed in sequence by the array of pixels 10.
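Decomposing one frame of color picture into three sub-frame pictures, as in Figs. 5 and 6, amounts to separating the R, G, and B channels into three gray-scale pictures, each later backlit by the matching light-source color. A minimal sketch, assuming an H × W × 3 array layout:

```python
import numpy as np

def decompose_frame(rgb):
    """Split an H x W x 3 color picture into three gray-scale sub-frame
    pictures, displayed in sequence with red, green, and blue backlight."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return r, g, b

# A 2x2 test picture; restacking the sub-frames restores the original frame.
frame = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [128, 128, 128]]], dtype=np.uint8)
subframes = decompose_frame(frame)
recombined = np.stack(subframes, axis=-1)
```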
In some embodiments, the display module further comprises a lens located on the display side of the display panel 1. In some embodiments, referring to fig. 4, the display module further includes a backlight module 5, located on the side of the display panel 1 away from the lens, for providing backlight for the display panel 1; the backlight module 5 comprises light sources of n colors. The display panel 1 includes upper and lower substrates assembled to form a cell, with the cell gap filled with liquid crystal.
In some embodiments, the backlight module 5 includes three color light sources, namely a red light source 51, a green light source 52 and a blue light source 53. The red light source 51, the green light source 52, and the blue light source 53 respectively provide backlight for three sub-frames sequentially displayed.
In some embodiments, the backlight module 5 provides a direct-lit backlight for the display panel 1, i.e., the light-emitting surfaces of the light sources face the display panel 1.
In some embodiments, referring to fig. 4, the display area of the display panel 1 includes a plurality of sub-areas 100, each sub-area 100 having a plurality of pixels 10 distributed therein; a plurality of pixels 10 are arranged in an array; the backlight module 5 comprises a plurality of groups of light sources 50, and each group of light sources 50 comprises one light source of each of n colors; the plurality of sets of light sources 50 are disposed in one-to-one correspondence with the plurality of sub-regions 100.
In this embodiment, referring to fig. 4 and 7, fig. 7 is a schematic diagram illustrating the sequential refreshing of the sub-frame pictures and the sequential lighting of the differently colored light sources in the backlight module during the display process of a display module according to an embodiment of the present disclosure; the backlight of each sub-area 100 is provided by a group of light sources 50 consisting of red, green and blue light sources (e.g., LED lamps). The differently colored light sources in the backlight module can be lit sequentially, for example the red light source, the green light source and the blue light source in turn, to provide backlight for the three sequentially displayed sub-frame pictures respectively; the brightness of each light source can also be adjusted. Unlike a prior-art liquid crystal panel, in which each pixel is composed of red, green and blue sub-pixels that together control color display, each pixel 10 of the field-sequential display module in the embodiments of the present disclosure corresponds to one whole pixel area, and during the display of each sub-frame picture the pixel 10 displays only a gray-scale image through liquid crystal deflection; the array of pixels 10, in cooperation with the sequential refreshing of the three sub-frame pictures and the sequential lighting of the three colored light sources in the backlight module, realizes the display of a color picture. The pixel 10 design in this embodiment increases the display aperture ratio and reduces display power consumption.
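The cooperation between sub-frame refreshing and sequential backlight lighting described above can be sketched as follows; the `light_backlight` and `show_subframe` callbacks are hypothetical stand-ins for the backlight and panel drivers:

```python
def field_sequential_display(frames, light_backlight, show_subframe):
    """Sketch of the field-sequential drive scheme: for every color frame,
    the three sub-frame pictures are refreshed in sequence while the light
    source of the matching color is lit. frames is an iterable of
    (r_sub, g_sub, b_sub) tuples; the two callbacks are hypothetical
    stand-ins for the backlight and panel drivers."""
    colors = ("red", "green", "blue")
    schedule = []
    for frame in frames:                # frame = (r_sub, g_sub, b_sub)
        for color, sub in zip(colors, frame):
            light_backlight(color)      # light only this color's source
            show_subframe(sub)          # refresh the panel with the sub-frame
            schedule.append(color)      # record the lighting order
    return schedule
```

One 90 Hz color frame thus produces three 270 Hz sub-frame refreshes, one per backlight color, in red-green-blue order.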
In some embodiments, referring to fig. 7, the display module generates original color pictures at 90 Hz. For field-sequential display, each frame of the original color picture is divided into three sub-frame pictures by extracting the pixel values (i.e., gray-scale values) of the corresponding color channel from the original color picture: the first sub-frame picture, backlit by the red light source 51, takes the R-channel pixel value (e.g., 167); the second sub-frame picture, backlit by the green light source 52, takes the G-channel pixel value (e.g., 145); and the third sub-frame picture, backlit by the blue light source 53, takes the B-channel pixel value (e.g., 189). Referring to fig. 6, the three sub-frame pictures obtained by splitting the original color picture are the R, G and B single-channel pictures. The R, G and B channel pictures are displayed in sequence while the three colored light sources in the backlight module are lit in sequence, each providing backlight of a different color for one of the three sub-frame pictures.
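The per-channel extraction described above can be sketched as follows; this is a minimal pure-Python illustration, where the frame layout as rows of (R, G, B) tuples is an assumption made for the example:

```python
def decompose_frame(rgb_frame):
    """Split one H x W color frame, given as rows of (R, G, B) tuples, into
    the three single-channel sub-frame pictures of field-sequential display."""
    r_sub = [[px[0] for px in row] for row in rgb_frame]  # red-backlit sub-frame
    g_sub = [[px[1] for px in row] for row in rgb_frame]  # green-backlit sub-frame
    b_sub = [[px[2] for px in row] for row in rgb_frame]  # blue-backlit sub-frame
    return r_sub, g_sub, b_sub

# The pixel value (167, 145, 189) from the text yields sub-frame gray
# levels 167 (red sub-frame), 145 (green) and 189 (blue).
r, g, b = decompose_frame([[(167, 145, 189)]])
```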
In some embodiments, the refresh frequency of each of the three sub-frame pictures is 270 Hz, and the lighting frequency of each of the three colored light sources in the backlight module is also 270 Hz.
In some embodiments, referring to fig. 8, which is an exploded structural view of a display module in an embodiment of the present disclosure; the display module further comprises a lens 2 positioned on the display side of the display panel 1. The processing module comprises an infrared emitter 3 and an infrared camera 4; the infrared emitters 3 are positioned on the side of the lens 2 away from the display panel 1, distributed along the peripheral edge of the lens 2, and used for emitting infrared light toward the eye; the infrared camera 4 is positioned on the side of the lens 2 away from the display panel 1, at the edge of the lens 2, and is used for capturing an eye image.
The lens 2 is used to introduce distortion to realize virtual reality display. A plurality of infrared emitters 3 are provided and arranged around the peripheral edge of the lens 2. One infrared camera 4 is provided. The infrared emitters 3 emit infrared light toward the eye so that the infrared camera 4 can capture a clear infrared image of the eye.
In some embodiments, referring to fig. 3, the processing module further comprises an image processing unit configured to convert the eye image into a gray-scale image; detect the left and right eye corner position points according to the eye corner feature points in the gray-scale image; and take the connecting line of the left and right eye corner position points as the X axis and an axis perpendicular to the X axis as the Y axis, with the origin where the X axis and the Y axis intersect located at the midpoint of the connecting line of the left and right eye corner position points. The image processing unit is further configured to process the gray-scale image of the eye to determine the pupil area of the eye, and to determine the center of the pupil area as the current pupil position corresponding to the first sub-frame picture.
Referring to fig. 9a, which is a schematic diagram of a gray-scale image of an eye; the eye image captured by the infrared camera is a color picture. Since the color components of the eye image are not needed when determining the pupil area of the eye, the eye image is converted into a gray-scale image, i.e., only the brightness value (gray-scale value) of each pixel is kept. For example, the gray-scale image may be calculated as: gray = 0.30R + 0.60G + 0.10B, where gray is the gray-scale value of each pixel in the gray-scale image. Assuming each pixel includes R (red), G (green) and B (blue) sub-pixels, the gray-scale value of each pixel is computed as 30% of the R sub-pixel gray-scale value, 60% of the G sub-pixel gray-scale value and 10% of the B sub-pixel gray-scale value, thereby obtaining the gray-scale image of the entire eye image. Referring to fig. 9a, the left and right eye corner position points eye_edge_l and eye_edge_r are detected in the gray-scale image according to the eye corner feature points, and the gray-scale image is cropped with these two position points as its left and right boundaries; the connecting line of the left and right eye corner position points eye_edge_l and eye_edge_r is taken as the X axis, an axis perpendicular to the X axis is taken as the Y axis, and the origin where the X axis and the Y axis intersect is the midpoint of the connecting line of eye_edge_l and eye_edge_r.
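The gray = 0.30R + 0.60G + 0.10B conversion can be sketched as a minimal pure-Python routine (the frame layout as rows of (R, G, B) tuples is an assumption for the example):

```python
def to_grayscale(rgb_frame):
    """Convert an H x W frame of (R, G, B) tuples into gray levels with the
    weighting given in the text: gray = 0.30*R + 0.60*G + 0.10*B."""
    return [[round(0.30 * r + 0.60 * g + 0.10 * b) for (r, g, b) in row]
            for row in rgb_frame]
```

A neutral pixel (100, 100, 100) maps to gray level 100, and the example pixel (167, 145, 189) maps to 156.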
In some embodiments, referring to fig. 9b, which is a schematic diagram of the eye pupil area obtained through processing by the image processing unit in the embodiments of the present disclosure; the image processing unit is further configured to process the gray-scale image of the eye, determine the pupil area of the eye, and determine the center of the pupil area as the current pupil position corresponding to the first sub-frame picture. The specific processing of the image processing unit is as follows: first, binarization is performed on the gray-scale image to obtain a binarized image of the eye; then, candidate pupil connected regions are detected on the binarized image of the eye using a connected-region labeling method; next, the pupil area of the eye is screened out from the candidate pupil connected regions based on geometric-constraint and distance-constraint algorithms; finally, the pupil area is covered with a circle of the smallest diameter, and the center of that circle is determined as the center O_t of the pupil area.
In some embodiments, the binarization of the gray-scale image includes: setting a gray-scale threshold t and determining the gray-scale value of each pixel in the gray-scale image; if the gray-scale value is greater than the threshold t, it is set to 0, and if it is less than or equal to t, it is set to 1. The binarization thus forms the binarized (i.e., black-and-white) image of the eye shown in fig. 9b.
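A minimal sketch of this thresholding rule, where pixels at or below the threshold (the dark pupil candidates) become 1:

```python
def binarize(gray_img, t):
    """Binarize as described in the text: gray values above the threshold t
    become 0 (background), values at or below t become 1 (dark pixels,
    i.e. pupil candidates)."""
    return [[0 if g > t else 1 for g in row] for row in gray_img]
```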
In some embodiments, the processing module may be an image processing chip in the display module, and the image processing chip is integrated with an image processing unit.
In some embodiments, n = 3 and m = 2; the subsequent sub-frame pictures of the first sub-frame picture comprise a second sub-frame picture and a third sub-frame picture; the display module is configured to sequentially display the first sub-frame picture, the second sub-frame picture and the third sub-frame picture; the first prediction module is configured to calculate the current pupil position corresponding to the second sub-frame picture according to formula (1), and to calculate the current pupil position corresponding to the third sub-frame picture according to formula (2).
pos_curr_g = pos_curr + v_curr×delta + 1/2×a_curr×delta²; (1)
pos_curr_b = pos_curr + v_curr×delta×2 + 1/2×a_curr×(2×delta)²; (2)
wherein:
v_curr = (pos_curr − pos_2)/(3×delta); a_curr = (pos_curr − 2×pos_2 + pos_1)/(3×delta)²;
a_curr is the current acceleration of eyeball rotation; v_curr is the current speed of eyeball rotation; pos_1 and pos_2 are the pupil positions corresponding to the first sub-frame picture obtained in the previous 2 measurements, respectively; pos_curr is the current pupil position corresponding to the first sub-frame picture; pos_curr_g is the current pupil position corresponding to the second sub-frame picture; pos_curr_b is the current pupil position corresponding to the third sub-frame picture; delta is the refresh duration of one sub-frame picture, i.e., delta = 1/(refresh frequency of one sub-frame picture).
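Formulas (1) and (2) can be sketched in code as follows. The finite-difference estimates of v_curr and a_curr from the previous two measurements are an assumption (the original formula image is not reproduced here), taking the measurement interval as 3×delta since the pupil position is detected once per color frame of three sub-frames:

```python
def predict_pupil_positions(pos_1, pos_2, pos_curr, delta):
    """Predict the pupil positions for the second and third sub-frame
    pictures per formulas (1) and (2). pos_1 and pos_2 are the two previous
    first-sub-frame measurements, pos_curr the current one; delta is the
    refresh duration of one sub-frame picture. The finite-difference scheme
    below is an assumption: measurements are taken once per frame, i.e.
    every 3*delta."""
    T = 3 * delta                       # interval between pupil measurements
    v_prev = (pos_2 - pos_1) / T
    v_curr = (pos_curr - pos_2) / T     # current eyeball speed
    a_curr = (v_curr - v_prev) / T      # current eyeball acceleration
    pos_curr_g = pos_curr + v_curr * delta + 0.5 * a_curr * delta ** 2              # (1)
    pos_curr_b = pos_curr + v_curr * delta * 2 + 0.5 * a_curr * (2 * delta) ** 2    # (2)
    return pos_curr_g, pos_curr_b

# Uniform motion example: the pupil advances 3 units per measurement
# interval, i.e. 1 unit per sub-frame, with zero acceleration.
g_pos, b_pos = predict_pupil_positions(0.0, 3.0, 6.0, 1.0)
```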
In some embodiments, the first sub-frame is a sub-frame backlit by a red light source; the second sub-frame is a sub-frame with backlight provided by a green light source; the third sub-frame is a sub-frame backlit by a blue light source.
In some embodiments, referring to fig. 10, which is a schematic diagram of the detection frequency of the pupil position corresponding to the first sub-frame picture in the embodiment of the present disclosure; where n = 3, the eye pupil position is detected once per color frame, i.e., two sub-frame pictures elapse between successive detections, and the actual display positions of the second sub-frame picture and the third sub-frame picture are predicted.
In some embodiments, the refresh frequency of one sub-frame picture is 270 Hz, so the refresh duration delta of one sub-frame picture is 1/270 s, i.e., approximately 0.0037 s. The lighting frequency of each light source in the backlight module that provides backlight for a sub-frame picture is likewise 270 Hz.
In some embodiments, the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements are taken into account when calculating the current pupil position corresponding to each subsequent sub-frame picture of the first sub-frame picture among the n sub-frame pictures, so that the rotation track information of the human eye can be obtained, facilitating the capture and tracking of eyeball rotation.
In some embodiments, the first prediction module is a hardware structure with a computing function in the display module that executes the above calculation.
In some embodiments, referring to fig. 11, a schematic diagram of a process of predicting actual display positions of a second sub-frame picture and a third sub-frame picture in the embodiments of the present disclosure is shown; the second prediction module is configured to calculate a first offset matrix of an original position to be displayed of the second sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the second sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture; the second prediction module is also configured to calculate a second offset matrix of the original position to be displayed of the third sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the third sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture; the second prediction module is also configured to multiply the original position to be displayed of the second sub-frame picture with the first offset matrix to obtain the actual display position of the second sub-frame picture; and multiplying the original position to be displayed of the third sub-frame picture by the second offset matrix to obtain the actual display position of the third sub-frame picture.
Here, the current position of the first sub-frame picture refers to the current display position of the first sub-frame picture. The current pupil position corresponding to the first sub-frame picture is obtained by detection in the currently established coordinate system. The current pupil positions corresponding to the second and third sub-frame pictures are calculated from the current pupil position corresponding to the first sub-frame picture, the speed and acceleration of eyeball rotation, and the refresh frequency of one sub-frame picture. The position in the human eye of the original position to be displayed of the second sub-frame picture corresponds to the current pupil position of the second sub-frame picture, and likewise for the third sub-frame picture; the positions in the human eye of the actual display positions of the second and third sub-frame pictures both correspond to the current pupil position of the first sub-frame picture. After the calculation by the second prediction module, the positions of the first, second and third sub-frame pictures in the human eye all coincide with the current pupil position corresponding to the first sub-frame picture, so the color separation phenomenon caused by eyeball rotation can be eliminated.
In some embodiments, the first offset matrix is a 4×4 homogeneous transformation matrix composed of the rotation amplitudes Rx1, Ry1 and Rz1 about the X, Y and Z axes and the translation amplitudes Tx1, Ty1 and Tz1 along the X, Y and Z axes.
The second offset matrix is a 4×4 homogeneous transformation matrix composed of the rotation amplitudes Rx2, Ry2 and Rz2 about the X, Y and Z axes and the translation amplitudes Tx2, Ty2 and Tz2 along the X, Y and Z axes.
rx1, Ry1 and Rz1 respectively represent the rotation amplitude of the original position to be displayed of the second sub-frame along the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame; tx1, Ty1 and Tz1 respectively represent the translation amplitude of the original position to be displayed of the second sub-frame along the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame; rx2, Ry2 and Rz2 respectively represent the rotation amplitude of the original position to be displayed of the third sub-frame along the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame; tx2, Ty2 and Tz2 respectively represent the translation amplitude of the original position to be displayed of the third sub-frame along the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame; the Z axis is perpendicular to a two-dimensional coordinate plane formed by the intersection of the X axis and the Y axis, and the Z axis intersects the X axis and the Y axis at the origin.
In some embodiments, the second prediction module is a hardware structure with a computing function in the display module that executes the above calculation.
In some embodiments, referring to fig. 5, the display module is configured to sequentially display the second sub-frame picture and the third sub-frame picture according to actual display positions of the second sub-frame picture and the third sub-frame picture. The method specifically comprises the following steps: the second sub-frame picture and the third sub-frame picture are sequentially displayed at the corresponding actual display positions of the display module, so that the pictures displayed by the pixels 10 at the same positions on the first sub-frame picture, the second sub-frame picture and the third sub-frame picture can be projected to the same position of a pupil area (namely retina) of a human eye, and three pictures which are provided with backlight by light sources with different colors are fused, so that the color separation phenomenon is eliminated, and the picture sensed by the human eye is a complete picture.
In some embodiments, referring to fig. 3, the display module further includes a storage module configured to store the current pupil position corresponding to the first sub-frame picture and the current pupil positions corresponding to the subsequent sub-frame pictures, so that the current pupil position data of each sub-frame picture can conveniently be used in predicting the pupil positions of subsequent sub-frame pictures.
In some embodiments, the storage module is a hardware structure having a storage function in the display module, such as a memory.
The display module provided in the embodiment of the present disclosure determines, through processing by the processing module, a current pupil position corresponding to a first sub-frame displayed first in n sub-frame pictures; the first prediction module calculates the current pupil position corresponding to each subsequent subframe picture of the first subframe picture; the second prediction module calculates the actual display position of each subsequent sub-frame of the first sub-frame according to the calculation result of the first prediction module; the rotation track information of the eyeballs of the human eyes can be acquired so as to capture and track the rotation of the eyeballs of the human eyes; the actual display position of each subsequent sub-frame picture of the first sub-frame picture can be enabled to catch up with the eyeball rotating position by adjusting the position to be displayed to the actual display position, so that the pictures displayed by the pixels with the same position on each subsequent sub-frame picture of the first sub-frame picture are projected to the same position in the pupil area (namely on the retina) of human eyes, and three pictures which are provided with backlight by light sources with different colors are fused, thereby eliminating the color separation phenomenon and ensuring that the pictures sensed by human eyes are a complete picture.
Based on the structure of the display module in the above embodiment, the embodiment of the present disclosure further provides a display method of the display module, and fig. 12 is a flowchart of the display method of the display module in the embodiment of the present disclosure; the display method comprises the following steps: step S101: decomposing a frame of color picture into n sub-frame pictures; and sequentially displaying n subframe pictures, wherein n is more than or equal to 3 and is an integer.
Step S102: acquiring an eye image, processing the eye image, and determining a current pupil position corresponding to a first subframe picture; the first subframe picture is a subframe picture which is displayed firstly when the n subframe pictures are displayed in sequence.
The method specifically comprises the following steps: shooting an eye image when a first subframe picture is displayed; converting the eye image into a gray image; detecting left and right canthus position points of the eyes according to the canthus feature points in the gray level image; connecting the left and right eye corner position points and using the connecting line as an X axis, using an axis vertical to the X axis as a Y axis, and using the intersection origin of the X axis and the Y axis as the midpoint of the connecting line of the left and right eye corner position points; processing the gray level image in a coordinate plane formed by an X axis and a Y axis to determine a pupil area of the eye; and determining the center of the pupil area as the current pupil position corresponding to the first sub-frame picture.
In some embodiments, processing the gray-scale image in the coordinate plane formed by the X axis and the Y axis to determine the pupil area of the eye, and determining the center of the pupil area as the current pupil position corresponding to the first sub-frame picture, specifically includes: performing binarization on the gray-scale image to obtain a binarized image of the eye; detecting candidate pupil connected regions on the binarized image of the eye using a connected-region labeling method; screening out the pupil area of the eye from the candidate pupil connected regions based on geometric-constraint and distance-constraint algorithms; and covering the pupil area with a circle of the smallest diameter, the center of which is determined as the center of the pupil area.
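The localization steps above can be sketched as follows in pure Python. Selecting the largest connected region is a simplification of the geometric- and distance-constraint screening in the text, and the region centroid stands in for the minimum-enclosing-circle center:

```python
from collections import deque

def pupil_center(binary_img):
    """Sketch of pupil localization on a binarized image (1 = dark pixel):
    label 4-connected regions of 1-pixels via BFS flood fill, keep the
    largest region (a simplification of the geometric/distance screening),
    and return its centroid (row, col) as the pupil center."""
    h, w = len(binary_img), len(binary_img[0])
    seen = [[False] * w for _ in range(h)]
    best = []                                  # largest region found so far
    for i in range(h):
        for j in range(w):
            if binary_img[i][j] == 1 and not seen[i][j]:
                # BFS flood fill of one candidate connected region
                region, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                                binary_img[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(region) > len(best):
                    best = region
    ys = [p[0] for p in best]
    xs = [p[1] for p in best]
    return (sum(ys) / len(best), sum(xs) / len(best))
```

For example, a 3×3 block of dark pixels centered at row 2, column 2 (with a one-pixel stray elsewhere) yields the center (2.0, 2.0).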
Step S103: calculating the current pupil position corresponding to each subsequent subframe picture of the first subframe picture in the n subframe pictures according to the current pupil position corresponding to the first subframe picture and the pupil position corresponding to the first subframe picture obtained by the previous m times of measurement; m is not less than 2, and m is an integer.
In some embodiments, n = 3 and m = 2; the subsequent sub-frame pictures of the first sub-frame picture comprise a second sub-frame picture and a third sub-frame picture; the first sub-frame picture, the second sub-frame picture and the third sub-frame picture are sequentially displayed; the step S103 includes: calculating the current pupil position corresponding to the second sub-frame picture according to formula (1), and calculating the current pupil position corresponding to the third sub-frame picture according to formula (2).
pos_curr_g = pos_curr + v_curr×delta + 1/2×a_curr×delta²; (1)
pos_curr_b = pos_curr + v_curr×delta×2 + 1/2×a_curr×(2×delta)²; (2)
wherein:
v_curr = (pos_curr − pos_2)/(3×delta); a_curr = (pos_curr − 2×pos_2 + pos_1)/(3×delta)²;
a_curr is the current acceleration of eyeball rotation; v_curr is the current speed of eyeball rotation; pos_1 and pos_2 are the pupil positions corresponding to the first sub-frame picture obtained in the previous 2 measurements, respectively; pos_curr is the current pupil position corresponding to the first sub-frame picture; pos_curr_g is the current pupil position corresponding to the second sub-frame picture; pos_curr_b is the current pupil position corresponding to the third sub-frame picture; delta is the refresh duration of one sub-frame picture, i.e., delta = 1/(refresh frequency of one sub-frame picture).
In some embodiments, the display method further comprises: and storing the current pupil position corresponding to the first sub-frame picture and the current pupil positions corresponding to the subsequent sub-frame pictures.
Step S104: calculating the actual offset of the original to-be-displayed position of each subsequent subframe picture relative to the current position of the first subframe picture according to the offset of the current pupil position corresponding to each subsequent subframe picture in the n subframe pictures relative to the current pupil position corresponding to the first subframe picture; and calculating the actual display position of each subsequent sub-frame picture according to the original to-be-displayed position of each subsequent sub-frame picture and the actual offset of the current position of each subsequent sub-frame picture relative to the first sub-frame picture.
The method specifically comprises the following steps: and calculating a first offset matrix of the original position to be displayed of the second sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the second sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture. And calculating a second offset matrix of the original position to be displayed of the third sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the third sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture. And multiplying the original position to be displayed of the second sub-frame picture by the first offset matrix to obtain the actual display position of the second sub-frame picture. And multiplying the original position to be displayed of the third sub-frame picture by the second offset matrix to obtain the actual display position of the third sub-frame picture.
In some embodiments, the first offset matrix is a 4×4 homogeneous transformation matrix composed of the rotation amplitudes Rx1, Ry1 and Rz1 about the X, Y and Z axes and the translation amplitudes Tx1, Ty1 and Tz1 along the X, Y and Z axes.
The second offset matrix is a 4×4 homogeneous transformation matrix composed of the rotation amplitudes Rx2, Ry2 and Rz2 about the X, Y and Z axes and the translation amplitudes Tx2, Ty2 and Tz2 along the X, Y and Z axes.
wherein, Rx1, Ry1 and Rz1 respectively represent the rotation amplitudes of the original position to be displayed of the second sub-frame picture about the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame picture; Tx1, Ty1 and Tz1 respectively represent the translation amplitudes of the original position to be displayed of the second sub-frame picture along the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame picture; Rx2, Ry2 and Rz2 respectively represent the rotation amplitudes of the original position to be displayed of the third sub-frame picture about the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame picture; Tx2, Ty2 and Tz2 respectively represent the translation amplitudes of the original position to be displayed of the third sub-frame picture along the X axis, the Y axis and the Z axis relative to the current position of the first sub-frame picture; the Z axis is perpendicular to the two-dimensional coordinate plane formed by the X axis and the Y axis, and the Z axis intersects the X axis and the Y axis at the origin.
Step S105: and sequentially displaying the subsequent sub-frame pictures according to the actual display positions of the subsequent sub-frame pictures.
In the display method of the display module provided in the embodiment of the disclosure, the current pupil position corresponding to the first subframe picture displayed first in n subframe pictures is determined through processing; calculating the current pupil position corresponding to each subsequent subframe picture of the first subframe picture; calculating the actual display position of each subsequent subframe picture of the first subframe picture according to the calculation result of the current pupil position corresponding to each subsequent subframe picture of the first subframe picture; the rotation track information of the eyeballs of the human eyes can be acquired so as to capture and track the rotation of the eyeballs of the human eyes; the actual display position of each subsequent sub-frame picture of the first sub-frame picture can be enabled to catch up with the eyeball rotating position by adjusting the position to be displayed to the actual display position, so that the pictures displayed by the pixels with the same position on each subsequent sub-frame picture of the first sub-frame picture are projected to the same position in the pupil area (namely on the retina) of human eyes, and three pictures which are provided with backlight by light sources with different colors are fused, thereby eliminating the color separation phenomenon and ensuring that the pictures sensed by human eyes are a complete picture.
In a third aspect, an embodiment of the present disclosure further provides a display device, which includes the display module in the foregoing embodiment.
By adopting the display module in the above embodiment, the display device is free of the color separation phenomenon during virtual reality display, which improves the virtual reality display effect of the display device.
The display device may be: VR glasses, a VR panel, a VR television, a mobile phone, a tablet computer, a notebook computer, a display, a digital photo frame, a navigation device, or any other product or component with a VR display function.
In a fourth aspect, an embodiment of the present disclosure further provides a virtual display apparatus, including the display device in the foregoing embodiment.
By adopting the display device in the embodiment, the color separation phenomenon can not occur when the virtual display equipment performs virtual reality display, and the virtual reality display effect of the virtual display equipment is improved.
The virtual display device may be: VR glasses, a VR panel, a VR television, a mobile phone, a tablet computer, a notebook computer, a display, a digital photo frame, a navigation device, or any other product or component with a VR display function.
It is to be understood that the above embodiments are merely exemplary embodiments that are employed to illustrate the principles of the present disclosure, and that the present disclosure is not limited thereto. It will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the disclosure, and these are to be considered as the scope of the disclosure.

Claims (14)

1. A display method, comprising:
decomposing a frame of color picture into n sub-frame pictures; the n sub-frame pictures are sequentially displayed, n is more than or equal to 3 and is an integer;
acquiring an eye image, processing the eye image, and determining a current pupil position corresponding to a first subframe picture; the first subframe picture is a subframe picture which is displayed firstly when the n subframe pictures are displayed in sequence;
calculating the current pupil position corresponding to each subsequent subframe picture of the first subframe picture in the n subframe pictures according to the current pupil position corresponding to the first subframe picture and the pupil position corresponding to the first subframe picture obtained by previous m-time measurement; m is more than or equal to 2 and is an integer;
calculating actual offset of the original to-be-displayed position of each subsequent subframe picture relative to the current position of the first subframe picture according to the offset of the current pupil position corresponding to each subsequent subframe picture in the n subframe pictures relative to the current pupil position corresponding to the first subframe picture; calculating the actual display position of each subsequent sub-frame picture according to the original to-be-displayed position of each subsequent sub-frame picture and the actual offset of each subsequent sub-frame picture relative to the current position of the first sub-frame picture;
and sequentially displaying the subsequent sub-frame pictures according to the actual display positions of the subsequent sub-frame pictures.
2. The display method according to claim 1, wherein the acquiring the eye image, processing the eye image, and determining the current pupil position corresponding to the first sub-frame picture comprises:
shooting an eye image when the first subframe picture is displayed;
converting the eye image into a gray image;
detecting left and right eye-corner position points of the eye according to eye-corner feature points in the gray-scale image; taking the line connecting the left and right eye-corner position points as an X axis, taking an axis perpendicular to the X axis as a Y axis, and taking the midpoint of the line connecting the left and right eye-corner position points as the origin at which the X axis and the Y axis intersect;
processing the gray-scale image in the coordinate plane formed by the X axis and the Y axis to determine a pupil area of the eye; and determining the center of the pupil area as the current pupil position corresponding to the first sub-frame picture.
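The eye-corner coordinate frame recited above can be sketched as a simple change of basis; for illustration only, assuming image coordinates for the two corner points, a point is re-expressed in the frame whose origin is the midpoint of the corner-to-corner line and whose X axis runs along that line:

```python
import math

def to_eye_frame(point, left_corner, right_corner):
    """Express an image-coordinate point in the eye-corner frame:
    origin at the midpoint of the corner-to-corner line, X axis along
    that line, Y axis perpendicular to it (as in claim 2)."""
    # Origin: midpoint of the line connecting the two eye-corner points.
    ox = (left_corner[0] + right_corner[0]) / 2.0
    oy = (left_corner[1] + right_corner[1]) / 2.0
    # Unit vector along the corner-to-corner line (the new X axis).
    dx = right_corner[0] - left_corner[0]
    dy = right_corner[1] - left_corner[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # Perpendicular unit vector (the new Y axis).
    vx, vy = -uy, ux
    # Project the offset from the origin onto the two axes.
    px, py = point[0] - ox, point[1] - oy
    return (px * ux + py * uy, px * vx + py * vy)
```

Anchoring the frame to the eye corners makes the subsequent pupil coordinates invariant to small head translations in the camera image, which is presumably why the claim defines the axes this way.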
3. The display method according to claim 2, wherein processing the gray-scale image in the coordinate plane formed by the X axis and the Y axis to determine the pupil area of the eye, and determining the center of the pupil area as the current pupil position corresponding to the first sub-frame picture, comprises:
performing binarization processing on the gray-scale image to obtain a binarized image of the eye;
detecting candidate pupil connected regions in the binarized image of the eye using a connected-region labeling method;
screening out the pupil area of the eye from the candidate pupil connected regions based on a geometric-constraint and distance-constraint algorithm;
covering the pupil area with a circle of minimum diameter, and determining the center of the circle as the center of the pupil area.
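A minimal sketch of this pipeline, for illustration only: the threshold value and the area filter below are hypothetical stand-ins for the geometric- and distance-constraint screening named in the claim, and the centroid is used to approximate the center of the minimum covering circle for a roughly circular pupil region:

```python
from collections import deque

def find_pupil_center(gray, threshold=60, min_area=3):
    """Binarize a gray image (pupil pixels are dark), label connected
    regions by BFS flood fill, screen candidates with a simple area
    constraint, and return the largest surviving region's centroid as
    the pupil position (cx, cy). Threshold/area values are illustrative."""
    h, w = len(gray), len(gray[0])
    binary = [[1 if gray[y][x] < threshold else 0 for x in range(w)]
              for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # BFS flood fill = one connected-region label
                q, region = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append(region)
    # Screening step: discard regions too small to be a pupil.
    candidates = [r for r in regions if len(r) >= min_area]
    pupil = max(candidates, key=len)
    cy = sum(p[0] for p in pupil) / len(pupil)
    cx = sum(p[1] for p in pupil) / len(pupil)
    return (cx, cy)
```

In practice this step would use an optimized connected-components routine (e.g. from an image-processing library) rather than a hand-rolled flood fill, but the logic is the same.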
4. The display method according to claim 2, wherein n = 3 and m = 2;
the subsequent sub-frame pictures of the first sub-frame picture comprise a second sub-frame picture and a third sub-frame picture; the first sub-frame picture, the second sub-frame picture and the third sub-frame picture are sequentially displayed;
calculating the current pupil position corresponding to each sub-frame picture subsequent to the first sub-frame picture among the n sub-frame pictures, according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, comprises:
calculating the current pupil position corresponding to the second subframe picture according to a formula (1), and calculating the current pupil position corresponding to the third subframe picture according to a formula (2);
pos_curr_g = pos_curr + v_curr × delta + 1/2 × a_curr × delta²; (1)
pos_curr_b = pos_curr + v_curr × 2 × delta + 1/2 × a_curr × (2 × delta)²; (2)
wherein:
[formula image FDA0003664526490000021: definitions of v_curr and a_curr]
a_curr is the current acceleration of eyeball rotation; v_curr is the current speed of eyeball rotation; pos_1 and pos_2 are the pupil positions corresponding to the first sub-frame picture obtained in the previous 2 measurements, respectively; pos_curr is the current pupil position corresponding to the first sub-frame picture; pos_curr_g is the current pupil position corresponding to the second sub-frame picture; pos_curr_b is the current pupil position corresponding to the third sub-frame picture; and delta is the refresh duration of one said sub-frame picture, i.e., delta = 1/(the refresh frequency of one said sub-frame picture).
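Formulas (1) and (2) are standard constant-acceleration extrapolation. A sketch, for illustration only: the patent defines v_curr and a_curr in a formula image not reproduced in this text, so a backward finite-difference estimate from the three available measurements is assumed here:

```python
def predict_pupil_positions(pos_1, pos_2, pos_curr, delta):
    """Evaluate formulas (1) and (2) of claim 4 along one axis.
    pos_1, pos_2: pupil positions from the previous two measurements;
    pos_curr: current pupil position for the first sub-frame picture;
    delta: refresh duration of one sub-frame picture (1/refresh frequency).
    The finite-difference v_curr/a_curr estimates are assumptions,
    standing in for the patent's unreproduced formula image."""
    v_prev = (pos_2 - pos_1) / delta     # speed one sub-frame earlier (assumed)
    v_curr = (pos_curr - pos_2) / delta  # current eyeball speed (assumed)
    a_curr = (v_curr - v_prev) / delta   # current eyeball acceleration (assumed)
    # Formula (1): predicted pupil position when the second sub-frame shows.
    pos_curr_g = pos_curr + v_curr * delta + 0.5 * a_curr * delta ** 2
    # Formula (2): predicted pupil position when the third sub-frame shows.
    pos_curr_b = pos_curr + v_curr * 2 * delta + 0.5 * a_curr * (2 * delta) ** 2
    return pos_curr_g, pos_curr_b
```

For a pupil moving at constant speed (a_curr = 0), the predictions reduce to simple linear extrapolation one and two sub-frame periods ahead.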
5. The display method according to claim 4, wherein calculating the actual offset of the original to-be-displayed position of each subsequent sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to each subsequent sub-frame picture among the n sub-frame pictures relative to the current pupil position corresponding to the first sub-frame picture, and calculating the actual display position of each subsequent sub-frame picture according to the original to-be-displayed position of the subsequent sub-frame picture and its actual offset relative to the current position of the first sub-frame picture, comprises:
calculating a first offset matrix of the original to-be-displayed position of the second sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the second sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture;
calculating a second offset matrix of the original position to be displayed of the third sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the third sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture;
multiplying the original position to be displayed of the second sub-frame picture by the first offset matrix to obtain the actual display position of the second sub-frame picture;
and multiplying the original position to be displayed of the third sub-frame picture by the second offset matrix to obtain the actual display position of the third sub-frame picture.
6. The display method according to claim 5,
the first offset matrix is:
[matrix image FDA0003664526490000031: the first offset matrix]
the second offset matrix is:
[matrix image FDA0003664526490000032: the second offset matrix]
wherein Rx1, Ry1 and Rz1 respectively represent the rotation amplitudes of the original to-be-displayed position of the second sub-frame picture relative to the current position of the first sub-frame picture along the X axis, the Y axis and the Z axis; Tx1, Ty1 and Tz1 respectively represent the translation amplitudes of the original to-be-displayed position of the second sub-frame picture relative to the current position of the first sub-frame picture along the X axis, the Y axis and the Z axis;
Rx2, Ry2 and Rz2 respectively represent the rotation amplitudes of the original to-be-displayed position of the third sub-frame picture relative to the current position of the first sub-frame picture along the X axis, the Y axis and the Z axis; Tx2, Ty2 and Tz2 respectively represent the translation amplitudes of the original to-be-displayed position of the third sub-frame picture relative to the current position of the first sub-frame picture along the X axis, the Y axis and the Z axis;
and the Z axis is perpendicular to the two-dimensional coordinate plane formed by the intersection of the X axis and the Y axis, and intersects the X axis and the Y axis at the origin.
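The rotation-plus-translation offset matrices of claim 6 are homogeneous 4×4 transforms, and the "multiply the original position by the offset matrix" step of claim 5 is a matrix-vector product. A sketch, for illustration only: the patent's matrix entries come from unreproduced matrix images, so the construction below is simplified (as an assumption) to a single rotation about the Z axis plus a translation; the full claimed matrices also rotate about the X and Y axes:

```python
import math

def offset_matrix(rz, tx, ty, tz):
    """Build a 4x4 homogeneous offset matrix of the kind claim 6
    describes, simplified to rotation rz (radians) about the Z axis
    plus translation (tx, ty, tz). Illustrative, not the patent's
    exact matrices, which are given only as formula images."""
    c, s = math.cos(rz), math.sin(rz)
    return [[c,  -s,  0.0, tx],
            [s,   c,  0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def apply_offset(matrix, position):
    """Multiply a homogeneous position [x, y, z, 1] by the offset
    matrix to obtain the actual display position, as in claim 5."""
    return [sum(matrix[i][j] * position[j] for j in range(4))
            for i in range(4)]
```

Using homogeneous coordinates lets a single matrix multiply express both the rotation and the translation of each subsequent sub-frame picture relative to the first.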
7. The display method according to any one of claims 1 to 6, further comprising: storing the current pupil position corresponding to the first sub-frame picture and the current pupil positions corresponding to the subsequent sub-frame pictures.
8. A display module, comprising:
a decomposition module configured to decompose one frame of a color picture into n sub-frame pictures, where n is greater than or equal to 3 and is an integer;
a display module configured to sequentially display the n sub-frame pictures;
a processing module configured to acquire an eye image, process the eye image, and determine a current pupil position corresponding to a first sub-frame picture, the first sub-frame picture being the sub-frame picture displayed first among the n sub-frame pictures;
a first prediction module configured to calculate a current pupil position corresponding to each sub-frame picture subsequent to the first sub-frame picture among the n sub-frame pictures according to the current pupil position corresponding to the first sub-frame picture and the pupil positions corresponding to the first sub-frame picture obtained in the previous m measurements, where m is greater than or equal to 2 and is an integer; and
a second prediction module configured to calculate an actual offset of an original to-be-displayed position of each subsequent sub-frame picture relative to a current position of the first sub-frame picture according to an offset of the current pupil position corresponding to each subsequent sub-frame picture among the n sub-frame pictures relative to the current pupil position corresponding to the first sub-frame picture, and to calculate an actual display position of each subsequent sub-frame picture according to the original to-be-displayed position of the subsequent sub-frame picture and its actual offset relative to the current position of the first sub-frame picture;
wherein the display module is further configured to sequentially display the subsequent sub-frame pictures according to their actual display positions.
9. The display module of claim 8, wherein the display module comprises a display panel and a lens, the lens being located on a display side of the display panel;
the processing module comprises an infrared emitter and an infrared camera,
the infrared emitters are located on the side of the lens facing away from the display panel, are distributed along the peripheral edge of the lens, and are configured to emit infrared light toward the eye; and
the infrared camera is located on the side of the lens facing away from the display panel, at the edge of the lens, and is configured to capture eye images.
10. The display module of claim 9, wherein the processing module further comprises an image processing unit configured to convert the eye image into a gray-scale image; detect left and right eye-corner position points of the eye according to eye-corner feature points in the gray-scale image; and take the line connecting the left and right eye-corner position points as an X axis, take an axis perpendicular to the X axis as a Y axis, and take the midpoint of the line connecting the left and right eye-corner position points as the origin at which the X axis and the Y axis intersect;
the image processing unit is further configured to process the gray-scale image of the eye to determine a pupil area of the eye, and to determine the center of the pupil area as the current pupil position corresponding to the first sub-frame picture.
11. The display module according to claim 10, wherein the second prediction module is configured to calculate a first offset matrix of the original to-be-displayed position of the second sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the second sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture;
the second prediction module is further configured to calculate a second offset matrix of the original to-be-displayed position of the third sub-frame picture relative to the current position of the first sub-frame picture according to the offset of the current pupil position corresponding to the third sub-frame picture relative to the current pupil position corresponding to the first sub-frame picture;
the second prediction module is further configured to multiply the original to-be-displayed position of the second sub-frame picture by the first offset matrix to obtain an actual display position of the second sub-frame picture; and multiplying the original position to be displayed of the third sub-frame picture by the second offset matrix to obtain the actual display position of the third sub-frame picture.
12. The display module according to any one of claims 8 to 11, further comprising a storage module configured to store a current pupil position corresponding to the first sub-frame and a current pupil position corresponding to each subsequent sub-frame.
13. A display device comprising the display module according to any one of claims 8 to 12.
14. A virtual display apparatus comprising the display device according to claim 13.
CN202210582350.8A 2022-05-26 2022-05-26 Display module and display method thereof, display device and virtual display equipment Pending CN115002444A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210582350.8A CN115002444A (en) 2022-05-26 2022-05-26 Display module and display method thereof, display device and virtual display equipment
PCT/CN2023/091507 WO2023226693A1 (en) 2022-05-26 2023-04-28 Display module and display method thereof, display apparatus, and virtual display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210582350.8A CN115002444A (en) 2022-05-26 2022-05-26 Display module and display method thereof, display device and virtual display equipment

Publications (1)

Publication Number Publication Date
CN115002444A 2022-09-02

Family

ID=83030055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210582350.8A Pending CN115002444A (en) 2022-05-26 2022-05-26 Display module and display method thereof, display device and virtual display equipment

Country Status (2)

Country Link
CN (1) CN115002444A (en)
WO (1) WO2023226693A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023226693A1 (en) * 2022-05-26 2023-11-30 京东方科技集团股份有限公司 Display module and display method thereof, display apparatus, and virtual display device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103956144A (en) * 2013-12-13 2014-07-30 天津三星电子有限公司 Display driving method and device, and display
US20160018649A1 (en) * 2014-01-21 2016-01-21 Osterhout Group, Inc. See-through computer display systems
CN113534490A (en) * 2021-07-29 2021-10-22 深圳市创鑫未来科技有限公司 Stereoscopic display device and stereoscopic display method based on user eyeball tracking
US20210366077A1 (en) * 2020-05-21 2021-11-25 Magic Leap, Inc. Warping for spatial light modulating displays using eye tracking

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8970495B1 (en) * 2012-03-09 2015-03-03 Google Inc. Image stabilization for color-sequential displays
US10410566B1 (en) * 2017-02-06 2019-09-10 Andrew Kerdemelidis Head mounted virtual reality display system and method
WO2020201999A2 (en) * 2019-04-01 2020-10-08 Evolution Optiks Limited Pupil tracking system and method, and digital display device and digital image rendering system and method using same
CN110209000A (en) * 2019-05-30 2019-09-06 上海天马微电子有限公司 A kind of display panel, display methods and display device
WO2021236989A1 (en) * 2020-05-21 2021-11-25 Magic Leap, Inc. Warping for laser beam scanning displays using eye tracking
CN115002444A (en) * 2022-05-26 2022-09-02 京东方科技集团股份有限公司 Display module and display method thereof, display device and virtual display equipment

Also Published As

Publication number Publication date
WO2023226693A9 (en) 2024-02-01
WO2023226693A1 (en) 2023-11-30

Similar Documents

Publication Publication Date Title
CN113170136B (en) Motion smoothing of reprojected frames
US6717728B2 (en) System and method for visualization of stereo and multi aspect images
US10802287B2 (en) Dynamic render time targeting based on eye tracking
US20100110069A1 (en) System for rendering virtual see-through scenes
US8189035B2 (en) Method and apparatus for rendering virtual see-through scenes on single or tiled displays
CN101243694B (en) A stereoscopic display apparatus
CN108762492A (en) Method, apparatus, equipment and the storage medium of information processing are realized based on virtual scene
KR20120048301A (en) Display apparatus and method
CN105704479A (en) Interpupillary distance measuring method and system for 3D display system and display device
CN108475180A (en) The distributed video between multiple display areas
Hincapié-Ramos et al. SmartColor: real-time color and contrast correction for optical see-through head-mounted displays
US11343486B2 (en) Counterrotation of display panels and/or virtual cameras in a HMD
KR20100023970A (en) Lighting device
CN111275731A (en) Projection type real object interactive desktop system and method for middle school experiment
CN107025087A (en) A kind of method for displaying image and equipment
WO2023226693A1 (en) Display module and display method thereof, display apparatus, and virtual display device
CN113534490B (en) Stereoscopic display device and stereoscopic display method based on user eyeball tracking
US11170678B2 (en) Display apparatus and method incorporating gaze-based modulation of pixel values
US11710467B2 (en) Display artifact reduction
WO2020014126A1 (en) Autostereoscopic display with viewer tracking
CN117079613B (en) Display screen compensation method, display and storage medium
US20240112628A1 (en) Displays with Selective Pixel Brightness Tuning
CN115767068A (en) Information processing method and device and electronic equipment
KR20100052732A (en) Image display device and method of processing image using vanishing point
KR20090091525A (en) Apparatus and method for measuring quality of moving picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination