WO2023231674A1 - Driving method of liquid crystal grating, display device, and display method thereof - Google Patents

Driving method of liquid crystal grating, display device, and display method thereof

Info

Publication number
WO2023231674A1
WO2023231674A1 (PCT/CN2023/091502)
Authority
WO
WIPO (PCT)
Prior art keywords
liquid crystal
coordinate system
light
crystal grating
pupil center
Prior art date
Application number
PCT/CN2023/091502
Other languages
English (en)
French (fr)
Other versions
WO2023231674A9 (zh)
Inventor
李鑫恺
陈丽莉
吕耀宇
马思研
李言
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Publication of WO2023231674A1 publication Critical patent/WO2023231674A1/zh
Publication of WO2023231674A9 publication Critical patent/WO2023231674A9/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/30Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving parallax barriers
    • G02B30/31Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving parallax barriers involving active parallax barriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present disclosure relates to the field of display technology, and in particular, to a driving method of a liquid crystal grating, a display device, and a display method thereof.
  • the present disclosure provides a liquid crystal grating driving method, a display device, and a display method thereof.
  • the specific solutions are as follows:
  • embodiments of the present disclosure provide a method for driving a liquid crystal grating, including:
  • determining, based on the real-time position and a pre-established correspondence between positions of the pupil center and positions of the light-transmitting areas, the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position;
  • the liquid crystal grating is driven so that the liquid crystal grating only transmits light at the position of the light-transmitting area corresponding to the real-time position.
  • the user's facial image is collected in real time, and the three-dimensional coordinates of the pupil center in the camera coordinate system are determined based on the facial image, which specifically includes:
  • Visible light cameras with different positions are used to collect facial images in real time
  • the three-dimensional coordinates of the center of the space mapped out by the first edge points in the camera coordinate system are used as the three-dimensional coordinates of the pupil center in the camera coordinate system.
  • a plurality of first edge points of the iris are extracted from the facial image, an ellipse is fitted to the plurality of first edge points, and the two-dimensional coordinates of the fitted ellipse center in the image coordinate system are used as the two-dimensional coordinates of the pupil center in the image coordinate system;
  • the mean or mode of the depth coordinates, in the camera coordinate system, of the plurality of second edge points of the human eye in the three-dimensional face model is used as the depth coordinate of the pupil center in the camera coordinate system;
  • the two-dimensional coordinates of the pupil center in the camera coordinate system, together with the depth coordinate of the pupil center in the camera coordinate system, constitute the three-dimensional coordinates of the pupil center in the camera coordinate system.
  • One of the facial feature points is used as the origin of the coordinate system of the three-dimensional face model to be established, and that origin is adjusted to coincide with the origin of the camera coordinate system, so that the three-dimensional coordinates of the multiple facial feature points in the camera coordinate system are converted into three-dimensional coordinates in the coordinate system of the three-dimensional face model to be established;
  • establishing a corresponding relationship between the position of the pupil center and the position of the light-transmitting area in the liquid crystal grating specifically includes:
  • the light-transmitting areas at different positions in the liquid crystal grating corresponding to the different positions of the pupil center are determined.
  • determining, based on the real-time position and the pre-established correspondence between the position of the pupil center and the position of the light-transmitting area in the liquid crystal grating, the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position specifically includes:
  • the negative of the x coordinate of the pupil center determined in real time in the camera coordinate system is used as the x coordinate of the pupil center in the coordinate system of the liquid crystal grating, and the z coordinate of the pupil center determined in real time in the camera coordinate system is used as the z coordinate of the pupil center in the coordinate system of the liquid crystal grating;
  • whether the z coordinate of the pupil center in the coordinate system of the liquid crystal grating equals the preset optimal viewing distance is judged; if so, the light-transmitting area is moved in the X-axis direction of the coordinate system of the liquid crystal grating, and the moved light-transmitting area has the same size as before the move; if not, the size of the light-transmitting area is adjusted in the X-axis direction of the coordinate system of the liquid crystal grating, and the position of the adjusted light-transmitting area partially overlaps the position before adjustment.
  • adjusting the size of the light-transmitting area in the X-axis direction of the coordinate system of the liquid crystal grating specifically includes:
  • the area where the b-th strip electrode in the a-th period is located is determined to be the light-transmitting area corresponding to the current pupil center, where m is the total number of grating periods in the liquid crystal grating, a is an integer greater than or equal to 1 and less than or equal to m, and b is an integer greater than or equal to 1 and less than or equal to n.
  • embodiments of the present disclosure provide a display method for the above display device, including:
  • the liquid crystal grating is controlled to be completely transparent
  • the above driving method is used to control the liquid crystal grating to form alternately arranged light-transmitting areas and light-shielding areas.
  • in the above display method, while the size of the light-transmitting area is being adjusted in the X-axis direction of the coordinate system of the liquid crystal grating, the method also includes:
  • the total number of strip electrodes corresponding to the light-transmitting area is determined, and the backlight brightness emitted by the backlight source is adjusted based on the total number of strip electrodes.
  • the backlight brightness has a negative correlation with the total number of strip electrodes.
  • Figure 4 is a schematic structural diagram of a display device provided by an embodiment of the present disclosure.
  • Figure 5 is another flow chart for determining the real-time position of the pupil center provided by an embodiment of the present disclosure
  • Figure 6 is a flow chart for establishing a three-dimensional face model provided by an embodiment of the present disclosure
  • Figure 8 is a schematic diagram of establishing a coordinate system for a liquid crystal grating provided by an embodiment of the present disclosure
  • Figure 10 is a schematic diagram of moving the position of the light-transmitting zone to the left according to the real-time position of the pupil center provided by an embodiment of the present disclosure
  • Figure 11 is a schematic diagram of reducing the size of the light-transmitting area according to the real-time position of the pupil center provided by an embodiment of the present disclosure
  • the related naked-eye 3D display technology is mainly a viewpoint-based stereoscopic display method, and its light splitting devices mainly include light barrier type and cylindrical lens type.
  • the light-barrier-type light-splitting device may be a liquid crystal grating.
  • the liquid crystal grating includes a first substrate and a second substrate that are opposite to each other, and is located between the first substrate and the second substrate.
  • the first substrate has a plurality of strip electrodes on one side facing the liquid crystal layer
  • the second substrate has a planar electrode on the side facing the liquid crystal layer.
  • the real-time position of the pupil center is determined, and based on the real-time position and the pre-established correspondence between the position of the pupil center and the position of the light-transmitting area in the liquid crystal grating, the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position is determined;
  • the liquid crystal grating is then controlled to transmit light only at the position of the light-transmitting area corresponding to the real-time position, so that the user can view 3D images at different distances and from different viewing angles, lifting the restriction that naked-eye 3D devices can only be viewed at a fixed viewing angle and fixed distance, which is conducive to the promotion and application of naked-eye 3D display technology based on liquid crystal gratings.
  • the above-mentioned step S101, determining the real-time position of the pupil center, can be implemented as follows: collecting the user's facial image in real time, and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image, to achieve accurate positioning of the pupil center.
  • Visible light images usually have higher spatial resolution (i.e., the smallest discernible details in the image), can present more detailed information, and have good contrast between light and dark. Therefore, visible light images are suitable for human visual perception.
  • imaging Component C such as a visible light (RGB) camera, collects the user's facial image.
  • the above steps collect the user's facial image in real time, and determine the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image, which can be implemented through the steps shown in Figure 3:
  • S1011' use an infrared camera to obtain the user's facial image in real time
  • a plurality of facial feature points are obtained from the facial image, and the multiple facial feature points are mapped to the same positions on the pre-established three-dimensional face model; optionally, the facial feature points are points that reflect facial features such as the eyebrows, eyes, nose, mouth, and face contour;
  • the rotation vector and translation vector from the coordinate system of the three-dimensional face model to the camera coordinate system can be obtained with a PnP pose-measurement method; the xyz coordinate axes of the face-model coordinate system are rotated by the obtained rotation vector and its origin is translated by the obtained translation vector, so that the coordinate system of the three-dimensional face model coincides with the camera coordinate system;
  • S1016' Convert the two-dimensional coordinates of the pupil center in the image coordinate system to the two-dimensional coordinates of the pupil center in the camera coordinate system.
  • the two-dimensional coordinates of the pupil center in the camera coordinate system, together with the depth coordinate of the pupil center in the camera coordinate system, constitute the three-dimensional coordinates of the pupil center in the camera coordinate system.
  • Each camera has intrinsic parameters (such as the optical center and focal length); combined with the intrinsics, the two-dimensional coordinates (i.e., xy coordinates) of each point of an image captured by the camera in the image coordinate system can be converted into two-dimensional coordinates of the same dimensions (i.e., xy coordinates) in the camera coordinate system.
  • a three-dimensional face model is established, which can be implemented in the following ways:
  • a visible light camera to collect at least one facial image; in some embodiments, multiple frames of facial images can be collected through two visible light cameras;
  • facial feature points such as the tip of the nose, pupils, etc.
  • one of the facial feature points is used as the origin of the coordinate system of the three-dimensional face model to be established, and a PnP pose-measurement method can be used to adjust that origin to coincide with the origin of the camera coordinate system, so that the three-dimensional coordinates of the multiple facial feature points in the camera coordinate system are converted into three-dimensional coordinates in the coordinate system of the three-dimensional face model to be established;
  • the coordinate system of the liquid crystal grating is established with the center of the liquid crystal grating as the origin, the light-emitting direction of the liquid crystal grating (equivalent to the positive Z-axis direction of the camera coordinate system) as the positive Z-axis direction, and the negative X-axis direction of the camera coordinate system as the positive X-axis direction, as shown in Figure 8; the liquid crystal grating is located between the liquid crystal panel and the backlight, D is the optimal viewing distance of the human eye from the liquid crystal panel, and h is the spacing between the liquid crystal panel and the liquid crystal grating at the optimal viewing distance with refraction taken into account;
  • the center of the liquid crystal grating roughly coincides with the center of the liquid crystal panel (that is, it coincides exactly, or lies within the error range caused by alignment, measurement, and similar factors), and the light-emitting direction of the liquid crystal grating is the direction from the liquid crystal grating toward the liquid crystal panel;
  • step S102, determining the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position based on the real-time position and the pre-established correspondence between the position of the pupil center and the position of the light-transmitting area in the liquid crystal grating, may include the following steps, as shown in Figure 9:
  • if the z coordinate equals the preset optimal viewing distance, the light-transmitting area is moved in the X-axis direction, for example such that the overall transmittance of the light-transmitting areas in the liquid crystal grating is 50% and each light-transmitting area has the same size; if not, it can be determined that the human eye has moved back and forth along the Y axis (Figure 11 shows the human eye moving closer to the liquid crystal panel), and the size of the light-transmitting area is adjusted in the X-axis direction of the coordinate system of the liquid crystal grating (Figure 11 shows the light-transmitting area being reduced); the adjusted light-transmitting area partially overlaps the light-transmitting area before adjustment.
  • moving the light-transmitting area in the X-axis direction of the coordinate system of the liquid crystal grating can be implemented in the following manner:
  • every n strip electrodes form one period, and the strip electrodes at corresponding positions in all periods are connected together, so the light transmission of the liquid crystal grating is the same in every period and only the voltages applied to the strip electrodes within one period need to be calculated.
  • the line connecting the coordinate point of the left-eye pupil center in the coordinate system of the liquid crystal grating and the origin of that coordinate system is extended until it intersects the liquid crystal grating at a point; the coordinate x_i of the left endpoint of each strip electrode in the X-axis direction is compared with the coordinate x_open of this point, and if x_(i-1) < x_open ≤ x_i, the strip electrodes from the i-th to the [(i+n/2)-1]-th within one period need to be set to the light-transmitting state.
  • the size of the light-transmitting area is adjusted in the X-axis direction of the coordinate system of the liquid crystal grating, which can be implemented in the following manner, as shown in Figure 12:
  • the position coordinate of the b-th strip electrode in the a-th period is compared with the coordinate range of the transmissible area corresponding to the left-eye pixel and the coordinate range of the transmissible area corresponding to the right-eye pixel; if the position coordinate of the b-th strip electrode in the a-th period lies within both ranges simultaneously, that is, within the intersection of the two ranges, then
  • the area where the b-th strip electrode is located in the a-th period is determined to be the light-transmitting area corresponding to the current pupil center
  • m is the total number of grating periods in the liquid crystal grating
  • a is an integer greater than or equal to 1 and less than or equal to m
  • b is an integer greater than or equal to 1 and less than or equal to n.
  • an embodiment of the disclosure also provides a display device, including a backlight, a liquid crystal panel located on the light emitting side of the backlight, and a liquid crystal grating located between the backlight and the liquid crystal panel.
  • the liquid crystal grating adopts the embodiment of the disclosure.
  • the above-mentioned display device provided by the embodiments of the present disclosure may be any product or component with a display function, such as a mobile phone, tablet computer, television, monitor, notebook computer, digital photo frame, navigator, smart watch, fitness wristband, or personal digital assistant.
  • the display device includes but is not limited to: radio frequency unit, network module, audio output & input unit, sensor, display unit, user input unit, interface unit, memory, processor, power supply and other components.
  • the above structure does not constitute a limitation on the display device provided by the embodiments of the present disclosure; the display device may include more or fewer of the above components, combine certain components, or use a different arrangement of components.
  • embodiments of the present disclosure also provide a display method for the above display device, including the following steps:
  • the liquid crystal grating is controlled to be completely transparent
  • the above driving method provided by the embodiment of the present disclosure is used to control the liquid crystal grating to form alternately arranged light-transmitting areas and light-shielding areas.
  • when in the three-dimensional display mode, the backlight is inevitably blocked by the light-shielding areas in the liquid crystal grating, reducing screen brightness; and when the human eye moves forward (that is, closer to the liquid crystal panel), the light-transmitting areas of the liquid crystal grating shrink and the screen brightness drops further, so to ensure screen brightness the backlight brightness needs to be increased.
  • the total number of strip electrodes corresponding to the light-transmitting area can be determined, and the backlight output can be adjusted based on the total number of strip electrodes.
  • the backlight brightness is negatively correlated with the total number of strip electrodes; that is, the smaller the light-transmitting area and the fewer the strip electrodes, the more the backlight brightness needs to be increased.


Abstract

The driving method of a liquid crystal grating, display device, and display method thereof provided by the present disclosure include: determining a real-time position of a pupil center; determining, based on the real-time position and a pre-established correspondence between positions of the pupil center and positions of light-transmitting areas in the liquid crystal grating, the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position; and driving the liquid crystal grating so that the liquid crystal grating transmits light only at the position of the light-transmitting area corresponding to the real-time position.

Description

Driving method of liquid crystal grating, display device, and display method thereof
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 202210614743.2, filed with the China National Intellectual Property Administration on May 30, 2022 and entitled "Driving Method of Liquid Crystal Grating and Display Device, and Display Method Thereof", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of display technology, and in particular to a driving method of a liquid crystal grating, a display device, and a display method thereof.
Background
With the continuous development of display technology, three-dimensional (3D) display technology has attracted increasing attention. 3D display works as follows: for the same scene, the viewer's left eye and right eye each receive a separate image. Because of the horizontal distance between the two eyes (the interpupillary distance, about 65 mm), the viewing angles of the two eyes differ slightly, so the images observed by the left eye and the right eye also differ slightly (binocular parallax). When the left-eye image and the right-eye image are superimposed and fused in the visual cortex of the brain, a stereoscopic effect is formed.
Summary
The present disclosure provides a driving method of a liquid crystal grating, a display device, and a display method thereof. The specific solutions are as follows:
In one aspect, embodiments of the present disclosure provide a method for driving a liquid crystal grating, including:
determining a real-time position of a pupil center;
determining, based on the real-time position and a pre-established correspondence between positions of the pupil center and positions of light-transmitting areas in the liquid crystal grating, the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position; and
driving the liquid crystal grating so that the liquid crystal grating transmits light only at the position of the light-transmitting area corresponding to the real-time position.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, determining the real-time position of the pupil center specifically includes: collecting the user's facial image in real time, and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, collecting the user's facial image in real time and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image specifically includes:
collecting facial images in real time with visible-light cameras at different positions;
extracting a plurality of first edge points of the iris from the facial images collected simultaneously by the visible-light cameras, and matching the first edge points at the same positions in the facial images;
calculating, by triangulation, the three-dimensional coordinates in the camera coordinate system of the successfully matched first edge points;
taking the three-dimensional coordinates of the center of the space mapped out by the first edge points in the camera coordinate system as the three-dimensional coordinates of the pupil center in the camera coordinate system.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, collecting the user's facial image in real time and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image specifically includes:
acquiring the user's facial image in real time with an infrared camera;
extracting a plurality of first edge points of the iris from the facial image, fitting an ellipse to the plurality of first edge points, and taking the two-dimensional coordinates of the fitted ellipse center in the image coordinate system as the two-dimensional coordinates of the pupil center in the image coordinate system;
obtaining a plurality of facial feature points from the facial image, and mapping the plurality of facial feature points to the same positions on a pre-established three-dimensional face model;
adjusting the coordinate system of the three-dimensional face model to coincide with the camera coordinate system;
taking the mean or mode of the depth coordinates, in the camera coordinate system, of a plurality of second edge points of the human eye in the three-dimensional face model as the depth coordinate of the pupil center in the camera coordinate system;
converting the two-dimensional coordinates of the pupil center in the image coordinate system into two-dimensional coordinates of the same dimensions in the camera coordinate system; the two-dimensional coordinates of the pupil center in the camera coordinate system, together with the depth coordinate of the pupil center in the camera coordinate system, constitute the three-dimensional coordinates of the pupil center in the camera coordinate system.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, establishing the three-dimensional face model specifically includes:
collecting at least one facial image with a visible-light camera;
obtaining a plurality of facial feature points of each facial image;
calculating, by triangulation, the three-dimensional coordinates of the plurality of facial feature points in the camera coordinate system;
taking one of the facial feature points as the origin of the coordinate system of the three-dimensional face model to be established, and adjusting that origin to coincide with the origin of the camera coordinate system, so that the three-dimensional coordinates of the plurality of facial feature points in the camera coordinate system are converted into three-dimensional coordinates in the coordinate system of the three-dimensional face model to be established;
reconstructing, from the three-dimensional coordinates of the plurality of facial feature points in the coordinate system of the three-dimensional face model to be established, the stereoscopic face they represent, thereby establishing the three-dimensional face model.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, establishing the correspondence between the position of the pupil center and the position of the light-transmitting area in the liquid crystal grating specifically includes:
establishing the coordinate system of the liquid crystal grating with the center of the liquid crystal grating as the origin, the light-emitting direction of the liquid crystal grating as the positive Z-axis direction, and the negative X-axis direction of the camera coordinate system as the positive X-axis direction;
determining, in the coordinate system of the liquid crystal grating, the light-transmitting areas at different positions in the liquid crystal grating corresponding to different positions of the pupil center.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, determining the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position, based on the real-time position and the pre-established correspondence between the position of the pupil center and the position of the light-transmitting area in the liquid crystal grating, specifically includes:
taking the negative of the real-time x coordinate of the pupil center in the camera coordinate system as the x coordinate of the pupil center in the coordinate system of the liquid crystal grating, and taking the real-time z coordinate of the pupil center in the camera coordinate system as the z coordinate of the pupil center in the coordinate system of the liquid crystal grating;
judging whether the z coordinate of the pupil center in the coordinate system of the liquid crystal grating equals the preset optimal viewing distance; if so, moving the light-transmitting area along the X-axis direction of the coordinate system of the liquid crystal grating, the moved light-transmitting area having the same size as before the move; if not, adjusting the size of the light-transmitting area along the X-axis direction of the coordinate system of the liquid crystal grating, the position of the adjusted light-transmitting area partially overlapping the position before adjustment.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, moving the light-transmitting area along the X-axis direction of the coordinate system of the liquid crystal grating specifically includes:
detecting the coordinates of the point at which the extension of the line connecting the coordinate point of the pupil center in the coordinate system of the liquid crystal grating and the origin intersects the liquid crystal grating;
comparing the x coordinate of the intersection point with the same-side endpoint coordinates, on the X axis, of the strip electrodes within one period of the liquid crystal grating; if the x coordinate of the intersection point is greater than the endpoint coordinate of the (i-1)-th strip electrode within one period and less than or equal to the endpoint coordinate of the i-th strip electrode, determining the area occupied by the i-th through [(i+n/2)-1]-th strip electrodes in each period as the light-transmitting area corresponding to the current pupil center, where n is the total number of strip electrodes within one period and n is even, and i is an integer greater than or equal to 2 and less than or equal to n/2.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, adjusting the size of the light-transmitting area along the X-axis direction of the coordinate system of the liquid crystal grating specifically includes:
calculating the X-axis position coordinates of each strip electrode across all m periods, the X-axis coordinate range of the transmissible area in the liquid crystal grating corresponding to each left-eye pixel, and the X-axis coordinate range of the transmissible area in the liquid crystal grating corresponding to each right-eye pixel;
comparing the position coordinate of the b-th strip electrode in the a-th period with the coordinate range of the transmissible area corresponding to the left-eye pixels and the coordinate range of the transmissible area corresponding to the right-eye pixels; if the position coordinate of the b-th strip electrode in the a-th period lies within both ranges simultaneously, determining the area where the b-th strip electrode in the a-th period is located as the light-transmitting area corresponding to the current pupil center, where m is the total number of grating periods in the liquid crystal grating, a is an integer greater than or equal to 1 and less than or equal to m, and b is an integer greater than or equal to 1 and less than or equal to n.
In another aspect, embodiments of the present disclosure provide a display device, including a backlight, a liquid crystal panel located on the light-emitting side of the backlight, and a liquid crystal grating located between the backlight and the liquid crystal panel, the liquid crystal grating being driven by the above driving method provided by the embodiments of the present disclosure.
In another aspect, embodiments of the present disclosure provide a display method for the above display device, including:
in a two-dimensional display mode, controlling the liquid crystal grating to be fully transparent;
in a three-dimensional display mode, controlling the liquid crystal grating with the above driving method to form alternately arranged light-transmitting areas and light-shielding areas.
In some embodiments, in the above display method provided by the embodiments of the present disclosure, while the size of the light-transmitting area is being adjusted along the X-axis direction of the coordinate system of the liquid crystal grating, the method also includes:
determining the total number of strip electrodes corresponding to the light-transmitting area, and adjusting the backlight brightness emitted by the backlight based on the total number of strip electrodes, the backlight brightness being negatively correlated with the total number of strip electrodes.
Brief Description of the Drawings
Figure 1 is a flow chart of the driving method of a liquid crystal grating provided by an embodiment of the present disclosure;
Figure 2 is a schematic diagram of collecting a facial image provided by an embodiment of the present disclosure;
Figure 3 is a flow chart of determining the real-time position of the pupil center provided by an embodiment of the present disclosure;
Figure 4 is a schematic structural diagram of a display device provided by an embodiment of the present disclosure;
Figure 5 is another flow chart of determining the real-time position of the pupil center provided by an embodiment of the present disclosure;
Figure 6 is a flow chart of establishing a three-dimensional face model provided by an embodiment of the present disclosure;
Figure 7 is a flow chart of establishing the correspondence between different positions of the pupil center and different positions of the light-transmitting areas in the liquid crystal grating provided by an embodiment of the present disclosure;
Figure 8 is a schematic diagram of establishing the coordinate system of the liquid crystal grating provided by an embodiment of the present disclosure;
Figure 9 is a flow chart of determining the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position of the pupil center provided by an embodiment of the present disclosure;
Figure 10 is a schematic diagram of moving the light-transmitting area to the left according to the real-time position of the pupil center provided by an embodiment of the present disclosure;
Figure 11 is a schematic diagram of reducing the size of the light-transmitting area according to the real-time position of the pupil center provided by an embodiment of the present disclosure;
Figure 12 is a flow chart of adjusting the size of the light-transmitting area provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Note that the sizes and shapes of the figures in the drawings do not reflect true scale and are intended only to illustrate the content of the present disclosure. Throughout, the same or similar reference numerals denote the same or similar elements or elements with the same or similar functions. To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of known functions and known components are omitted.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning understood by a person of ordinary skill in the art to which the present disclosure belongs. The terms "first", "second", and similar words used in the specification and claims of the present disclosure do not denote any order, quantity, or importance, but are used only to distinguish different components. Words such as "include" or "comprise" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Terms such as "inner", "outer", "upper", and "lower" are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
Naked-eye 3D display technology exploits binocular parallax to present realistic stereoscopic images with spatial depth, without any auxiliary equipment (such as 3D glasses). Because the stereoscopic images displayed by naked-eye 3D display devices are vivid and expressive, convey the environment well, and have strong visual impact, the application scenarios of naked-eye 3D display devices are becoming increasingly broad.
Related naked-eye 3D display technology is mainly viewpoint-based stereoscopic display, and its light-splitting devices mainly include the light-barrier type and the cylindrical-lens type. A light-barrier light-splitting device may be a liquid crystal grating. The liquid crystal grating includes a first substrate and a second substrate arranged opposite each other, and a liquid crystal layer between the first substrate and the second substrate; the side of the first substrate facing the liquid crystal layer carries a plurality of strip electrodes, and the side of the second substrate facing the liquid crystal layer carries a planar electrode. By applying voltages to the planar electrode and some of the strip electrodes, light-transmitting areas and light-shielding areas can be formed alternately in the liquid crystal grating. Because the liquid crystal grating is highly compatible with the liquid crystal panel, naked-eye 3D display technology based on liquid crystal gratings is increasingly widely applied, with demand in fields such as entertainment, education, automotive, and medicine. However, naked-eye 3D display based on liquid crystal gratings is restricted to a fixed viewing angle and viewing distance, which severely limits its promotion and application.
To solve the above technical problems in the related art, embodiments of the present disclosure provide a method for driving a liquid crystal grating, as shown in Figure 1, including:
S101: determining a real-time position of a pupil center;
S102: determining, based on the real-time position and a pre-established correspondence between positions of the pupil center and positions of light-transmitting areas in the liquid crystal grating, the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position;
S103: driving the liquid crystal grating so that the liquid crystal grating transmits light only at the position of the light-transmitting area corresponding to the real-time position.
In the above driving method for a liquid crystal grating provided by the embodiments of the present disclosure, the real-time position of the pupil center is determined, and after the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position has been determined based on the real-time position and the pre-established correspondence between positions of the pupil center and positions of light-transmitting areas in the liquid crystal grating, the liquid crystal grating is controlled to transmit light only at that position. The user can thus view 3D images at different distances and from different viewing angles, which lifts the restriction that naked-eye 3D devices can only be viewed from a fixed viewing angle at a fixed distance and facilitates the promotion and application of naked-eye 3D display technology based on liquid crystal gratings.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, the above step S101, determining the real-time position of the pupil center, can be implemented as follows: collecting the user's facial image in real time, and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image, to achieve accurate positioning of the pupil center.
Visible-light images usually have high spatial resolution (i.e., the smallest discernible details in the image), present rich detail, and have good light-dark contrast, so they are well suited to human visual perception. Accordingly, in some embodiments, as shown in Figure 2, the user's facial image can be collected by an imaging component C, such as a visible-light (RGB) camera. In this case, the above steps of collecting the user's facial image in real time and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image can be implemented through the steps shown in Figure 3:
S1011: collecting facial images in real time with visible-light cameras at different positions;
For example, as shown in Figure 4, there may be two visible-light cameras, symmetrically distributed on either side of the central axis S of the liquid crystal panel P; accordingly, the optical centers O of the two cameras can be symmetric about the central axis S of the liquid crystal panel P;
S1012: extracting a plurality of first edge points of the iris from the facial images collected simultaneously by the visible-light cameras, and matching the first edge points at the same positions in the facial images; when the different facial images all have a first edge point at some same position on the iris, the first edge points of the iris at that position in the different images are considered successfully matched, and all successfully matched first edge points together make up the edge of the iris;
S1013: calculating, by triangulation, the three-dimensional coordinates in the camera coordinate system of the successfully matched first edge points, which amounts to converting the two-dimensional coordinates of the iris's first edge points in the image coordinate system into three-dimensional coordinates in the camera coordinate system; triangulation is the visual-localization method of recovering the 3D position of a point in space from its projections at multiple known camera positions;
S1014: taking the three-dimensional coordinates of the center of the space mapped out by the first edge points in the camera coordinate system as the three-dimensional coordinates of the pupil center in the camera coordinate system. The iris and pupil are generally not strictly concentric circles, but in most cases their centers are very close, so they can be treated approximately as sharing a center, and the pupil center can be found by locating the center of the iris. Therefore, the three-dimensional coordinates of the center of the space mapped out by the first edge points in the camera coordinate system can be used as the three-dimensional coordinates of the pupil center in the camera coordinate system.
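The triangulation and center-taking steps above can be sketched as follows. This is an illustrative simplification, not code from the disclosure: it assumes a rectified stereo pair with parallel optical axes (so triangulation reduces to depth-from-disparity), and all names (`fx`, `baseline`, `triangulate_rectified`, etc.) are invented for the example.

```python
def triangulate_rectified(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Recover the 3D camera-frame point of a matched edge point seen by
    two rectified cameras separated by `baseline` along the x axis."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("matched points must have positive disparity")
    z = fx * baseline / disparity   # depth from disparity
    x = (u_left - cx) * z / fx      # back-project the pixel into the camera frame
    y = (v - cy) * z / fy
    return (x, y, z)

def pupil_center_from_edges(points_3d):
    """Use the centroid of the triangulated iris edge points as the
    pupil-center estimate (the iris center approximates the pupil center)."""
    n = len(points_3d)
    return tuple(sum(p[i] for p in points_3d) / n for i in range(3))
```

For instance, with fx = 1000 px, a 60 mm baseline, and a 60 px disparity, the edge point lands 1 m in front of the cameras.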
Considering that under certain adverse conditions (such as strong light or fog) visible-light images are easily disturbed and an ideal visible-light image may not be obtained, whereas infrared images, which depict an object's thermal radiation, can effectively resist such interference, in some embodiments the imaging component C can also be an infrared (IR) camera that captures the user's facial image. In this case, the above steps of collecting the user's facial image in real time and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image can be implemented through the steps shown in Figure 5:
S1011': acquiring the user's facial image in real time with an infrared camera;
S1012': extracting a plurality of first edge points of the iris from the facial image, fitting an ellipse to the plurality of first edge points, and taking the two-dimensional coordinates of the fitted ellipse center in the image coordinate system as the two-dimensional coordinates of the pupil center in the image coordinate system;
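The fit in S1012' can be illustrated with a least-squares circle fit (the Kåsa method) as a simpler stand-in for a full ellipse fit, which is reasonable when the iris is viewed close to head-on; a real implementation would fit a general conic. The sketch below is dependency-free and is not the disclosure's algorithm.

```python
def fit_circle_center(points):
    """Kåsa least-squares circle fit: linearize x^2 + y^2 = 2a*x + 2b*y + c
    and solve the 3x3 normal equations for (2a, 2b, c); return center (a, b)."""
    A = [[x, y, 1.0] for x, y in points]
    rhs = [x * x + y * y for x, y in points]
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(3)]
           for i in range(3)]
    atb = [sum(A[k][i] * rhs[k] for k in range(len(A))) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    M = [row + [b] for row, b in zip(ata, atb)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    m = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        m[r] = (M[r][3] - sum(M[r][c] * m[c] for c in range(r + 1, 3))) / M[r][r]
    return (m[0] / 2.0, m[1] / 2.0)
```

Feeding it edge points sampled on a circle of center (3, 4) recovers that center; the fitted center then plays the role of the pupil center in the image coordinate system.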
S1013': obtaining a plurality of facial feature points from the facial image, and mapping the plurality of facial feature points to the same positions on a pre-established three-dimensional face model; optionally, the facial feature points are points that reflect facial features such as the eyebrows, eyes, nose, mouth, and face contour;
S1014': adjusting the coordinate system of the three-dimensional face model to coincide with the camera coordinate system; optionally, the rotation vector and translation vector from the coordinate system of the three-dimensional face model to the camera coordinate system can be obtained with a PnP pose-measurement method; rotating the xyz coordinate axes of the face-model coordinate system by the obtained rotation vector and translating its origin by the obtained translation vector makes the coordinate system of the three-dimensional face model coincide with the camera coordinate system;
S1015': taking representative data, such as the mean or mode, of the depth coordinates in the camera coordinate system of a plurality of second edge points of the human eye in the three-dimensional face model as the depth coordinate of the pupil center in the camera coordinate system;
S1016': converting the two-dimensional coordinates of the pupil center in the image coordinate system into two-dimensional coordinates of the pupil center in the camera coordinate system; the two-dimensional coordinates of the pupil center in the camera coordinate system, together with the depth coordinate of the pupil center in the camera coordinate system, constitute the three-dimensional coordinates of the pupil center in the camera coordinate system. Every camera has intrinsic parameters (such as the optical center and focal length); combined with the intrinsics, the two-dimensional coordinates (i.e., xy coordinates) of each point of an image captured by that camera in the image coordinate system can be converted into two-dimensional coordinates of the same dimensions (i.e., xy coordinates) in the camera coordinate system.
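The intrinsics-based conversion in S1016' can be sketched with the standard pinhole model, given the depth from S1015'. The function and parameter names are illustrative, not from the disclosure:

```python
def image_to_camera_xy(u, v, depth_z, fx, fy, cx, cy):
    """Convert pixel coordinates (u, v) into camera-frame x/y at a known
    depth, inverting the pinhole projection u = fx * x / z + cx,
    v = fy * y / z + cy."""
    x = (u - cx) * depth_z / fx
    y = (v - cy) * depth_z / fy
    return (x, y)
```

Combining the returned (x, y) with the depth coordinate yields the pupil center's 3D coordinates in the camera coordinate system.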
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, as shown in Figure 6, the three-dimensional face model can be established as follows:
S601: collecting at least one facial image with a visible-light camera; in some embodiments, multiple frames of facial images can be collected with two visible-light cameras;
S602: obtaining a plurality of facial feature points of each facial image;
S603: calculating, by triangulation, the three-dimensional coordinates of the plurality of facial feature points in the camera coordinate system;
S604: taking one of the facial feature points (such as the tip of the nose or a pupil) as the origin of the coordinate system of the three-dimensional face model to be established, and adjusting that origin, for example with a PnP pose-measurement method, to coincide with the origin of the camera coordinate system, so that the three-dimensional coordinates of the plurality of facial feature points in the camera coordinate system are converted into three-dimensional coordinates in the coordinate system of the three-dimensional face model to be established;
S605: reconstructing, from the three-dimensional coordinates of the plurality of facial feature points in the coordinate system of the three-dimensional face model to be established, the stereoscopic face they represent, thereby establishing the three-dimensional face model.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, as shown in Figure 7, the correspondence between the position of the pupil center and the position of the light-transmitting area in the liquid crystal grating can be established as follows:
S701: establishing the coordinate system of the liquid crystal grating with the center of the liquid crystal grating as the origin, the light-emitting direction of the liquid crystal grating (equivalent to the positive Z-axis direction of the camera coordinate system) as the positive Z-axis direction, and the negative X-axis direction of the camera coordinate system as the positive X-axis direction, as shown in Figure 8; the liquid crystal grating is located between the liquid crystal panel and the backlight, D is the optimal viewing distance of the human eye from the liquid crystal panel, and h is the spacing between the liquid crystal panel and the liquid crystal grating at the optimal viewing distance with refraction taken into account; the center of the liquid crystal grating roughly coincides with the center of the liquid crystal panel (i.e., it coincides exactly, or lies within the error range caused by alignment, measurement, and similar factors), and the light-emitting direction of the liquid crystal grating is the direction from the liquid crystal grating toward the liquid crystal panel;
S702: determining, in the coordinate system of the liquid crystal grating, the light-transmitting areas at different positions in the liquid crystal grating corresponding to different positions of the pupil center.
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, step S102, determining the position in the liquid crystal grating of the light-transmitting area corresponding to the real-time position based on the real-time position and the pre-established correspondence between positions of the pupil center and positions of light-transmitting areas in the liquid crystal grating, can include the following steps, as shown in Figure 9:
S901: taking the negative of the real-time x coordinate of the pupil center in the camera coordinate system as the x coordinate of the pupil center in the coordinate system of the liquid crystal grating, and taking the real-time z coordinate of the pupil center in the camera coordinate system as the z coordinate of the pupil center in the coordinate system of the liquid crystal grating, thereby converting the coordinates of the pupil center from the camera coordinate system to the coordinate system of the liquid crystal grating;
S902: judging whether the z coordinate of the pupil center in the coordinate system of the liquid crystal grating equals the preset optimal viewing distance D. If so, it can be determined that the human eye has moved left or right along the X axis (Figure 10 shows the human eye moving to the left), and the light-transmitting area needs to be moved along the X-axis direction of the coordinate system of the liquid crystal grating (Figure 10 shows the light-transmitting area being moved to the left), the moved light-transmitting area having the same size as before the move; for example, the overall transmittance of the light-transmitting areas in the liquid crystal grating is 50%, and every light-transmitting area has the same size. If not, it can be determined that the human eye has moved back and forth along the Y axis (Figure 11 shows the human eye moving forward, closer to the liquid crystal panel), and the size of the light-transmitting area is adjusted along the X-axis direction of the coordinate system of the liquid crystal grating (Figure 11 shows the light-transmitting area being reduced), the adjusted light-transmitting area partially overlapping the light-transmitting area before adjustment.
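Steps S901 and S902 can be sketched as a coordinate remap followed by a branch. The function names and the floating-point tolerance below are assumptions made for the example, not part of the disclosure:

```python
def grating_coords_from_camera(x_cam, z_cam):
    """Map the pupil center from the camera coordinate system into the
    grating coordinate system: the grating's X axis is the camera X axis
    negated, and the z coordinate carries over (per the Figure 8 setup)."""
    return (-x_cam, z_cam)

def choose_update(z_grating, optimal_distance_D, tol=1e-6):
    """Decide between shifting the slits (lateral eye motion at the
    optimal distance) and resizing them (eye moved toward/away)."""
    if abs(z_grating - optimal_distance_D) <= tol:
        return "move"    # eye moved along X: translate the light-transmitting areas
    return "resize"      # eye moved along the viewing axis: adjust slit width
```

A driver loop would call `grating_coords_from_camera` on every tracked frame and dispatch to the move or resize routine according to `choose_update`.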
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, shifting the light-transmitting region along the X direction of the coordinate system of the liquid crystal grating may specifically be implemented as follows:
Detect the coordinates of the intersection between the liquid crystal grating and the extension of the line connecting the origin and the coordinate point of the pupil center in the grating coordinate system; compare the x coordinate xopen of this intersection with the same-side (e.g., left or right) endpoint coordinates on the X axis of the strip electrodes within one period of the liquid crystal grating; if xopen is greater than the endpoint coordinate xi-1 of the (i-1)-th strip electrode in a period and less than or equal to the endpoint coordinate xi of the i-th strip electrode, determine the region occupied by the i-th to [(i+n/2)-1]-th strip electrodes in every period as the light-transmitting region corresponding to the current pupil center, where n is the total number of strip electrodes in one period, n is an even number, and i is an integer greater than or equal to 2 and less than or equal to n/2.
Among the plurality of strip electrodes in the liquid crystal grating, every n strip electrodes form one period, and the strip electrodes at corresponding positions of all periods are connected together; the light-transmission state of the liquid crystal grating is therefore identical in every period, and only the voltage application within a single period needs to be computed. Taking the left-eye pupil center as an example, the extension of the line connecting the left-eye pupil-center coordinate point in the grating coordinate system and the origin of the grating coordinate system intersects the liquid crystal grating at one point; comparing the left-endpoint coordinate xi of each strip electrode in the X direction with the coordinate xopen of this point, if xi-1 < xopen ≤ xi, the positions of the i-th through [(i+n/2)-1]-th strip electrodes within one period are to be set to the light-transmitting state.
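The per-period electrode selection described above can be sketched as follows; the electrode count and endpoint coordinates are illustrative assumptions, not values from this disclosure:

```python
def transparent_electrodes(x_open, endpoints, n):
    """Given the x coordinate x_open of the intersection point and the
    sorted same-side endpoint coordinates endpoints[0..n-1] of the n
    strip electrodes in one period, return the 1-based indices of the
    n/2 electrodes to set to the light-transmitting state: find i with
    x_{i-1} < x_open <= x_i, then select electrodes i .. i + n/2 - 1."""
    for i in range(2, len(endpoints) + 1):  # i is 1-based, starts at 2
        if endpoints[i - 2] < x_open <= endpoints[i - 1]:
            return list(range(i, i + n // 2))
    raise ValueError("x_open falls outside the electrode period")

# Hypothetical period: n = 8 electrodes with endpoints at 0..7 (arbitrary units)
ends = list(range(8))
print(transparent_electrodes(2.5, ends, 8))  # → [4, 5, 6, 7]
```

Because corresponding electrodes of all periods are wired together, computing this index list for one period fixes the drive state of the whole grating.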
In some embodiments, in the above driving method provided by the embodiments of the present disclosure, adjusting the size of the light-transmitting region along the X direction of the coordinate system of the liquid crystal grating may specifically be implemented as follows, as shown in Fig. 12:
Compute the position coordinate on the X axis of every strip electrode within all m periods, the coordinate range on the X axis of the transmissible region in the liquid crystal grating corresponding to each left-eye pixel, and the coordinate range on the X axis of the transmissible region in the liquid crystal grating corresponding to each right-eye pixel;
Compare the position coordinate of the b-th strip electrode in the a-th period with the coordinate range of the transmissible region corresponding to the left-eye pixels and the coordinate range of the transmissible region corresponding to the right-eye pixels; if the position coordinate of the b-th strip electrode in the a-th period lies within both ranges simultaneously, i.e., within the intersection of the coordinate range of the transmissible region corresponding to the left-eye pixels and that corresponding to the right-eye pixels, determine the region occupied by the b-th strip electrode in the a-th period as the light-transmitting region corresponding to the current pupil center, where m is the total number of grating periods in the liquid crystal grating, a is an integer greater than or equal to 1 and less than or equal to m, and b is an integer greater than or equal to 1 and less than or equal to n.
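The per-electrode test above is an interval-intersection check; a minimal sketch follows, with hypothetical coordinate ranges that are not values from this disclosure:

```python
def in_both_ranges(x_elec, left_range, right_range):
    """True if a strip electrode at position x_elec lies inside the
    intersection of the left-eye and right-eye transmissible coordinate
    ranges (each given as an inclusive (lo, hi) pair)."""
    lo = max(left_range[0], right_range[0])
    hi = min(left_range[1], right_range[1])
    return lo <= x_elec <= hi

# Hypothetical ranges (arbitrary units): left-eye 1.0..2.0, right-eye 1.5..2.5
print(in_both_ranges(1.8, (1.0, 2.0), (1.5, 2.5)))  # → True  (transmitting)
print(in_both_ranges(1.2, (1.0, 2.0), (1.5, 2.5)))  # → False (shielding)
```

Running this test over all m periods and n electrodes per period yields the (typically narrower) aperture used when the viewer is off the optimal distance.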
Based on the same inventive concept, embodiments of the present disclosure further provide a display device, including a backlight, a liquid crystal panel on the light-exit side of the backlight, and a liquid crystal grating between the backlight and the liquid crystal panel, the liquid crystal grating being driven by the driving method provided by the embodiments of the present disclosure. Since the principle by which the display device solves the problem is similar to that of the above driving method, the implementation of the display device may refer to the implementation of the above driving method, and repeated description is omitted.
In some embodiments, the above display device provided by the embodiments of the present disclosure may be any product or component with a display function, such as a mobile phone, tablet computer, television, monitor, notebook computer, digital photo frame, navigation device, smart watch, fitness wristband, or personal digital assistant. The display device includes, but is not limited to, components such as a radio-frequency unit, a network module, audio output and input units, sensors, a display unit, a user input unit, an interface unit, a memory, a processor, and a power supply. In addition, those skilled in the art will understand that the above structure does not limit the display device provided by the embodiments of the present disclosure; in other words, the display device may include more or fewer of the above components, combine certain components, or adopt a different arrangement of components.
Based on the same inventive concept, embodiments of the present disclosure further provide a display method for the above display device, including the following steps:
in a two-dimensional display mode, controlling the liquid crystal grating to be fully light-transmitting;
in a three-dimensional display mode, controlling the liquid crystal grating to form alternately arranged light-transmitting regions and light-shielding regions by the driving method provided by the embodiments of the present disclosure.
In some embodiments, in the three-dimensional display mode, the backlight is inevitably blocked by the light-shielding regions of the liquid crystal grating, which lowers the screen brightness; and when the human eye moves forward (i.e., toward the liquid crystal panel), the light-transmitting regions of the grating shrink and the screen brightness drops further. Therefore, to maintain the screen brightness, the backlight brightness needs to be increased. Optionally, in the present disclosure, while adjusting the size of the light-transmitting region along the X direction of the grating coordinate system, the total number of strip electrodes corresponding to the light-transmitting region may be determined, and the brightness of the backlight may be adjusted based on this total number, the backlight brightness being negatively correlated with the total number of strip electrodes: the smaller the light-transmitting region and the fewer the strip electrodes, the more the backlight brightness needs to be increased.
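One way to realize the negative correlation described above is inverse-proportional compensation, so that the product of aperture fraction and backlight level stays constant; the electrode counts and base level below are illustrative assumptions, not values from this disclosure:

```python
def backlight_level(n_open, n_total, base_level):
    """Scale the backlight so perceived brightness stays roughly constant:
    the level is inversely proportional to the fraction of electrodes
    left in the light-transmitting state (fewer open electrodes, i.e. a
    smaller aperture, demands a brighter backlight)."""
    if not 0 < n_open <= n_total:
        raise ValueError("invalid electrode count")
    return base_level * n_total / n_open

# Hypothetical: 8 electrodes per period, aperture shrunk from 4 to 2 electrodes
print(backlight_level(4, 8, 100.0))  # → 200.0
print(backlight_level(2, 8, 100.0))  # → 400.0
```

A real device would clamp the result to the backlight's maximum output; the disclosure only requires that brightness rise as the electrode count falls, not this exact law.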
Obviously, those skilled in the art can make various changes and modifications to the embodiments of the present disclosure without departing from the spirit and scope of the embodiments of the present disclosure. Accordingly, if these modifications and variations fall within the scope of the claims of the present disclosure and their technical equivalents, the present disclosure is intended to encompass them as well.

Claims (12)

  1. A driving method of a liquid crystal grating, comprising:
    determining a real-time position of a pupil center;
    determining, according to the real-time position and a pre-established correspondence between pupil-center positions and positions of light-transmitting regions in the liquid crystal grating, a position in the liquid crystal grating of the light-transmitting region corresponding to the real-time position;
    driving the liquid crystal grating so that the liquid crystal grating transmits light only at the position of the light-transmitting region corresponding to the real-time position.
  2. The driving method according to claim 1, wherein determining the real-time position of the pupil center specifically comprises: capturing a facial image of a user in real time, and determining three-dimensional coordinates of the pupil center in a camera coordinate system based on the facial image.
  3. The driving method according to claim 2, wherein capturing the facial image of the user in real time and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image specifically comprises:
    capturing facial images in real time with visible-light cameras at different positions;
    extracting a plurality of first edge points of the iris in the facial images captured simultaneously by the visible-light cameras, and matching the first edge points at the same positions across the facial images;
    computing three-dimensional coordinates in the camera coordinate system of the successfully matched first edge points by triangulation;
    taking the three-dimensional coordinates of the center of the space mapped by the first edge points in the camera coordinate system as the three-dimensional coordinates of the pupil center in the camera coordinate system.
  4. The driving method according to claim 2, wherein capturing the facial image of the user in real time and determining the three-dimensional coordinates of the pupil center in the camera coordinate system based on the facial image specifically comprises:
    acquiring the facial image of the user in real time with an infrared camera;
    extracting a plurality of first edge points of the iris in the facial image, fitting an ellipse to the plurality of first edge points, and taking two-dimensional coordinates of the center of the fitted ellipse in an image coordinate system as two-dimensional coordinates of the pupil center in the image coordinate system;
    obtaining a plurality of facial feature points in the facial image, and mapping the plurality of facial feature points to the same positions on a pre-built three-dimensional face model;
    adjusting a coordinate system of the three-dimensional face model until it coincides with the camera coordinate system;
    taking the mean or mode of depth coordinates in the camera coordinate system of a plurality of second edge points of the human eye in the three-dimensional face model as a depth coordinate of the pupil center in the camera coordinate system;
    converting the two-dimensional coordinates of the pupil center in the image coordinate system into two-dimensional coordinates of the same dimensions of the pupil center in the camera coordinate system, the two-dimensional coordinates of the pupil center in the camera coordinate system and the depth coordinate of the pupil center in the camera coordinate system constituting the three-dimensional coordinates of the pupil center in the camera coordinate system.
  5. The driving method according to claim 3, wherein building the three-dimensional face model specifically comprises:
    capturing at least one facial image with a visible-light camera;
    obtaining a plurality of facial feature points of each facial image;
    computing three-dimensional coordinates of the plurality of facial feature points in the camera coordinate system by triangulation;
    taking one of the facial feature points as an origin of a coordinate system of the three-dimensional face model to be built, and adjusting the origin of the coordinate system of the three-dimensional face model to be built to coincide with an origin of the camera coordinate system, so that the three-dimensional coordinates of the plurality of facial feature points in the camera coordinate system are converted into three-dimensional coordinates in the coordinate system of the three-dimensional face model to be built;
    reconstructing, from the three-dimensional coordinates of the plurality of facial feature points in the coordinate system of the three-dimensional face model to be built, the stereoscopic face characterized by the plurality of facial feature points, thereby completing the building of the three-dimensional face model.
  6. The driving method according to any one of claims 2 to 5, wherein establishing the correspondence between pupil-center positions and positions of light-transmitting regions in the liquid crystal grating specifically comprises:
    establishing a coordinate system of the liquid crystal grating with a center of the liquid crystal grating as a coordinate origin, a light-exit direction of the liquid crystal grating as a positive Z direction, and a negative X direction of the camera coordinate system as a positive X direction;
    determining, in the coordinate system of the liquid crystal grating, light-transmitting regions at different positions in the liquid crystal grating corresponding to different positions of the pupil center.
  7. The driving method according to claim 6, wherein determining, according to the real-time position and the pre-established correspondence between pupil-center positions and positions of light-transmitting regions in the liquid crystal grating, the position in the liquid crystal grating of the light-transmitting region corresponding to the real-time position specifically comprises:
    taking the negative of the real-time x coordinate of the pupil center in the camera coordinate system as an x coordinate of the pupil center in the coordinate system of the liquid crystal grating, and taking the real-time z coordinate of the pupil center in the camera coordinate system as a z coordinate of the pupil center in the coordinate system of the liquid crystal grating;
    judging whether the z coordinate of the pupil center in the coordinate system of the liquid crystal grating equals a preset optimal viewing distance; if so, shifting the light-transmitting region along the X direction of the coordinate system of the liquid crystal grating, the shifted light-transmitting region having the same size as before the shift; if not, adjusting the size of the light-transmitting region along the X direction of the coordinate system of the liquid crystal grating, the adjusted light-transmitting region partially overlapping the light-transmitting region before adjustment.
  8. The driving method according to claim 7, wherein shifting the light-transmitting region along the X direction of the coordinate system of the liquid crystal grating specifically comprises:
    detecting coordinates of an intersection between the liquid crystal grating and an extension of a line connecting the origin and the coordinate point of the pupil center in the coordinate system of the liquid crystal grating;
    comparing the x coordinate of the intersection with same-side endpoint coordinates on the X axis of strip electrodes within one period of the liquid crystal grating; if the x coordinate of the intersection is greater than an endpoint coordinate of the (i-1)-th strip electrode in one period and less than or equal to an endpoint coordinate of the i-th strip electrode, determining the region occupied by the i-th to [(i+n/2)-1]-th strip electrodes in every period as the light-transmitting region corresponding to the current pupil center, where n is the total number of strip electrodes in one period, n is an even number, and i is an integer greater than or equal to 2 and less than or equal to n/2.
  9. The driving method according to claim 7, wherein adjusting the size of the light-transmitting region along the X direction of the coordinate system of the liquid crystal grating specifically comprises:
    computing a position coordinate on the X axis of every strip electrode within all m periods, a coordinate range on the X axis of a transmissible region in the liquid crystal grating corresponding to each left-eye pixel, and a coordinate range on the X axis of a transmissible region in the liquid crystal grating corresponding to each right-eye pixel;
    comparing the position coordinate of the b-th strip electrode in the a-th period with the coordinate range of the transmissible region corresponding to the left-eye pixels and the coordinate range of the transmissible region corresponding to the right-eye pixels; if the position coordinate of the b-th strip electrode in the a-th period lies within both coordinate ranges simultaneously, determining the region occupied by the b-th strip electrode in the a-th period as the light-transmitting region corresponding to the current pupil center, where m is the total number of grating periods in the liquid crystal grating, a is an integer greater than or equal to 1 and less than or equal to m, and b is an integer greater than or equal to 1 and less than or equal to n.
  10. A display device, comprising a backlight, a liquid crystal panel on a light-exit side of the backlight, and a liquid crystal grating between the backlight and the liquid crystal panel, the liquid crystal grating being driven by the driving method according to any one of claims 1 to 9.
  11. A display method of the display device according to claim 10, comprising:
    in a two-dimensional display mode, controlling the liquid crystal grating to be fully light-transmitting;
    in a three-dimensional display mode, controlling the liquid crystal grating to form alternately arranged light-transmitting regions and light-shielding regions by the driving method according to any one of claims 1 to 9.
  12. The display method according to claim 11, further comprising, while adjusting the size of the light-transmitting region along the X direction of the coordinate system of the liquid crystal grating:
    determining a total number of strip electrodes corresponding to the light-transmitting region, and adjusting, based on the total number of strip electrodes, a brightness of backlight emitted by the backlight, the backlight brightness being negatively correlated with the total number of strip electrodes.
PCT/CN2023/091502 2022-05-30 2023-04-28 Driving method of liquid crystal grating, display device, and display method thereof WO2023231674A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210614743.2A CN114898440A (zh) 2022-05-30 2022-05-30 Driving method of liquid crystal grating, display device, and display method thereof
CN202210614743.2 2022-05-30

Publications (2)

Publication Number Publication Date
WO2023231674A1 true WO2023231674A1 (zh) 2023-12-07
WO2023231674A9 WO2023231674A9 (zh) 2024-01-11

Family

ID=82726923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/091502 WO2023231674A1 (zh) 2022-05-30 2023-04-28 Driving method of liquid crystal grating, display device, and display method thereof

Country Status (2)

Country Link
CN (1) CN114898440A (zh)
WO (1) WO2023231674A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898440A (zh) * 2022-05-30 2022-08-12 京东方科技集团股份有限公司 Driving method of liquid crystal grating, display device, and display method thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130027331A (ko) * 2011-09-07 2013-03-15 이영우 Binocular eye-position sensor and stereoscopic image display device using a fan-shaped modified lenticular lens and a liquid crystal barrier
CN106056092A (zh) * 2016-06-08 2016-10-26 华南理工大学 Iris- and pupil-based gaze estimation method for head-mounted devices
CN106918956A (zh) * 2017-05-12 2017-07-04 京东方科技集团股份有限公司 Liquid crystal grating, 3D display device and driving method thereof
CN107515474A (zh) * 2017-09-22 2017-12-26 宁波维真显示科技股份有限公司 Autostereoscopic display method and apparatus, and stereoscopic display device
CN107529055A (zh) * 2017-08-24 2017-12-29 歌尔股份有限公司 Display screen, head-mounted display device, and display control method and apparatus thereof
CN114898440A (zh) * 2022-05-30 2022-08-12 京东方科技集团股份有限公司 Driving method of liquid crystal grating, display device, and display method thereof


Also Published As

Publication number Publication date
WO2023231674A9 (zh) 2024-01-11
CN114898440A (zh) 2022-08-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23814860

Country of ref document: EP

Kind code of ref document: A1