WO2014000370A1 - Depth of field holding device, 3D display system and display method - Google Patents

Depth of field holding device, 3D display system and display method

Info

Publication number
WO2014000370A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
field
video signal
right eye
viewer
Prior art date
Application number
PCT/CN2012/084852
Other languages
English (en)
French (fr)
Inventor
陈炎顺
董友梅
武延兵
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to US13/985,020 priority Critical patent/US9307228B2/en
Publication of WO2014000370A1 publication Critical patent/WO2014000370A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/373Image reproducers using viewer tracking for tracking forward-backward translational head movements, i.e. longitudinal movements

Definitions

  • Embodiments of the present invention relate to a depth of field holding device, a 3D display system, and a display method.
  • 3D display has become a major trend in the display industry.
  • The basic principle of current 3D display is that "parallax produces stereoscopy": the viewer's left eye sees only the left-eye image, the right eye sees only the right-eye image, and the left and right eye images form a stereo image pair with parallax.
  • the human brain will combine the two images to produce a 3D effect.
  • The way the human eye judges the depth of field of displayed content can be explained using FIG. 1.
  • The two circles spaced S apart at the viewer's position represent the viewer's left and right eyes, respectively, where S is the pupil distance.
  • the two squares at the position of the display screen represent the left and right eye views of an object, which are respectively seen by the viewer's right and left eyes.
  • Through this stereoscopic imaging, the viewer's brain perceives the object as actually located at a distance L from the screen surface; this distance L is the depth of field, and D is the viewer's viewing distance. From FIG. 1 the following formula is easily obtained: M / S = L / (D − L).
  • In existing 3D displays, for the same picture the spacing M of the stereo image pair on the display screen is fixed, so when the viewer's position changes (i.e., D changes), the depth of field changes with it (i.e., L changes with D).
  • As shown in FIG. 2, S is the pupil distance and remains unchanged; when M stays the same and the viewer's viewing distance changes from D to D′, the depth of field L becomes L′.
  • In fact, in order to simulate a more realistic picture, it is desirable that the depth of field remain unchanged.
  • Embodiments of the present invention provide a depth of field holding device, a 3D display system, and a display method such that, in the case of a viewer moving, the depth of field of the 3D picture seen by the viewer is kept constant.
  • In one aspect, an embodiment of the present invention provides a depth of field holding device, including: a 3D video signal input device; a position measuring device; an image processing device; and a 3D video signal output device, wherein the 3D video signal input device, the position measuring device, and the 3D video signal output device are all connected to the image processing device.
  • The 3D video signal input device transmits the acquired 3D video signal to the image processing device.
  • The position measuring device transmits the detected position information of the viewer and the display screen to the image processing device.
  • The image processing device adjusts, according to the position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged, and transmits the adjusted 3D video signal to the 3D video signal output device.
  • the image processing apparatus includes:
  • The left and right eye view adjustment module adjusts, according to the position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged for the same viewer, and transmits the adjusted left and right eye views to the 3D video signal generation module;
  • the 3D video signal generating module generates a new 3D video signal according to the adjusted left and right eye views, and transmits the new 3D video signal to the 3D video signal output device.
  • Optionally, the image processing apparatus further includes a left and right eye view extraction module, which receives the 3D video signal and transmits the left and right eye views extracted from the 3D video signal to the left and right eye view adjustment module.
  • Optionally, the left and right eye view extraction module includes:
  • a sampling sub-module, which samples one frame of data in the 3D video signal and sends the frame of data to a splitting sub-module; and
  • the splitting sub-module, which transmits the left and right eye views split from the frame of data to the left and right eye view adjustment module.
  • the depth of field holding device further includes: a depth of field acquiring device and a depth of field memory, the depth of field acquiring device is connected to the position determining device, the image processing device, and a depth of field memory, and the depth of field memory is connected to the image processing device
  • The depth of field acquiring device receives the initial position information of the viewer detected by the position measuring device and the initial spacing of the object points in the left and right eye views of the 3D video signal sent by the image processing device, calculates the depth of field, and sends the depth of field to the depth of field memory for storage.
  • Optionally, the depth of field holding device further includes: a depth of field acquiring device and a depth of field memory, wherein the depth of field acquiring device is connected to the depth of field memory and the depth of field memory is connected to the image processing device; the depth of field acquiring device sends a received initial depth of field value to the depth of field memory for storage.
  • the position determining device comprises: a camera, an infrared sensor or a device for manually inputting position information.
  • the embodiment of the present invention further provides a 3D display system, comprising: a display device and the depth of field holding device according to any one of the above, wherein the 3D video signal output device of the depth of field holding device is connected to the display device, The adjusted 3D video signal is transmitted to the display device display.
  • The display device includes: a shutter-glasses type, pattern-retarder type (flicker-free 3D), parallax-barrier type, lenticular-lens type or directional-backlight type 3D display device.
  • Embodiments of the present invention also provide a 3D display method implemented by the above-described depth of field holding device, comprising the following steps:
  • S1 real-time collecting position information between the viewer and the display screen according to the position change of the viewer, and acquiring left and right eye views in the 3D video signal;
  • S2 calculating, according to the location information, a distance of each object point when the left and right views are displayed on the display screen, so that the depth of field remains unchanged with respect to the same viewer;
  • step S3 Generate a new 3D video signal according to the pitch and left and right eye views calculated in step S2 and display it on the display screen.
  • the device in step S1 collects position information between the viewer and the display screen through a camera, an infrared sensor or a device that manually inputs position information.
  • If the left and right eye views in the 3D video signal are transmitted as a single picture, the step of acquiring the left and right eye views in the 3D video signal in step S1 includes: sampling one frame of data in the 3D video signal, and splitting the left and right eye views from the frame of data.
  • The calculation formula in step S2 is as follows: M = S·L / (D − L),
  • where M is the spacing of the object points when the left and right views are displayed on the display screen, S is the viewer's pupil distance,
  • D is the distance between the viewer and the display screen, and L is the depth of field.
  • Optionally, before step S1, the 3D display method further includes: calculating the depth of field from the initial value of the spacing of the object points when the left and right views are displayed on the display screen and the initial value of the distance between the viewer and the display screen, according to the formula L = M·D / (S + M).
  • Optionally, before step S1, the 3D display method further includes: acquiring an initial depth of field set by the user.
  • Figure 1 is a schematic diagram of a viewer's perceived 3D display
  • Figure 2 is a schematic diagram of the respective 3D display before and after the viewer moves;
  • FIG. 3 is a schematic structural view of a depth of field holding device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural view of another depth of field holding device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural view of still another depth of field holding device according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural view of a 3D display system according to an embodiment of the present invention.
  • the structure of the depth of field holding device of the first embodiment of the present invention is as shown in FIG. 3, and includes: a 3D video signal input device 1, a position measuring device 2, an image processing device 3, and a 3D video signal output device 4.
  • the 3D video signal input device 1, the position measuring device 2, and the 3D video signal output device 4 are all connected to the image processing device 3.
  • the 3D video signal input device 1 transmits the acquired 3D video signal to the image processing device 3.
  • the 3D video signal input device 1 may be various types of transmission interfaces capable of transmitting 3D video signals.
  • the position measuring device 2 transmits the detected position information of the viewer and the display screen to the image processing device 3.
  • the position determining device 2 includes, but is not limited to: a camera, an infrared sensor, or a device for manually inputting position information, wherein the position information may be coordinates of the viewer and the display screen in the current spatial coordinate system, or Is the distance between the viewer and the display screen.
  • For example, the camera captures the position coordinates of the viewer and the display screen in the spatial coordinate system and transmits the coordinate data to the image processing device 3, or an arithmetic module in the camera calculates the distance between the viewer and the display screen from the coordinates of the display screen in the spatial coordinate system and the position coordinates, and transmits the distance data to the image processing device 3.
  • the image processing device 3 adjusts the pitch of each object point in the left and right eye views in the 3D video signal based on the above position information so that the depth of field remains unchanged with respect to the same viewer, and transmits the adjusted 3D video signal to the 3D video signal output device 4.
  • The image processing device 3 may be an embedded chip system built from a Micro Control Unit (MCU), a Field-Programmable Gate Array (FPGA), a permanent-magnet linear contactless displacement sensor (PLCD) or the like.
  • the 3D video signal usually includes left and right eye views (left eye view and right eye view) and the initial spacing of the objects when they are displayed.
  • The image processing apparatus 3 specifically includes: a left and right eye view adjustment module 32 and a 3D video signal generation module 33.
  • The left and right eye view adjustment module 32 adjusts the spacing of each object point in the left and right eye views of the 3D video signal according to the position information to keep the depth of field unchanged, and transmits the adjusted left and right eye views to the 3D video signal generation module 33. If the position information is the position coordinates of the viewer, the module first calculates the distance between the viewer and the display screen from the coordinates of the display screen in the spatial coordinate system and the position coordinates. The formula for adjusting the spacing M of each object point in the left and right eye views of the 3D video signal is as follows: M = S·L / (D − L),
  • where S is the pupil distance and remains unchanged, L is the depth of field and also remains unchanged,
  • and D is the distance between the viewer and the display screen; when D changes, M is calculated from the above formula.
  • The left and right eye views are adjusted according to the obtained value of M, and the adjusted left and right eye views are transmitted to the 3D video signal generating module 33.
  • the 3D video signal generating module 33 generates a new 3D video signal based on the adjusted left and right eye views.
  • the depth of field holding device of the embodiment further includes: a left and right eye view extraction module 31.
  • the left and right eye view extraction module 31 receives the 3D video signal input by the 3D video signal input device, and transmits the left and right eye views extracted from the 3D video signal to the left and right eye view adjustment module 32.
  • The left and right eye view extraction module 31 includes a sampling sub-module and a splitting sub-module: the sampling sub-module samples one frame of data in the 3D video signal and sends the frame of data to the splitting sub-module, and the splitting sub-module transmits the left and right eye views split from that frame of data to the left and right eye view adjustment module 32.
  • The pupil distance S and the depth of field L are values determined in advance, acquired beforehand and stored in the memory of the chip system constituting the image processing apparatus 3.
  • the depth of field holding device of the present embodiment further includes: a depth of field obtaining device 5 and a depth of field memory 6.
  • The depth of field acquisition device 5 is connected to the depth of field memory 6 and sends the acquired initial depth of field value to the depth of field memory 6 for storage.
  • the depth of field memory 6 is connected to the left and right eye view adjustment module 32, and the left and right eye view adjustment module 32 obtains the initial depth of field from the depth of field memory 6.
  • the depth of field acquiring device 5 acquires the initial depth of field in the following two ways.
  • In the first way, the depth of field acquisition device 5 is a module having an arithmetic function. As shown in FIGS. 3 and 4, the depth of field acquisition device 5 is connected to the position measuring device 2 and to the 3D video signal input device 1 or the left and right eye view extraction module 31;
  • if the left-eye view and the right-eye view in the 3D video signal are transmitted as a single picture, the depth of field acquisition device 5 is connected to the left and right eye view extraction module 31 (and obtains the initial spacing of the object points in the left and right eye views of the 3D video signal from the extraction module 31).
  • The depth of field acquisition device 5 receives the initial spacing of the object points in the left and right eye views of the 3D video signal and the initial position information of the viewer detected by the position measuring device 2, calculates the initial depth of field according to the formula L = M·D / (S + M), and sends the initial depth of field to the depth of field memory for storage.
  • In the second way, the depth of field acquisition device 5 is an input device, such as a keyboard. As shown in FIG. 5, the depth of field acquisition device 5 is connected to the depth of field memory 6 and sends the initial depth of field value input by the user to the depth of field memory 6 for storage. This input-type depth of field acquisition device 5 makes it convenient for the user to enter a depth of field suited to his or her viewing, giving a good user experience.
  • The depth of field holding device of this embodiment can, when the viewer moves, adjust in real time the spacing of the object points in the left and right eye views on the display screen so that the depth of field remains unchanged, achieving a better viewing effect.
  • Embodiment 2 of the present invention provides a 3D display system, as shown in Fig. 6, comprising: a 3D display device 7 and a depth of field holding device in Embodiment 1.
  • the 3D video signal generation module 33 in the depth of field holding device transmits a new 3D video signal to the 3D display device 7 through the 3D video signal output device 4 to ensure that the 3D display device 7 maintains the depth of field while being displayed.
  • The 3D display device 7 includes: a shutter-glasses 3D, pattern-retarder 3D, parallax-barrier 3D, lenticular 3D or directional-backlight 3D display device.
  • the 3D display device 7 includes a display panel and a phase difference plate, a parallax barrier, a cylindrical lens grating, and the like.
  • the display panel may be a liquid crystal display panel, an organic electroluminescence display panel, a plasma display panel, an electronic ink display panel, or the like.
  • Embodiment 3 of the present invention provides a 3D display method, including:
  • Step S701 Collect position information between the viewer and the display screen in real time according to the position change of the viewer, and acquire left and right eye views in the 3D video signal.
  • A position measuring device such as a camera, an infrared sensor, or a device for manually inputting position information is used to collect the position information between the viewer and the display screen in real time.
  • the location information may be the coordinates of the viewer and the display screen in the current spatial coordinate system, or the distance between the viewer and the display screen.
  • the camera collects the position coordinates of the viewer and the display screen in the space coordinate system.
  • the operation module in the camera calculates the distance between the viewer and the display screen according to the coordinates of the display screen in the space coordinate system and the position coordinates.
  • The 3D video signal usually includes the left and right eye views (the left eye view and the right eye view) and the initial spacing of the object points when the two are displayed. If the left and right eye views of the 3D video signal are transmitted as a single picture, one frame of data in the 3D video signal is sampled, and the left and right eye views are split from that frame of data.
  • Step S702 Calculate the distance between the object points when the left and right eye views are displayed on the display screen according to the position information and the depth of field information, so that the depth of field remains unchanged with respect to the same viewer. That is, according to the distance between the viewer and the display screen after the movement of the viewer, the distance between the object points when the left and right views are displayed on the display screen is calculated by the following formula.
  • M = S·L / (D − L), where M is the spacing of the object points when the left and right views are displayed on the display screen, S is the viewer's pupil distance, D is the distance between the viewer and the display screen, and L is the depth of field.
  • Step S703 generating a new 3D video signal according to the calculated pitch and left and right eye views and displaying on the display screen.
  • Before step S701, the method further includes: calculating the depth of field from the initial value of the spacing of the object points when the left and right views are displayed on the display screen and the initial value of the distance between the viewer and the display screen, using the formula L = M·D / (S + M);
  • or the initial depth of field is determined from an initial depth of field value set by the user before step S701.
  • the distance between the object points in the left and right eye views on the display screen can be adjusted in real time when the viewer moves, so that the depth of field remains unchanged, and a better viewing effect is achieved.

Abstract

Embodiments of the present invention disclose a depth of field holding device, a 3D display system and a display method. The depth of field holding device includes: a 3D video signal input device; a position measuring device; an image processing device; and a 3D video signal output device, wherein the 3D video signal input device, the position measuring device and the 3D video signal output device are all connected to the image processing device; the 3D video signal input device transmits the acquired 3D video signal to the image processing device; the position measuring device transmits the detected position information of the viewer and the display screen to the image processing device; and the image processing device adjusts, according to the position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged, and transmits the adjusted 3D video signal to the 3D video signal output device.

Description

Depth of Field Holding Device, 3D Display System and Display Method

Technical Field
Embodiments of the present invention relate to a depth of field holding device, a 3D display system and a display method.

Background Art
3D display has become a major trend in the display industry. The basic principle of current 3D display is that "parallax produces stereoscopy": the viewer's left eye sees only the left-eye image, the right eye sees only the right-eye image, and the left and right eye images form a stereo image pair with parallax. The human brain fuses the two images to produce a 3D effect.
The way the human eye judges the depth of field of displayed content can be explained with FIG. 1. The two circles spaced S apart at the viewer's position represent the viewer's left and right eyes, where S is the pupil distance. The two squares at the plane of the display screen represent the left and right eye views of an object, seen by the viewer's right and left eyes respectively. Through this stereoscopic imaging, the viewer's brain perceives the object as actually located at a distance L from the screen surface; this distance L is the depth of field, and D is the viewer's viewing distance. From FIG. 1 the following formula is easily obtained:

    M / S = L / (D − L)

In existing 3D displays, for the same picture the spacing M of the stereo image pair on the display screen is fixed, so when the viewer's position changes (i.e., D changes), the depth of field changes with it (i.e., L changes with D). As shown in FIG. 2 (the left part shows the situation before the viewing distance changes, the right part the situation after), S is the pupil distance and remains unchanged; when M is unchanged and the viewer's viewing distance changes from D to D′, the depth of field L becomes L′. In fact, in order to simulate a more realistic picture, it is desirable that the depth of field remain unchanged.
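For illustration, the relationship above can be checked numerically. A minimal Python sketch follows (the pupil distance, on-screen spacing and viewing distances are assumed values, not taken from the disclosure); it shows how the perceived depth L drifts when the on-screen spacing M is held fixed and the viewing distance D changes, which is exactly the problem the embodiments address:

    def perceived_depth(m, s, d):
        """Depth of field L implied by on-screen spacing m, pupil distance s
        and viewing distance d (same length unit), from M/S = L/(D - L)."""
        return m * d / (s + m)

    S = 0.065                     # assumed pupil distance: 65 mm
    M = 0.010                     # assumed fixed on-screen spacing: 10 mm
    for D in (2.0, 3.0, 4.0):     # viewer moves from 2 m to 4 m
        print(D, perceived_depth(M, S, D))   # L grows from ~0.27 m to ~0.53 m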
Summary of the Invention

Embodiments of the present invention provide a depth of field holding device, a 3D display system and a display method, so that when the viewer moves, the depth of field of the 3D picture seen by the viewer remains unchanged.
In one aspect, an embodiment of the present invention provides a depth of field holding device, including: a 3D video signal input device; a position measuring device; an image processing device; and a 3D video signal output device, wherein the 3D video signal input device, the position measuring device and the 3D video signal output device are all connected to the image processing device; the 3D video signal input device transmits the acquired 3D video signal to the image processing device; the position measuring device transmits the detected position information of the viewer and the display screen to the image processing device; and the image processing device adjusts, according to the position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged, and transmits the adjusted 3D video signal to the 3D video signal output device.
Optionally, the image processing device includes:
a left and right eye view adjustment module, which adjusts, according to the position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged for the same viewer, and transmits the adjusted left and right eye views to a 3D video signal generation module; and
the 3D video signal generation module, which generates a new 3D video signal from the adjusted left and right eye views and transmits the new 3D video signal to the 3D video signal output device.
Optionally, the image processing device further includes a left and right eye view extraction module, which receives the 3D video signal and transmits the left and right eye views extracted from the 3D video signal to the left and right eye view adjustment module.
Optionally, the left and right eye view extraction module includes:
a sampling sub-module, which samples one frame of data in the 3D video signal and sends the frame of data to a splitting sub-module; and
the splitting sub-module, which transmits the left and right eye views split from the frame of data to the left and right eye view adjustment module.
Optionally, the depth of field holding device further includes a depth of field acquisition device and a depth of field memory, the depth of field acquisition device being connected to the position measuring device, the image processing device and the depth of field memory, and the depth of field memory being connected to the image processing device; the depth of field acquisition device receives the initial position information of the viewer detected by the position measuring device and the initial spacing of the object points in the left and right eye views of the 3D video signal sent by the image processing device, calculates the depth of field, and sends the depth of field to the depth of field memory for storage.
Optionally, the depth of field holding device further includes a depth of field acquisition device and a depth of field memory, the depth of field acquisition device being connected to the depth of field memory and the depth of field memory being connected to the image processing device; the depth of field acquisition device sends a received initial depth of field value to the depth of field memory for storage. Optionally, the position measuring device includes a camera, an infrared sensor, or a device for manually inputting position information.
An embodiment of the present invention further provides a 3D display system, including a display device and the depth of field holding device according to any one of the above, wherein the 3D video signal output device of the depth of field holding device is connected to the display device and transmits the adjusted 3D video signal to the display device for display.
Optionally, the display device includes a shutter-glasses type, pattern-retarder (flicker-free 3D) type, parallax-barrier type, lenticular-lens type or directional-backlight type 3D display device.
An embodiment of the present invention further provides a 3D display method implemented with the above depth of field holding device, including the following steps:
S1: collecting position information between the viewer and the display screen in real time according to the change of the viewer's position, and acquiring the left and right eye views in the 3D video signal;
S2: calculating in real time, according to the position information, the spacing of the object points when the left and right views are displayed on the display screen, so that the depth of field remains unchanged for the same viewer; and
S3: generating a new 3D video signal according to the spacing calculated in step S2 and the left and right eye views, and displaying it on the display screen.
Optionally, in step S1 the position information between the viewer and the display screen is collected by a camera, an infrared sensor or a device for manually inputting position information.
Optionally, if the left and right eye views in the 3D video signal are transmitted as a single picture, the step of acquiring the left and right eye views in the 3D video signal in step S1 includes:
sampling one frame of data in the 3D video signal; and
splitting the left and right eye views from the frame of data.
The calculation formula in step S2 is as follows:

    M = S·L / (D − L)

where M is the spacing of the object points when the left and right views are displayed on the display screen, S is the viewer's pupil distance, D is the distance between the viewer and the display screen, and L is the depth of field.
Optionally, before step S1, the 3D display method further includes: calculating the depth of field from the initial value of the spacing of the object points when the left and right views are displayed on the display screen and the initial value of the distance between the viewer and the display screen, according to the following formula:

    L = M·D / (S + M)

where M is the spacing of the object points when the left and right views are displayed on the display screen, S is the viewer's pupil distance, D is the distance between the viewer and the display screen, and L is the depth of field.
Optionally, before step S1, the 3D display method further includes: acquiring an initial depth of field set by the user.

Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present invention and are not limiting of the present invention.
FIG. 1 is a schematic diagram of how a viewer perceives a 3D display;
FIG. 2 is a schematic diagram of the 3D display as perceived before and after the viewer moves;
FIG. 3 is a schematic structural diagram of a depth of field holding device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another depth of field holding device according to an embodiment of the present invention; FIG. 5 is a schematic structural diagram of still another depth of field holding device according to an embodiment of the present invention; FIG. 6 is a schematic structural diagram of a 3D display system according to an embodiment of the present invention.

Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the described embodiments without creative effort fall within the scope of protection of the present invention.
Embodiment 1
The structure of the depth of field holding device of Embodiment 1 of the present invention is shown in FIG. 3 and includes: a 3D video signal input device 1, a position measuring device 2, an image processing device 3 and a 3D video signal output device 4. The 3D video signal input device 1, the position measuring device 2 and the 3D video signal output device 4 are all connected to the image processing device 3.
The 3D video signal input device 1 transmits the acquired 3D video signal to the image processing device 3. The 3D video signal input device 1 may be any type of transmission interface capable of transmitting a 3D video signal. The position measuring device 2 transmits the detected position information of the viewer and the display screen to the image processing device 3. In this embodiment, the position measuring device 2 includes, but is not limited to, a camera, an infrared sensor or a device for manually inputting position information; the position information may be the coordinates of the viewer and the display screen in the current spatial coordinate system, or the distance between the viewer and the display screen. For example, the camera captures the position coordinates of the viewer and the display screen in the spatial coordinate system and transmits the coordinate data to the image processing device 3, or an arithmetic module in the camera calculates the distance between the viewer and the display screen from the coordinates of the display screen in the spatial coordinate system and the position coordinates, and transmits the distance data to the image processing device 3.
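If the camera reports coordinates rather than a distance, the arithmetic module's task reduces to a distance computation. A minimal sketch, assuming a shared Cartesian coordinate system and a single reference point on the screen (the function name and the coordinates are illustrative assumptions, since the disclosure does not specify them):

    import math

    def viewing_distance(viewer_xyz, screen_xyz):
        """Distance D between the viewer and a reference point on the display
        screen, both expressed in the same spatial coordinate system."""
        return math.dist(viewer_xyz, screen_xyz)

    # Example: viewer at (0.1, 0.0, 2.5) m, screen reference point at the origin.
    D = viewing_distance((0.1, 0.0, 2.5), (0.0, 0.0, 0.0))   # about 2.5 m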
The image processing device 3 adjusts, according to the above position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged for the same viewer, and transmits the adjusted 3D video signal to the 3D video signal output device 4. In this embodiment, the image processing device 3 may be an embedded chip system built from a Micro Control Unit (MCU), a Field-Programmable Gate Array (FPGA), a permanent-magnet linear contactless displacement sensor (PLCD) or the like. The 3D video signal usually includes the left and right eye views (the left eye view and the right eye view) and the initial spacing of the object points when the two are displayed. The image processing device 3 specifically includes: a left and right eye view adjustment module 32 and a 3D video signal generation module 33.
The left and right eye view adjustment module 32 adjusts, according to the above position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged, and transmits the adjusted left and right eye views to the 3D video signal generation module 33. If the position information is the position coordinates of the viewer, this module first calculates the distance between the viewer and the display screen from the coordinates of the display screen in the spatial coordinate system and the position coordinates. The formula for adjusting the spacing M of the object points in the left and right eye views of the 3D video signal is as follows:

    M = S·L / (D − L)

where S is the pupil distance and remains unchanged, L is the depth of field and is also to remain unchanged, and D is the distance between the viewer and the display screen; when D changes, M is calculated from the above formula.
After M is obtained, the left and right eye views are adjusted according to this value, and the adjusted left and right eye views are transmitted to the 3D video signal generation module 33. The 3D video signal generation module 33 generates a new 3D video signal from the adjusted left and right eye views.
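The disclosure specifies the target spacing M but not the pixel-level operation used to reach it. The sketch below assumes a uniform horizontal shift of the two views, held as NumPy arrays, and should be read as one possible realisation rather than the patented implementation:

    import numpy as np

    def required_spacing(s, l, d):
        """M = S*L/(D - L): on-screen spacing that keeps the depth of field l
        at viewing distance d for pupil distance s (same length unit)."""
        return s * l / (d - l)

    def adjust_views(left, right, delta_px):
        """Shift the two views by delta_px pixels in total, half per view.
        A uniform shift is a simplification; adjusting each object point
        individually would require a disparity map, and the sign convention
        depends on which view is shown to which eye."""
        half = delta_px // 2
        left_adj = np.roll(left, -half, axis=1)
        right_adj = np.roll(right, delta_px - half, axis=1)
        return left_adj, right_adj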
Since in some existing 3D video signals the left eye view and the right eye view may be transmitted as a single picture, the left eye view and the right eye view need to be split out of that picture. The depth of field holding device of this embodiment therefore further includes a left and right eye view extraction module 31. As shown in FIG. 4, the left and right eye view extraction module 31 receives the 3D video signal input by the 3D video signal input device and transmits the left and right eye views extracted from the 3D video signal to the left and right eye view adjustment module 32. The left and right eye view extraction module 31 includes a sampling sub-module and a splitting sub-module: the sampling sub-module samples one frame of data in the 3D video signal and sends the frame of data to the splitting sub-module, and the splitting sub-module transmits the left and right eye views split from that frame of data to the left and right eye view adjustment module 32.
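A minimal sketch of the sampling and splitting sub-modules, assuming the single picture packs the two views side by side in a NumPy image array (the packing format is not fixed by the disclosure; top-bottom or interleaved formats would be split differently):

    def split_frame(frame):
        """Split one sampled frame into (left_view, right_view), assuming a
        side-by-side layout: left half = left eye view, right half = right eye view."""
        _, width = frame.shape[:2]
        return frame[:, : width // 2], frame[:, width // 2 :]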
The pupil distance S and the depth of field L are values determined in advance; they are acquired beforehand and stored in the memory of the chip system constituting the image processing device 3. When the user views for the first time, the initial value of the depth of field L needs to be acquired. At the same time, to ensure that L is not overwritten by other data in the memory, the depth of field holding device of this embodiment further includes a depth of field acquisition device 5 and a depth of field memory 6. The depth of field acquisition device 5 is connected to the depth of field memory 6 and sends the acquired initial depth of field value to the depth of field memory 6 for storage. The depth of field memory 6 is connected to the left and right eye view adjustment module 32, and the left and right eye view adjustment module 32 obtains the initial depth of field from the depth of field memory 6.
Illustratively, the depth of field acquisition device 5 acquires the initial depth of field in the following two ways.
1. The depth of field acquisition device 5 is a module with an arithmetic function. As shown in FIG. 3 and FIG. 4, the depth of field acquisition device 5 is connected to the position measuring device 2 and to the 3D video signal input device 1 or the left and right eye view extraction module 31; if the left eye view and the right eye view in the 3D video signal are transmitted as a single picture, the depth of field acquisition device 5 is connected to the left and right eye view extraction module 31 (and obtains the initial spacing of the object points in the left and right eye views of the 3D video signal from the extraction module 31). The depth of field acquisition device 5 receives the initial spacing M of the object points in the left and right eye views of the 3D video signal and the initial position information D of the viewer detected by the position measuring device 2, calculates the initial depth of field according to the following formula, and sends the initial depth of field to the depth of field memory for storage:

    L = M·D / (S + M)
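A short sketch of acquisition mode 1, with the depth of field memory 6 modelled as a plain attribute; the class name, attribute names and numbers are illustrative assumptions, not part of the disclosure:

    class DepthAcquisition:
        """Mode 1: compute the initial depth of field from the initial spacing
        and the initial viewing distance, then store it (stands in for the
        depth of field acquisition device 5 and the depth of field memory 6)."""

        def __init__(self, pupil_distance):
            self.s = pupil_distance
            self.stored_depth = None                     # depth of field memory 6

        def acquire(self, m0, d0):
            self.stored_depth = m0 * d0 / (self.s + m0)  # L = M*D/(S + M)
            return self.stored_depth

    acq = DepthAcquisition(pupil_distance=0.065)
    L0 = acq.acquire(m0=0.010, d0=3.0)                   # about 0.4 m, illustrative numbers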
2. The depth of field acquisition device 5 is an input device, such as a keyboard. As shown in FIG. 5, the depth of field acquisition device 5 is connected to the depth of field memory 6 and sends the initial depth of field value input by the user to the depth of field memory 6 for storage. This input-type depth of field acquisition device 5 makes it convenient for the user to enter a depth of field suited to his or her own viewing, giving a good user experience.
The depth of field holding device of this embodiment can, when the viewer moves, adjust in real time the spacing M of the object points in the left and right eye views on the display screen so that the depth of field remains unchanged, achieving a better viewing effect.

Embodiment 2
Embodiment 2 of the present invention provides a 3D display system which, as shown in FIG. 6, includes a 3D display device 7 and the depth of field holding device of Embodiment 1. The 3D video signal generation module 33 in the depth of field holding device transmits the newly generated 3D video signal through the 3D video signal output device 4 to the 3D display device 7 for display, so as to ensure that the depth of field remains unchanged while the 3D display device 7 is displaying. The 3D display device 7 includes: a shutter-glasses 3D, pattern-retarder 3D (also called flicker-free 3D), parallax-barrier 3D, lenticular 3D or directional-backlight 3D display device.
By way of example, the 3D display device 7 includes a display panel and a pattern retarder, a parallax barrier, a lenticular lens array or the like. By way of example, the display panel may be a liquid crystal display panel, an organic electroluminescent display panel, a plasma display panel, an electronic ink display panel or the like.
Embodiment 3
Embodiment 3 of the present invention provides a 3D display method, including:
Step S701: collect position information between the viewer and the display screen in real time according to the change of the viewer's position, and acquire the left and right eye views in the 3D video signal. Specifically, a position measuring device such as a camera, an infrared sensor or a device for manually inputting position information collects the position information between the viewer and the display screen in real time. The position information may be the coordinates of the viewer and the display screen in the current spatial coordinate system, or the distance between the viewer and the display screen. For example, the camera captures the position coordinates of the viewer and the display screen in the spatial coordinate system, or an arithmetic module in the camera calculates the distance between the viewer and the display screen from the coordinates of the display screen in the spatial coordinate system and the position coordinates. The 3D video signal usually includes the left and right eye views (the left eye view and the right eye view) and the initial spacing of the object points when the two are displayed; if the left eye view and the right eye view in the 3D video signal are transmitted as a single picture, one frame of data in the 3D video signal is sampled, and the left and right eye views are split from that frame of data.
Step S702: calculate in real time, according to the position information and the depth of field information, the spacing of the object points when the left and right eye views are displayed on the display screen, so that the depth of field remains unchanged for the same viewer. That is, the spacing of the object points when the left and right views are displayed on the display screen is calculated from the distance between the viewer and the display screen after the viewer has moved, according to the following formula:

    M = S·L / (D − L)

where M is the spacing of the object points when the left and right views are displayed on the display screen, S is the viewer's pupil distance, D is the distance between the viewer and the display screen, and L is the depth of field.
Step S703: generate a new 3D video signal according to the calculated spacing and the left and right eye views, and display it on the display screen.
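Putting steps S701 to S703 together, a compact sketch of the per-frame loop; measure_viewing_distance, split_frame, adjust_views and display stand in for the devices and modules described above and are placeholders, and the metre-to-pixel conversion via pixel_pitch is an added assumption:

    def depth_holding_loop(frames, s, l0, m0_px, pixel_pitch,
                           measure_viewing_distance, split_frame,
                           adjust_views, display):
        """For each frame: S701 measure the position and extract the views,
        S702 recompute the spacing M = S*L/(D - L) that keeps the stored
        depth of field l0, S703 display the regenerated stereo pair."""
        for frame in frames:
            d = measure_viewing_distance()                        # S701
            left, right = split_frame(frame)                      # S701
            m_px = int(round(s * l0 / (d - l0) / pixel_pitch))    # S702
            left, right = adjust_views(left, right, m_px - m0_px) # S702
            display(left, right)                                  # S703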
If the viewer is viewing for the first time, the method further includes, before step S701: calculating the depth of field from the initial value of the spacing of the object points when the left and right views are displayed on the display screen and the initial value of the distance between the viewer and the display screen, according to the following formula:

    L = M·D / (S + M)
Alternatively, before step S701, the initial depth of field is determined from an initial depth of field value set by the user.
With the method of this embodiment, when the viewer moves, the spacing M of the object points in the left and right eye views on the display screen can be adjusted in real time so that the depth of field remains unchanged, achieving a better viewing effect.
A person of ordinary skill in the art will understand that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc. The above embodiments are merely intended to illustrate the present invention and not to limit it; a person of ordinary skill in the relevant technical field may make various changes and modifications without departing from the spirit and scope of the present invention, so all equivalent technical solutions also fall within the scope of the present invention, and the scope of patent protection of the present invention shall be defined by the claims.

Claims

CLAIMS
1. A depth of field holding device, comprising:
a 3D video signal input device;
a position measuring device;
an image processing device; and
a 3D video signal output device,
wherein the 3D video signal input device, the position measuring device and the 3D video signal output device are all connected to the image processing device,
the 3D video signal input device transmits an acquired 3D video signal to the image processing device,
the position measuring device transmits detected position information of a viewer and a display screen to the image processing device, and
the image processing device adjusts, according to the position information, a spacing of object points in left and right eye views of the 3D video signal so that a depth of field remains unchanged, and transmits the adjusted 3D video signal to the 3D video signal output device.
2. The depth of field holding device according to claim 1, wherein the image processing device comprises: a left and right eye view adjustment module, which adjusts, according to the position information, the spacing of the object points in the left and right eye views of the 3D video signal so that the depth of field remains unchanged for the same viewer, and transmits the adjusted left and right eye views to a 3D video signal generation module; and
the 3D video signal generation module, which generates a new 3D video signal from the adjusted left and right eye views and transmits the new 3D video signal to the 3D video signal output device.
3. The depth of field holding device according to claim 2, wherein the image processing device further comprises a left and right eye view extraction module, which receives the 3D video signal and transmits the left and right eye views extracted from the 3D video signal to the left and right eye view adjustment module.
4. The depth of field holding device according to claim 3, wherein the left and right eye view extraction module comprises:
a sampling sub-module, which samples one frame of data in the 3D video signal and sends the frame of data to a splitting sub-module; and
the splitting sub-module, which transmits the left and right eye views split from the frame of data to the left and right eye view adjustment module.
5. The depth of field holding device according to claim 1, further comprising: a depth of field acquisition device and a depth of field memory, the depth of field acquisition device being connected to the position measuring device, the image processing device and the depth of field memory, and the depth of field memory being connected to the image processing device,
wherein the depth of field acquisition device receives initial position information of the viewer detected by the position measuring device and an initial spacing of the object points in the left and right eye views of the 3D video signal sent by the image processing device, calculates the depth of field, and sends the depth of field to the depth of field memory for storage.
6. The depth of field holding device according to claim 1, further comprising: a depth of field acquisition device and a depth of field memory, the depth of field acquisition device being connected to the depth of field memory and the depth of field memory being connected to the image processing device, wherein the depth of field acquisition device sends a received initial depth of field value to the depth of field memory for storage.
7. The depth of field holding device according to claim 1, further comprising: a depth of field acquisition device and a depth of field memory, the depth of field acquisition device being connected to the position measuring device, the 3D video signal input device and the depth of field memory, and the depth of field memory being connected to the image processing device,
wherein the depth of field acquisition device receives initial position information of the viewer detected by the position measuring device and an initial spacing of the object points in the left and right eye views of the 3D video signal sent by the 3D video signal input device, calculates the depth of field, and sends the depth of field to the depth of field memory for storage.
8. The depth of field holding device according to any one of claims 1 to 7, wherein the position measuring device comprises: a camera, an infrared sensor, or a device for manually inputting position information.
9. A 3D display system, comprising: a display device and the depth of field holding device according to any one of claims 1 to 7, wherein the 3D video signal output device of the depth of field holding device is connected to the display device and transmits the adjusted 3D video signal to the display device for display.
10. The 3D display system according to claim 9, wherein the display device comprises: a shutter-glasses type, pattern-retarder type, parallax-barrier type, lenticular-lens type or directional-backlight type 3D display device.
11. A 3D display method implemented with the depth of field holding device according to claim 1, comprising the following steps:
S1: collecting position information between a viewer and a display screen in real time according to a change of the viewer's position, and acquiring left and right eye views in a 3D video signal;
S2: calculating in real time, according to the position information, a spacing of object points when the left and right views are displayed on the display screen, so that a depth of field remains unchanged for the same viewer; and
S3: generating a new 3D video signal according to the spacing calculated in step S2 and the left and right eye views, and displaying it on the display screen.
12. The 3D display method according to claim 11, wherein in step S1 the position information between the viewer and the display screen is collected by a camera, an infrared sensor or a device for manually inputting position information.
13. The 3D display method according to claim 11, wherein, if the left and right eye views in the 3D video signal are transmitted as a single picture, the step of acquiring the left and right eye views in the 3D video signal in step S1 comprises:
sampling one frame of data in the 3D video signal; and
splitting the left and right eye views from the frame of data.
14. The 3D display method according to claim 11, wherein the calculation formula in step S2 is as follows:

    M = S·L / (D − L)

where M is the spacing of the object points when the left and right views are displayed on the display screen, S is the viewer's pupil distance, D is the distance between the viewer and the display screen, and L is the depth of field.
15. The 3D display method according to any one of claims 10 to 14, further comprising, before step S1: calculating the depth of field from an initial value of the spacing of the object points when the left and right eye views are displayed on the display screen and an initial value of the distance between the viewer and the display screen, according to the following formula:

    L = M·D / (S + M)

where M is the spacing of the object points when the left and right views are displayed on the display screen, S is the viewer's pupil distance, D is the distance between the viewer and the display screen, and L is the depth of field.
16. The 3D display method according to any one of claims 10 to 14, further comprising, before step S1: acquiring an initial depth of field set by the user.
PCT/CN2012/084852 2012-06-26 2012-11-19 景深保持装置、3d显示系统及显示方法 WO2014000370A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/985,020 US9307228B2 (en) 2012-06-26 2012-11-19 Depth of field maintaining apparatus, 3D display system and display method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2012102163502A CN102752621A (zh) 2012-06-26 2012-06-26 景深保持装置、3d显示装置及显示方法
CN201210216350.2 2012-06-26

Publications (1)

Publication Number Publication Date
WO2014000370A1 true WO2014000370A1 (zh) 2014-01-03

Family

ID=47032493

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/084852 WO2014000370A1 (zh) 2012-06-26 2012-11-19 景深保持装置、3d显示系统及显示方法

Country Status (3)

Country Link
US (1) US9307228B2 (zh)
CN (1) CN102752621A (zh)
WO (1) WO2014000370A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102752621A (zh) * 2012-06-26 2012-10-24 京东方科技集团股份有限公司 景深保持装置、3d显示装置及显示方法
CN103176605A (zh) * 2013-03-27 2013-06-26 刘仁俊 一种手势识别控制装置及控制方法
EP3087737A4 (en) * 2013-12-24 2017-08-16 Intel Corporation Techniques for stereo three dimensional video processing
CN103686141B (zh) * 2013-12-30 2016-08-17 京东方科技集团股份有限公司 3d显示方法及3d显示装置
CN105306918B (zh) * 2014-07-31 2018-02-09 优视科技有限公司 一种基于立体显示的处理方法及装置
CN104581113B (zh) * 2014-12-03 2018-05-15 深圳市魔眼科技有限公司 基于观看角度的自适应全息显示方法及全息显示装置
CN105872528B (zh) * 2014-12-31 2019-01-15 深圳超多维科技有限公司 3d显示方法、装置及3d显示设备
CN105867597B (zh) * 2014-12-31 2020-01-10 深圳超多维科技有限公司 3d交互方法及3d显示设备
CN105730237A (zh) * 2016-02-04 2016-07-06 京东方科技集团股份有限公司 一种行车辅助装置及行车辅助方法
CN110119260B (zh) * 2019-04-28 2023-04-18 维沃移动通信有限公司 一种屏幕显示方法及终端
US11449004B2 (en) 2020-05-21 2022-09-20 Looking Glass Factory, Inc. System and method for holographic image display
WO2021262860A1 (en) 2020-06-23 2021-12-30 Looking Glass Factory, Inc. System and method for holographic communication
CN112224146B (zh) * 2020-10-20 2021-11-09 广州柒度科技有限公司 一种具有前端防护结构的计算机用显示设备
WO2022119940A1 (en) 2020-12-01 2022-06-09 Looking Glass Factory, Inc. System and method for processing three dimensional images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075776A (zh) * 2011-01-18 2011-05-25 青岛海信电器股份有限公司 一种立体显示的控制方法及装置
CN102158721A (zh) * 2011-04-06 2011-08-17 青岛海信电器股份有限公司 调节三维立体图像的方法、装置及电视机
CN102340678A (zh) * 2010-07-21 2012-02-01 深圳Tcl新技术有限公司 一种景深可调的立体显示装置及其景深调整方法
CN102752621A (zh) * 2012-06-26 2012-10-24 京东方科技集团股份有限公司 景深保持装置、3d显示装置及显示方法

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI223780B (en) * 2001-01-24 2004-11-11 Vrex Inc Method for zooming a stereoscopic image, article of manufacture comprising a computer usable medium, computer related product for use with a graphics display device, and program storage device readable by a machine and tangibly embodying a program of iK
US8369607B2 (en) * 2002-03-27 2013-02-05 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
WO2003081921A1 (fr) * 2002-03-27 2003-10-02 Sanyo Electric Co., Ltd. Procede de traitement d'images tridimensionnelles et dispositif
KR101324440B1 (ko) * 2009-02-11 2013-10-31 엘지디스플레이 주식회사 입체 영상의 뷰 제어방법과 이를 이용한 입체 영상표시장치
CN102193705A (zh) * 2010-03-02 2011-09-21 鸿富锦精密工业(深圳)有限公司 三维多媒体影像互动控制系统及方法
JP2011223482A (ja) * 2010-04-14 2011-11-04 Sony Corp 画像処理装置、画像処理方法、およびプログラム
KR101685343B1 (ko) * 2010-06-01 2016-12-12 엘지전자 주식회사 영상표시장치 및 그 동작방법
CN102316333A (zh) * 2010-07-06 2012-01-11 宏碁股份有限公司 显示系统与提示系统
JP5640680B2 (ja) * 2010-11-11 2014-12-17 ソニー株式会社 情報処理装置、立体視表示方法及びプログラム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102340678A (zh) * 2010-07-21 2012-02-01 深圳Tcl新技术有限公司 一种景深可调的立体显示装置及其景深调整方法
CN102075776A (zh) * 2011-01-18 2011-05-25 青岛海信电器股份有限公司 一种立体显示的控制方法及装置
CN102158721A (zh) * 2011-04-06 2011-08-17 青岛海信电器股份有限公司 调节三维立体图像的方法、装置及电视机
CN102752621A (zh) * 2012-06-26 2012-10-24 京东方科技集团股份有限公司 景深保持装置、3d显示装置及显示方法

Also Published As

Publication number Publication date
US9307228B2 (en) 2016-04-05
US20140055580A1 (en) 2014-02-27
CN102752621A (zh) 2012-10-24

Similar Documents

Publication Publication Date Title
WO2014000370A1 (zh) 景深保持装置、3d显示系统及显示方法
US9451242B2 (en) Apparatus for adjusting displayed picture, display apparatus and display method
CN101636747B (zh) 二维/三维数字信息获取和显示设备
US8090251B2 (en) Frame linked 2D/3D camera system
CN102802014B (zh) 一种多人跟踪功能的裸眼立体显示器
EP2659680B1 (en) Method and apparatus for providing mono-vision in multi-view system
TWI523493B (zh) 三維影像顯示裝置及其驅動方法
KR101732131B1 (ko) 사용자 위치 기반의 영상 제공 장치 및 방법
KR20120016408A (ko) 3차원 컨텐츠를 출력하는 디스플레이 기기의 영상 출력 방법 및 그 방법을 채용한 디스플레이 기기
JP2011049644A5 (zh)
CN203838470U (zh) 裸眼3d拍摄和显示装置及其显示屏
JP2012205267A5 (zh)
CN103533340A (zh) 移动终端的裸眼3d播放方法和移动终端
CN102547350A (zh) 一种基于梯度光流算法合成虚拟视点方法及立体显示装置
US20120007949A1 (en) Method and apparatus for displaying
TWI515457B (zh) 多視點三維顯示器系統及其控制方法
KR101960577B1 (ko) 뷰 공간에 관한 스테레오 정보를 송수신하는 방법
CN103813148A (zh) 三维立体显示装置及其方法
CN101908233A (zh) 产生用于三维影像重建的复数视点图的方法及系统
CN109218701B (zh) 裸眼3d的显示设备、方法、装置及可读存储介质
KR101046580B1 (ko) 영상처리장치 및 그 제어방법
KR20110136326A (ko) 삼차원 입체안경의 수평각 정보를 반영한 삼차원 스테레오스코픽 렌더링 시스템
JP2012065010A (ja) 遠隔映像監視システム
CN202565412U (zh) 自动调整立体图像视差的显示装置
CN202634616U (zh) 景深保持装置及3d显示装置

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 13985020

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12880031

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08-06-2015)

122 Ep: pct application non-entry in european phase

Ref document number: 12880031

Country of ref document: EP

Kind code of ref document: A1