WO2017028498A1 - 3D scene display method and apparatus - Google Patents

3D scene display method and apparatus

Info

Publication number
WO2017028498A1
WO2017028498A1 (application PCT/CN2016/071616, CN 2016071616 W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
scene
image
position information
viewer
Prior art date
Application number
PCT/CN2016/071616
Other languages
English (en)
French (fr)
Inventor
王涛
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US15/037,511 (granted as US10045007B2)
Publication of WO2017028498A1


Classifications

    • H04N 13/279: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H04N 13/117: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H04N 13/221: Image signal generators using stereoscopic image cameras, using a single 2D image sensor and the relative movement between cameras and objects
    • H04N 13/282: Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • H04N 13/366: Image reproducers using viewer tracking
    • H04N 13/398: Image reproducers; synchronisation or control thereof
    • G06F 3/013: Eye tracking input arrangements
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element
    • G06F 3/04845: GUI interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06T 15/20: 3D image rendering; geometric effects; perspective computation
    • G06F 2203/04806: Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • G06T 2215/16: Using real world measurements to influence rendering

Definitions

  • Embodiments of the present invention relate to a 3D scene display method and apparatus.
  • 3D display technology can make pictures appear stereoscopic. Its most basic principle is that the left and right human eyes each receive a different picture, which the brain then superimposes and reconstructs to form an image with a stereoscopic effect.
  • In FIG. 1, N is a display device. Taking the hexahedron M in the 3D scene as an example, a viewer standing at observation position A can in general see only one 3D picture of the hexahedron, in which only faces a, b and c are visible; when the viewer moves to observation position B, the same 3D picture as before is still seen, so the viewer cannot experience the different 3D pictures that would be obtained from different angles in reality.
  • Embodiments of the present invention provide a method and an apparatus for displaying a 3D image scene, which solve the problem in the prior art that a single viewer cannot view different 3D pictures in a 3D scene as the observation position changes.
  • In one aspect, an embodiment of the present invention provides a 3D scene display method applied to a 3D scene display apparatus, including: loading 3D scene information, wherein the 3D scene information stores a plurality of pieces of 3D image information, each piece carrying corresponding shooting position information; when a viewer is detected within the visible angle range of the 3D scene display apparatus, determining the current viewer's observation position information in real time, wherein the observation position information is the position information of the current viewer relative to the 3D scene display apparatus; determining the 3D image information to be displayed according to the observation position information and the 3D scene information; and displaying the 3D content corresponding to the 3D image information to be displayed.
  • In another aspect, an embodiment of the present invention provides a 3D scene display apparatus, including: a loading unit configured to load 3D scene information, wherein the 3D scene information stores a plurality of pieces of 3D image information, each piece carrying corresponding shooting position information; a determining unit configured to determine the current viewer's observation position information in real time when a viewer is detected within the visible angle range of the 3D scene display apparatus, wherein the observation position information is the position information of the current viewer relative to the 3D scene display apparatus; a processing unit configured to determine the 3D image information to be displayed according to the observation position information and the 3D scene information; and a display unit configured to display the 3D content corresponding to the 3D image information to be displayed.
  • FIG. 1 is a schematic diagram of a viewer in the prior art seeing the same 3D picture at positions A and B within the visible angle range;
  • FIG. 2 is a schematic flowchart of a 3D scene display method according to Embodiment 1 of the present invention.
  • FIG. 3 is a simplified schematic diagram of a 3D scene M enumerated in Embodiment 1 of the present invention
  • 4(a) is a schematic diagram of the camera S capturing a 3D scene M;
  • 4(b) is a schematic diagram of the viewer R viewing in the range of the viewing angle of the 3D scene display device N;
  • FIG. 5 is a schematic diagram of different 3D pictures viewed by the viewers at positions A and B in the range of the viewing angle;
  • FIG. 6 is a schematic diagram of a viewer viewing different 3D pictures at position A;
  • FIG. 7 is a schematic structural diagram of a 3D scene display apparatus according to Embodiment 2 of the present invention.
  • Embodiment 1:
  • FIG. 2 is a schematic flowchart of a 3D scene display method according to Embodiment 1 of the present invention. The method is applied to a display scenario with a single viewer who moves within the visible angle range of the 3D scene display apparatus, and mainly includes the following steps:
  • Step 11 Load 3D scene information.
  • the 3D scene information stores a plurality of 3D image information, and each 3D image information carries corresponding shooting position information.
  • Illustratively, the 3D scene information is obtained as follows: determining the 3D scene to be photographed; placing an image capture device at a preset starting shooting point and recording the shooting position information of that starting point; moving the image capture device within the visible angle range of the 3D scene to shoot, while recording all shooting position information along the path of the movement; and reconstructing each captured pair of left- and right-viewpoint images to form a plurality of pieces of 3D image information.
  • It should be noted that, because this orientation information is correlated with the subsequent viewer's observation position information, and the observation position information is limited to the visible angle range on the display-screen side of the 3D scene display apparatus (the other side of the screen cannot be involved in observation position information), the orientation information should correspond to the observation position information: the observation position information overlaps with part of the orientation information, and the two should be expressed in the same way and against the same reference as far as possible.
  • the image capturing device used for shooting may be a camera having an acquisition and image processing function.
  • Each piece of 3D image information also carries corresponding shooting position information, which can be expressed in any physical coordinates capable of representing orientation in the 3D scene and indicates the position relative to the 3D scene M, for example longitude and latitude, or the straight-line distance from the 3D scene M. For instance, the 3D scene M may be taken as the coordinate origin, and the shooting position information may then be X, Y and Z coordinates relative to that origin.
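As a concrete illustration of how each piece of 3D image information might carry its shooting position (the patent only requires some physical coordinates relative to the scene M taken as the origin), here is a minimal sketch; all type and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShootingPosition:
    """Capture-point coordinates relative to the 3D scene M
    taken as the coordinate origin (cf. FIG. 4(a))."""
    x: float
    y: float
    z: float

@dataclass
class Image3D:
    """One piece of 3D image information: the reconstructed left/right
    viewpoint pair plus the shooting position information it carries."""
    left_view: bytes               # encoded left-viewpoint image (placeholder)
    right_view: bytes              # encoded right-viewpoint image (placeholder)
    position: ShootingPosition

# Loaded 3D scene information is then the collection of all H captured images.
scene_m = [
    Image3D(b"L0", b"R0", ShootingPosition(0.0, 0.0, 2.0)),
    Image3D(b"L1", b"R1", ShootingPosition(0.5, 0.0, 1.94)),
]
```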
  • As the viewer R can only see the 3D picture shown on the display screen within the visible angle range of the 3D scene display apparatus N, the viewer R's observation position information is limited to orientation information within that range.
  • embodiment of the present invention may also acquire 3D scene information by using system modeling or the like.
  • Step 12 When the viewer is detected within the current viewing angle range, the current viewer's observation position is determined in real time to obtain the observed position information.
  • the observation position information is position information of the current viewer relative to the 3D scene display device.
  • Illustratively, in step 12 the current viewer's observation position information may be obtained by capturing the viewer's eye information in real time and determining the observation position information from the captured eye information. It should be noted that the ways of obtaining the observation position information in embodiments of the present invention include but are not limited to the above; the viewer's observation position can also be determined by capturing other body parts of the viewer, for example the head.
  • The viewer's eye information may be captured by human-eye detection based on a single camera or multiple cameras.
  • the human eye detection may be performed by an infrared detecting device disposed on a display screen of the 3D scene display device.
  • an infrared detection device can also be integrated in the camera, which together assist in human eye detection to improve accuracy.
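The patent leaves open how captured eye information is converted into observation position information. As one hedged possibility (focal length, principal point and interpupillary distance below are assumed values, not from the patent), a pinhole-camera model can estimate the viewer's depth and lateral offset from the pixel coordinates of the two detected eyes:

```python
import math

# Hypothetical camera parameters (assumptions for illustration only).
FOCAL_PX = 800.0        # focal length in pixels
IPD_MM = 63.0           # assumed human interpupillary distance
PRINCIPAL_X = 640.0     # assumed horizontal principal point (1280 px sensor)

def observation_position(left_eye_px, right_eye_px):
    """Estimate the viewer's position relative to the detection camera
    from the pixel coordinates of the two detected eyes, using the
    pinhole relation: depth Z = f * IPD / pixel gap between the eyes."""
    dx = right_eye_px[0] - left_eye_px[0]
    dy = right_eye_px[1] - left_eye_px[1]
    eye_gap_px = math.hypot(dx, dy)
    z_mm = FOCAL_PX * IPD_MM / eye_gap_px
    # Lateral offset of the eye midpoint, back-projected to depth z.
    mid_x = (left_eye_px[0] + right_eye_px[0]) / 2.0
    x_mm = (mid_x - PRINCIPAL_X) * z_mm / FOCAL_PX
    return x_mm, z_mm

x, z = observation_position((600.0, 360.0), (680.0, 360.0))
# an 80 px eye gap centred on the axis gives x = 0 mm, z = 630 mm
```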
  • Step 13 Determine 3D image information to be displayed according to the observed position information and the 3D scene information.
  • the step 13 may include: searching for 3D scene information according to the observed position information, and using the found 3D image information corresponding to the observed position information as the 3D image information to be displayed.
  • In actual operation, an information parameter correspondence table mapping each piece of 3D image information to the shooting position information it carries may be established in advance; the observation position information is then compared one by one against each piece of shooting position information in the table, and, according to the preset correlation between observation positions and shooting positions, the 3D image information corresponding to the observation position information is taken as the 3D image information to be displayed.
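The correspondence-table comparison described above might look like the following sketch, where matching by nearest Euclidean distance stands in for the patent's unspecified "preset correlation" between observation and shooting positions (all names are hypothetical):

```python
import math

def build_table(entries):
    """entries: iterable of (shooting_position, image_info) pairs, i.e.
    the information parameter correspondence table built in advance."""
    return list(entries)

def image_to_display(table, observed_pos):
    """Compare the observation position one by one against each shooting
    position in the table and return the 3D image information of the
    closest match (one plausible observation/shooting correlation)."""
    def dist(entry):
        shooting_pos, _ = entry
        return math.dist(shooting_pos, observed_pos)
    _, info = min(table, key=dist)
    return info

table = build_table([
    ((0.0, 0.0, 2.0), "image_A"),
    ((1.0, 0.0, 1.7), "image_B"),
])
print(image_to_display(table, (0.9, 0.1, 1.8)))   # image_B
```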
  • Step 14 Display the 3D content corresponding to the 3D image information to be displayed.
  • Illustratively, if the viewer does not move within the visible angle range of the 3D scene display apparatus, the above method may further include the following steps:
  • the first step is to receive a 3D image switching instruction.
  • the 3D image switching instruction includes at least: a rotation switching instruction, a reduction switching instruction, an amplification switching instruction, and a movement switching instruction.
  • the 3D image switching instruction may be key value information selected by the viewer using a remote control or the like to control transmission, or may be gesture information of the viewer (for a 3D scene display device having a touch screen).
  • In the second step, the 3D image switching instruction is parsed to determine the switching type and the post-switching shooting position information.
  • In the third step, the 3D image information to be switched to is retrieved from the 3D scene information according to the determined post-switching shooting position information.
  • In the fourth step, the display is switched to the 3D image information to be switched to according to the switching type, and the corresponding 3D content of that 3D image information is displayed.
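The parsing and switching steps above could be sketched as follows. The instruction encoding (a plain dict) and the way each switching type derives a post-switching shooting position are illustrative assumptions; the patent specifies only the four instruction types:

```python
import math

def parse_instruction(instr, current_pos):
    """Parse a 3D image switching instruction into (switching type,
    post-switching shooting position); current_pos is (x, y, z)."""
    kind = instr["type"]       # 'rotate' | 'zoom_in' | 'zoom_out' | 'move'
    x, y, z = current_pos
    if kind == "rotate":
        # rotate the capture point around the scene's vertical axis
        a = math.radians(instr["degrees"])
        x, z = x * math.cos(a) - z * math.sin(a), x * math.sin(a) + z * math.cos(a)
    elif kind == "zoom_in":
        x, y, z = 0.5 * x, 0.5 * y, 0.5 * z    # halve the distance to the scene
    elif kind == "zoom_out":
        x, y, z = 2 * x, 2 * y, 2 * z
    elif kind == "move":
        dx, dy, dz = instr["delta"]
        x, y, z = x + dx, y + dy, z + dz
    return kind, (x, y, z)

kind, new_pos = parse_instruction({"type": "zoom_in"}, (0.0, 0.0, 2.0))
# the new shooting position is then looked up in the 3D scene information
```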
  • It should be noted that the implementations involved in the first through fourth steps are mainly directed at the case where the viewer does not move. For example, when a viewer who has moved to the boundary of the visible angle range keeps moving away from the 3D scene display apparatus in order to view other 3D pictures, the viewer enters the invisible angle range; the viewer's limited observation position information therefore prevents viewing of the 3D pictures presented by 3D image information other than that corresponding to the observation position information. That is, when the viewer moves and relies on the automatic recognition mode alone, only a limited number of different 3D pictures can be viewed, not all of them.
  • Example 1: After the 3D scene display apparatus is initialized and powered on, the 3D scene information to be displayed is loaded. Before display, the apparatus first detects whether there is a viewer within the current visible angle range. If no viewer is detected, the 3D image information in the 3D scene information is displayed according to a preset playing rule; for example, only the 3D image information corresponding to the shooting position information of the starting shooting point may be played. If a viewer is detected, it is further determined whether the viewer's observation position has moved; if so, the apparatus detects the human eye information in real time using its own infrared camera and finally determines the viewer's real-time observation position information from it. Taking the viewer in FIG. 5 moving horizontally from position A to position B in front of the display screen as an example: at position A, the orientation information of position A is first determined, the 3D scene information is searched according to it, and the 3D image information to be displayed is determined and displayed (as shown, at position A the viewer sees only face a1 of the hexahedron); when the viewer moves to position B, the orientation information is re-determined and the 3D image information to be displayed at that moment is determined and displayed (as shown, at position B the viewer sees the three faces a1, a2 and a3).
  • Example 2: Considering that the visible angle range of any display device similar to the 3D scene display apparatus is limited, a moving viewer can see 3D pictures (3D image content) from different angles, but the side of the 3D picture corresponding to the back of the actual 3D scene cannot be seen. To better improve the viewing experience, Example 2 of the present invention therefore also allows manual adjustment to assist the viewer, mainly using the following two methods.
  • Method 1: For a 3D scene display apparatus with a touch function, various touch operations can be performed on the apparatus and converted into 3D image switching instructions that the apparatus can parse; the switching type and the post-switching shooting position information are then determined from the parsing result, the 3D image information to be switched to is retrieved from the 3D scene information, and the corresponding 3D content is displayed. For example, the 3D scene display apparatus receives the 3D image switching instruction corresponding to a touch operation, parses it, and determines that the required switching type is rotation; the post-switching shooting position information can be determined from the extent of the touch operation on the display screen.
  • Method 2: Not all 3D scene display apparatuses have a touch function. For an apparatus without one, a control device used together with the 3D scene display apparatus, such as a remote controller, can send 3D image switching instructions to the apparatus through clicks, long presses and similar operations on preset keys; the apparatus receives and parses the instruction to determine the switching type and the post-switching shooting position information. Subsequent operations are the same as in Method 1 and are not repeated here.
  • the 3D image switching instruction includes, but is not limited to, a rotation switching instruction, a reduction switching instruction, an amplification switching instruction, and a movement switching instruction.
  • The 3D scene display apparatus parses the corresponding switching type and post-switching shooting position information according to a preset instruction parsing scheme.
  • It should be noted that the image capture device moves by a certain step each time; those skilled in the art can set this step as needed, for example taking the computation speed and the amount of data to be processed into account. Preferably, to match the image information actually viewable by human eyes, the moving step of the camera should not be smaller than the human interpupillary distance, and the step may be set equal to the pupil distance.
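Under the preferred rule above (capture step not smaller than the interpupillary distance, typically set equal to it), the number of shooting positions follows directly from the length of the capture path. The 65 mm pupil distance and the example path are assumptions for illustration:

```python
import math

PUPIL_DISTANCE_M = 0.065           # assumed human interpupillary distance

def capture_points(path_length_m, step_m=PUPIL_DISTANCE_M):
    """Number of shooting positions along a capture path, one per step,
    plus the starting shooting point. Enforces the preferred rule that
    the step is not smaller than the pupil distance."""
    if step_m < PUPIL_DISTANCE_M:
        raise ValueError("step must not be smaller than the pupil distance")
    return int(path_length_m // step_m) + 1

# e.g. a semicircular capture path of radius 2 m around the scene:
n = capture_points(math.pi * 2.0)
```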
  • Belonging to the same technical concept as the 3D scene display method provided by Embodiment 1 above, an embodiment of the present invention further provides a 3D scene display apparatus, explained in detail in Embodiment 2 below.
  • Embodiment 2:
  • Illustratively, the 3D scene display apparatus may be a display device such as a liquid crystal display device, an OLED display device or a plasma display device; embodiments of the present invention are not limited to these, as long as a 3D display effect can be achieved. The 3D scene display apparatus includes structural components such as a display screen, a display module and a driving module; most importantly for the embodiment of the present invention, it includes the structural units that achieve the functional purposes of the embodiment. As shown in FIG. 7, the 3D scene display apparatus mainly includes:
  • the loading unit 21 is configured to load 3D scene information.
  • the 3D scene information stores a plurality of 3D image information, and each 3D image information carries corresponding shooting position information.
  • the 3D scene information loaded in the loading unit 21 can be obtained by: determining a 3D scene to be photographed; setting the image capturing device at a preset starting shooting point, and correspondingly recording the current starting shooting point. Position information; moving the image capturing device to capture in the range of the viewing angle of the 3D scene, and correspondingly recording all the shooting position information on the path through which the movement is performed; reconstructing each of the left and right viewpoint image information obtained by the image, A plurality of 3D image information is formed.
  • The determining unit 22 is configured to determine the current viewer's observation position information in real time when a viewer is detected within the current visible angle range.
  • the observation position information is position information of the current viewer relative to the 3D scene display device.
  • the determining unit 22 is configured to capture the viewer's eye information when the viewer is detected within the current viewing angle range, and determine the viewer's observed position information based on the captured eye information.
  • the processing unit 23 is configured to determine 3D image information to be displayed according to the observed location information and the 3D scene information.
  • the processing unit 23 is configured to: find 3D scene information according to the observed position information, and use the found 3D image information corresponding to the observed position information as the 3D image information to be displayed.
  • The display unit 24 is configured to display the 3D content corresponding to the 3D image information to be displayed.
  • Illustratively, the 3D scene display apparatus may further include: a receiving unit configured to receive a 3D image switching instruction when the viewer does not move within the visible angle range of the 3D scene display apparatus; and a parsing unit configured to parse the 3D image switching instruction received by the receiving unit and determine the switching type and the post-switching shooting position information. Accordingly, the processing unit is further configured to retrieve the 3D image information to be switched to from the 3D scene information according to the post-switching shooting position information determined by the parsing unit, and the display unit is further configured to switch to that 3D image information according to the switching type determined by the parsing unit and display its corresponding 3D content. The 3D image switching instruction includes at least: a rotation switching instruction, a zoom-out switching instruction, a zoom-in switching instruction and a movement switching instruction.
  • The 3D scene information loaded here is likewise obtained by moving the image capture device to shoot within the visible angle range while recording all shooting position information along the path of the movement. The image capture device moves by a certain step each time, which those skilled in the art can set as needed, for example in view of the computation speed and the amount of data to be processed; preferably, the moving step of the camera should not be smaller than the human interpupillary distance, and the step may be set equal to the pupil distance.
  • In summary, embodiments of the present invention load 3D scene information in a 3D scene display apparatus, the 3D scene information storing a plurality of pieces of 3D image information each carrying corresponding shooting position information, and determine the 3D image information to be displayed from the determined observation position information of the current viewer. This ensures that the viewer can see different 3D pictures at any observation position within the visible angle range and, as far as possible, experiences the effect of watching the 3D scene in a real setting.
  • Specifically: the 3D scene information stores a plurality of pieces of 3D image information, each carrying corresponding shooting position information; when a viewer is detected within the current visible angle range, the current viewer's observation position information, i.e. the position information of the current viewer relative to the 3D scene display apparatus, is determined in real time; the 3D image information to be displayed is then determined from the observation position information and the 3D scene information, and the corresponding 3D content is displayed. The viewer is thereby guaranteed to see different 3D picture content at different observation positions within the visible angle range, effectively improving the viewing experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A 3D scene display method and apparatus are provided. The 3D scene display method is applied to a 3D scene display apparatus and includes: loading 3D scene information, wherein the 3D scene information stores a plurality of pieces of 3D image information, each piece carrying corresponding shooting position information; when a viewer is detected within the visible angle range of the 3D scene display apparatus, determining the current viewer's observation position information in real time, wherein the observation position information is the position information of the current viewer relative to the 3D scene display apparatus; determining the 3D image information to be displayed according to the observation position information and the 3D scene information; and displaying the 3D content corresponding to the 3D image information to be displayed. In this way, the 3D scene display method and apparatus according to embodiments of the present invention can ensure that a viewer sees different 3D pictures at any observation position within the visible angle range.

Description

3D scene display method and apparatus
Technical Field
Embodiments of the present invention relate to a 3D scene display method and apparatus.
Background Art
At present, with the continuous development of display technology, 3D display technology has attracted much attention. 3D display technology can make pictures appear stereoscopic and lifelike; its most basic principle is that the left and right human eyes each receive a different picture, which the brain then superimposes and reconstructs to form an image with a stereoscopic effect.
However, existing 3D display technology only lets a viewer see a single 3D scene and cannot show the 3D scene from different angles. As shown in FIG. 1, where N is a display device, taking the hexahedron M in the 3D scene as an example: in general, standing at observation position A, the viewer can see only one 3D picture of the hexahedron, in which only faces a, b and c are visible, while the other faces of the hexahedron cannot be seen; when the viewer moves to observation position B, the same 3D picture as before is still seen. The viewer cannot experience the visual effect of obtaining different 3D pictures from different angles as in reality, which degrades the user's viewing experience.
Summary of the Invention
Embodiments of the present invention provide a 3D image scene display method and apparatus, which solve the problem in the prior art that a single viewer cannot view different 3D pictures in a 3D scene as the observation position changes.
In one aspect, an embodiment of the present invention provides a 3D scene display method applied to a 3D scene display apparatus, including: loading 3D scene information, wherein the 3D scene information stores a plurality of pieces of 3D image information, each piece carrying corresponding shooting position information; when a viewer is detected within the visible angle range of the 3D scene display apparatus, determining the current viewer's observation position information in real time, wherein the observation position information is the position information of the current viewer relative to the 3D scene display apparatus; determining the 3D image information to be displayed according to the observation position information and the 3D scene information; and displaying the 3D content corresponding to the 3D image information to be displayed.
In another aspect, an embodiment of the present invention provides a 3D scene display apparatus, including: a loading unit configured to load 3D scene information, wherein the 3D scene information stores a plurality of pieces of 3D image information, each piece carrying corresponding shooting position information; a determining unit configured to determine the current viewer's observation position information in real time when a viewer is detected within the visible angle range of the 3D scene display apparatus, wherein the observation position information is the position information of the current viewer relative to the 3D scene display apparatus; a processing unit configured to determine the 3D image information to be displayed according to the observation position information and the 3D scene information; and a display unit configured to display the 3D content corresponding to the 3D image information to be displayed.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic diagram of a viewer in the prior art seeing the same 3D picture at positions A and B within the visible angle range;
FIG. 2 is a schematic flowchart of a 3D scene display method according to Embodiment 1 of the present invention;
FIG. 3 is a simple schematic diagram of a 3D scene M given as an example in Embodiment 1 of the present invention;
FIG. 4(a) is a schematic diagram of a camera S shooting the 3D scene M;
FIG. 4(b) is a schematic diagram of a viewer R viewing within the visible angle range of a 3D scene display apparatus N;
FIG. 5 is a schematic diagram of the different 3D pictures seen by a viewer at positions A and B within the visible angle range;
FIG. 6 is a schematic diagram of a viewer viewing different 3D pictures at position A;
FIG. 7 is a schematic structural diagram of a 3D scene display apparatus according to Embodiment 2 of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The technical solutions of the present invention are described in detail below through specific embodiments; the present invention includes but is not limited to the following embodiments.
Embodiment 1:
FIG. 2 is a schematic flowchart of a 3D scene display method according to Embodiment 1 of the present invention. The method is applied to a display scenario with a single viewer who moves within the visible angle range of the 3D scene display apparatus, and mainly includes the following steps:
Step 11: Load 3D scene information.
The 3D scene information stores a plurality of pieces of 3D image information, and each piece of 3D image information carries corresponding shooting position information.
Illustratively, the 3D scene information is obtained as follows: determining the 3D scene to be photographed; placing an image capture device at a preset starting shooting point and recording the shooting position information of that starting point; moving the image capture device within the visible angle range of the 3D scene to shoot, while recording all shooting position information along the path of the movement; and reconstructing each captured pair of left- and right-viewpoint images to form a plurality of pieces of 3D image information.
Illustratively, as shown in FIG. 3, after the 3D scene M is determined, it needs to be photographed from all directions while the orientation information of every shooting angle is recorded; the dashed line in the figure shows only part of the shooting path. It should be noted that, because this orientation information is correlated with the subsequent viewer's observation position information, and the observation position information is limited to the visible angle range on the display-screen side of the 3D scene display apparatus (the other side of the screen cannot be involved in observation position information), the orientation information should correspond to the observation position information: the observation position information overlaps with part of the orientation information, and the two should be expressed in the same way and against the same reference as far as possible. The image capture device used for shooting may be a camera with acquisition and image processing functions.
FIG. 4(a) and FIG. 4(b) are respectively a schematic diagram of a camera S shooting the 3D scene M and a schematic diagram of a viewer R viewing within the visible angle range of the 3D scene display apparatus N. As shown in FIG. 4(a), the camera S can shoot the 3D scene M from any shootable angle in three-dimensional space, so the recorded shooting position information can be orientation information within any shootable angle range in three-dimensional space. For example, H pieces of 3D image information are reconstructed after shooting, and each piece also carries corresponding shooting position information. The shooting position information can be expressed in any physical coordinates capable of representing orientation in the 3D scene and indicates the position relative to the 3D scene M, for example longitude and latitude, or the straight-line distance from the 3D scene M; for instance, the 3D scene M may be taken as the coordinate origin, and the shooting position information may be X, Y and Z coordinates relative to that origin. As shown in FIG. 4(b), the viewer R can only see the 3D picture shown on the display screen within the visible angle range of the 3D scene display apparatus N, so the viewer R's observation position information is limited to orientation information within the visible angle range.
It should be noted that embodiments of the present invention may also acquire the 3D scene information by other means such as system modeling.
Step 12: When a viewer is detected within the current visible angle range, determine the current viewer's observation position in real time to obtain the observation position information.
The observation position information is the position information of the current viewer relative to the 3D scene display apparatus.
Illustratively, obtaining the current viewer's observation position information in real time in step 12 may consist of capturing the viewer's eye information in real time and determining the observation position information from the captured eye information. It should be noted here that the ways of obtaining the observation position information in embodiments of the present invention include but are not limited to the above; the viewer's observation position can also be determined by capturing other body parts of the viewer, for example the head.
The viewer's eye information may be captured by human-eye detection based on a single camera or multiple cameras; in addition, human-eye detection may be performed by an infrared detection device arranged on the display screen of the 3D scene display apparatus. Illustratively, the infrared detection device may also be integrated into the camera, the two jointly assisting human-eye detection to improve accuracy.
Step 13: determining, according to the observation position information and the 3D scene information, 3D image information to be presented.
Exemplarily, step 13 may include: searching the 3D scene information according to the observation position information, and taking the found 3D image information corresponding to the observation position information as the 3D image information to be presented.
In actual operation, an information parameter correspondence table composed of each piece of 3D image information and the shooting position information it carries may be established in advance; the observation position information is then compared one by one with each piece of shooting position information in the table, and, according to the preset association between observation positions and shooting positions, the 3D image information corresponding to the observation position information is taken as the 3D image information to be presented.
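In its simplest form, the correspondence-table lookup described in the preceding paragraph can match the observation position to the nearest recorded shooting position. The coordinates, table contents and helper names below are illustrative; the coordinate convention (scene at the origin) follows the text.

```python
# Sketch of the "information parameter correspondence table": each 3D image is
# keyed by the shooting position recorded with it, and the viewer's observation
# position is associated with the closest recorded shooting position.
import math

correspondence_table = {
    (0.0, 0.0, 2.0): "image_front",
    (1.5, 0.0, 1.5): "image_right",
    (-1.5, 0.0, 1.5): "image_left",
}


def image_for_position(observer_pos):
    """Pick the 3D image whose shooting position is closest to the observer."""
    return min(
        correspondence_table.items(),
        key=lambda item: math.dist(item[0], observer_pos),
    )[1]


print(image_for_position((1.2, 0.1, 1.6)))  # -> image_right
```

With a dense capture path (one entry per camera step), this nearest-neighbour association approximates the "preset association between observation positions and shooting positions" in the text.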
Step 14: presenting 3D content corresponding to the 3D image information to be presented.
Exemplarily, if the viewer does not move within the viewable angle range of the 3D scene presenting apparatus, the above method may further include the following steps:
First step: receiving a 3D image switch instruction, where the 3D image switch instruction includes at least: a rotate switch instruction, a zoom-out switch instruction, a zoom-in switch instruction and a move switch instruction.
The 3D image switch instruction may be key-value information selected and sent by the viewer with a remote control or a similar control terminal, or may be the viewer's gesture information (for a 3D scene presenting apparatus with a touch screen).
Second step: parsing the 3D image switch instruction to determine a switch type and post-switch shooting position information;
Third step: searching the 3D scene information for the 3D image information to switch to, according to the determined post-switch shooting position information;
Fourth step: switching to the 3D image information to switch to according to the switch type, and presenting the 3D content corresponding to that 3D image information.
It should be noted that the solution of the first to fourth steps mainly addresses the case where the viewer does not move. For example, when the viewer has moved to the boundary of the viewable angle range, continuing to move away from the 3D scene presenting apparatus in order to see other 3D pictures would take the viewer into the non-viewable range. The viewer's limited observation position information therefore prevents viewing any 3D picture other than those corresponding to that observation position information. In other words, when the viewer moves and relies on automatic recognition alone, only a limited set of different 3D pictures can be seen, not all of them.
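A minimal sketch of the four switching steps (receive, parse, look up, switch). The instruction format and the position arithmetic for each switch type are illustrative assumptions; only the four instruction types come from the text.

```python
# Sketch: step 2 of the switching flow. Parse a 3D image switch instruction,
# determine its type, and derive the post-switch shooting position, which
# steps 3-4 would then use to look up and present the new 3D image.
import math

SWITCH_TYPES = {"rotate", "zoom_in", "zoom_out", "move"}


def parse_instruction(instruction, current_pos):
    """Return (switch type, post-switch shooting position)."""
    kind = instruction["type"]
    if kind not in SWITCH_TYPES:
        raise ValueError(f"unknown switch type: {kind}")
    x, y, z = current_pos
    if kind == "rotate":
        # Rotate the viewpoint around the scene's vertical axis (scene at origin).
        a = math.radians(instruction["degrees"])
        x, z = x * math.cos(a) - z * math.sin(a), x * math.sin(a) + z * math.cos(a)
    elif kind == "zoom_in":
        x, y, z = x * 0.5, y * 0.5, z * 0.5  # move halfway toward the scene
    elif kind == "zoom_out":
        x, y, z = x * 2.0, y * 2.0, z * 2.0  # move away from the scene
    elif kind == "move":
        dx, dy, dz = instruction["offset"]
        x, y, z = x + dx, y + dy, z + dz
    return kind, (x, y, z)


kind, new_pos = parse_instruction({"type": "zoom_in"}, (0.0, 0.0, 4.0))
# -> ("zoom_in", (0.0, 0.0, 2.0)); new_pos feeds the lookup in steps 3-4
```

A remote-control key press or a touch gesture would simply be translated into one of these instruction dictionaries before parsing.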
The solution of Embodiment 1 above is described in more detail below through several specific examples.
Example 1:
After the 3D scene presenting apparatus is initialized and powered on, the 3D scene information to be presented is loaded. Before presentation, the apparatus first detects whether there is a viewer within the current viewable angle range. If no viewer is detected, the 3D image information in the 3D scene information is presented according to a preset playback rule; for example, only the 3D image information corresponding to the shooting position information of the starting shooting point may be played. If a viewer is detected, the apparatus further judges whether the viewer's observation position has moved; if it has, the apparatus detects human-eye information in real time with its built-in infrared camera and finally determines the viewer's real-time observation position information from that human-eye information. Take Fig. 5, where the viewer moves horizontally from position A to position B in front of the display screen of the 3D scene presenting apparatus, as an example. When the viewer is at position A, the orientation information of position A is first determined; the 3D scene information is then searched according to that orientation information, the 3D image information to be presented is determined from it, and it is presented. As shown in the figure, at position A the viewer sees only face a1 of the hexahedron. When the viewer moves to position B, the current viewer's orientation information is determined anew, the 3D image information now to be presented is determined from it, and it is presented; as shown in the figure, at position B the viewer sees the three faces a1, a2 and a3. The viewer is thus, as it were, placed inside the actual 3D scene. Moreover, Example 1 assumes that positions A and B adjoin and that the time to switch between the two pictures is negligible, so the viewer should see a fairly continuous 3D picture.
It should be emphasized here that, during actual viewing, the viewer's left and right eyes each receive a different picture, and the brain superimposes and fuses the image information to produce the 3D picture effect. All embodiments of the present invention use this kind of imaging.
This solution ensures that, when moving within the viewable angle range of the 3D scene presenting apparatus, the viewer sees 3D pictures from different angles, and lets the viewer experience, as far as possible, the visual effect of watching the presented 3D scene in a real viewing scenario.
Example 2:
Considering that the viewable angle range of every display device similar to the 3D scene presenting apparatus is limited, the viewer can see 3D pictures (3D image content) from different angles while moving, but cannot see the side of that picture that faces away in the actual 3D scene. Therefore, to further improve the viewing experience, Example 2 of the present invention may also allow manual adjustment to assist the viewer. Example 2 mainly uses the following two approaches.
Approach 1:
For a 3D scene presenting apparatus with a touch function, various touch operations can be performed on the apparatus and converted into 3D image switch instructions that the apparatus can parse. The switch type and the post-switch shooting position information are then determined from the parsing result, the 3D image information to switch to is found in the 3D scene information, and the 3D content corresponding to that 3D image information is presented.
For example, suppose the 3D picture on the left of Fig. 6 is currently presented and the viewer wants to see the 3D picture of the tilted hexahedron. The viewer applies the touch operation corresponding to a rotation to the display screen; the 3D scene presenting apparatus receives the 3D image switch instruction corresponding to that touch operation, parses it, and determines that the required switch type is rotation, while the post-switch shooting position information can be determined from the extent of the touch operation on the display screen. Finally, the apparatus switches to the 3D picture on the right of Fig. 6.
Approach 2:
However, not every 3D scene presenting apparatus has a touch function. For an apparatus without one, a control device used together with the apparatus, such as a remote control, can be used: by clicking or long-pressing preset keys, a 3D image switch instruction is sent to the apparatus, which receives and parses it to determine the switch type and the post-switch shooting position information. The subsequent operations are the same as in Approach 1 and are not repeated here.
It should be noted that, in embodiments of the present invention, the 3D image switch instructions include, but are not limited to, the rotate, zoom-out, zoom-in and move switch instructions. The 3D scene presenting apparatus parses out the corresponding switch type and post-switch shooting position information according to a preset instruction-parsing scheme.
It should be noted that, regarding "moving the image acquisition device within the viewable angle range of the 3D scene to shoot, and recording all the shooting position information along the traversed path", the image acquisition device moves by a certain step each time, which a person skilled in the art may set as needed, for example in view of computation speed and the amount of data to be processed. Exemplarily, since the interpupillary distance of human eyes is 65 mm, the camera's step should not be smaller than the interpupillary distance, and the step may be set to the interpupillary distance.
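As a quick check of the step-size rule above: with the step fixed at the 65 mm interpupillary distance, the number of shooting positions follows directly from the path length. The half-circle capture path and its radius below are illustrative assumptions.

```python
# Sketch: count the shooting positions needed along a capture path when the
# camera advances by one interpupillary distance (65 mm) per stop.
import math

IPD_MM = 65.0


def capture_positions(path_length_mm, step_mm=IPD_MM):
    """Positions along the path at a fixed step, including the starting point."""
    return int(path_length_mm // step_mm) + 1


# Example: a half-circle path of radius 2 m around the scene.
arc = math.pi * 2000.0         # about 6283 mm
print(capture_positions(arc))  # 6283 // 65 = 96, plus the start -> 97
```

A smaller step would increase both the capture count and the data to be processed, which is the trade-off the text leaves to the practitioner.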
Belonging to the same inventive concept as the 3D scene presenting method provided by Embodiment 1 above, embodiments of the present invention also provide a 3D scene presenting apparatus, described in detail below through Embodiment 2.
Embodiment 2:
In Embodiment 2 of the present invention, the 3D scene presenting apparatus may be a display device, such as a liquid crystal display device, an OLED display device or a plasma display device; embodiments of the present invention do not limit this, as long as a 3D display effect can be achieved. The 3D scene presenting apparatus includes structural components such as a display screen, a display module and a driving module; most importantly, in this embodiment of the present invention, it also includes the structural units that achieve the purposes of the embodiments of the present invention. As shown in Fig. 7, the 3D scene presenting apparatus mainly includes:
a loading unit 21 for loading 3D scene information, where a plurality of pieces of 3D image information are stored in the 3D scene information and each piece of 3D image information carries corresponding shooting position information.
Exemplarily, the 3D scene information loaded by the loading unit 21 may be acquired as follows: determining a 3D scene to be shot; placing an image acquisition device at a preset starting shooting point and recording the shooting position information of the current starting shooting point; moving the image acquisition device within the viewable angle range of the 3D scene to shoot, and recording all the shooting position information along the traversed path; and performing image reconstruction on each pair of captured left- and right-viewpoint image information to form a plurality of pieces of 3D image information.
a determining unit 22 for determining, when a viewer is detected within the current viewable angle range, the observation position information of the current viewer in real time, where the observation position information is position information of the current viewer relative to the 3D scene presenting apparatus.
Exemplarily, the determining unit 22 is configured to: when a viewer is detected within the current viewable angle range, capture the viewer's eye information and determine the viewer's observation position information from the captured eye information.
a processing unit 23 for determining, according to the observation position information and the 3D scene information, 3D image information to be presented.
Exemplarily, the processing unit 23 is configured to: search the 3D scene information according to the observation position information, and take the found 3D image information corresponding to the observation position information as the 3D image information to be presented.
a presenting unit 24 for presenting 3D content corresponding to the 3D image information to be presented.
Exemplarily, to improve the viewer's viewing experience more effectively, the 3D scene presenting apparatus may further include: a receiving unit for receiving a 3D image switch instruction when the viewer does not move within the viewable angle range of the 3D scene presenting apparatus; and a parsing unit for parsing the 3D image switch instruction received by the receiving unit to determine the switch type and the post-switch shooting position information. The processing unit is then further used to search the 3D scene information, according to the post-switch shooting position information determined by the parsing unit, for the 3D image information to switch to. The presenting unit is further used to switch to that 3D image information according to the switch type determined by the parsing unit, and to present the 3D content corresponding to it. The 3D image switch instruction includes at least: a rotate switch instruction, a zoom-out switch instruction, a zoom-in switch instruction and a move switch instruction.
It should be noted that, regarding "moving the image acquisition device within the viewable angle range of the 3D scene to shoot, and recording all the shooting position information along the traversed path", the image acquisition device moves by a certain step each time, which a person skilled in the art may set as needed, for example in view of computation speed and the amount of data to be processed. Exemplarily, since the interpupillary distance of human eyes is 65 mm, the camera's step should not be smaller than the interpupillary distance, and the step may be set to the interpupillary distance.
It can be seen from the above technical solutions that embodiments of the present invention load, into the 3D scene presenting apparatus, 3D scene information in which a plurality of pieces of 3D image information each carrying corresponding shooting position information are stored, and further determine the 3D image information to be presented according to the determined observation position information of the current viewer, thereby ensuring that the viewer sees a different 3D picture at any observation position within the viewable angle range and, as far as possible, experiences the effect of watching the 3D scene in the actual scene.
In the above technical solutions, 3D scene information is loaded, in which a plurality of pieces of 3D image information are stored, each carrying corresponding shooting position information; when a viewer is detected within the current viewable angle range, the observation position information of the current viewer, i.e. the position information of the current viewer relative to the 3D scene presenting apparatus, is determined in real time; the 3D image information to be presented is then determined according to the observation position information and the 3D scene information, and the 3D content corresponding to it is presented. This ensures that the viewer can see different 3D picture content at different observation positions within the viewable angle range, effectively improving the viewing experience.
Although embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art may make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to cover them.
This application claims priority to Chinese Patent Application No. 201510512281.3, filed on August 19, 2015, the disclosure of which is incorporated herein by reference in its entirety as part of this application.

Claims (16)

  1. A 3D scene presenting method, applied to a 3D scene presenting apparatus, comprising:
    loading 3D scene information, wherein a plurality of pieces of 3D image information are stored in the 3D scene information, and each piece of 3D image information carries corresponding shooting position information;
    when a viewer is detected within the viewable angle range of the 3D scene presenting apparatus, determining observation position information of the current viewer in real time, wherein the observation position information is position information of the current viewer relative to the 3D scene presenting apparatus;
    determining, according to the observation position information and the 3D scene information, 3D image information to be presented; and
    presenting 3D content corresponding to the 3D image information to be presented.
  2. The 3D scene presenting method according to claim 1, wherein the 3D scene information is acquired as follows:
    determining a 3D scene to be shot;
    placing an image acquisition device at a preset starting shooting point, and recording the shooting position information of the current starting shooting point;
    moving the image acquisition device within the viewable angle range of the 3D scene to shoot, and recording all the shooting position information along the traversed path;
    performing image reconstruction on each pair of captured left- and right-viewpoint image information to form the plurality of pieces of 3D image information.
  3. The 3D scene presenting method according to claim 1, wherein determining the observation position information of the current viewer in real time comprises:
    capturing the viewer's eye information in real time, and determining the viewer's observation position information from the captured eye information.
  4. The 3D scene presenting method according to claim 1, wherein determining, according to the observation position information and the 3D scene information, the 3D image information to be presented comprises:
    searching the 3D scene information according to the observation position information, and taking the found 3D image information corresponding to the observation position information as the 3D image information to be presented.
  5. The 3D scene presenting method according to any one of claims 1-4, wherein, if the viewer does not move within the viewable angle range of the 3D scene presenting apparatus, the method further comprises:
    receiving a 3D image switch instruction;
    parsing the 3D image switch instruction to determine a switch type and post-switch shooting position information;
    searching the 3D scene information for the 3D image information to switch to, according to the determined post-switch shooting position information;
    switching to the 3D image information to switch to according to the switch type, and presenting the 3D content corresponding to that 3D image information;
    wherein the 3D image switch instruction comprises at least: a rotate switch instruction, a zoom-out switch instruction, a zoom-in switch instruction and a move switch instruction.
  6. The 3D scene presenting method according to claim 1, wherein determining, according to the observation position information and the 3D scene information, the 3D image information to be presented comprises:
    taking, according to a preset association between observation positions and the shooting position information carried by the plurality of pieces of 3D image information, the 3D image information corresponding to the observation position as the 3D image information to be presented.
  7. The 3D scene presenting method according to claim 6, wherein determining, according to the observation position information and the 3D scene information, the 3D image information to be presented further comprises, before taking the 3D image information corresponding to the observation position as the 3D image information to be presented according to the preset association:
    establishing an information parameter correspondence table between each of the plurality of pieces of 3D image information and the shooting position information it carries.
  8. The 3D scene presenting method according to claim 5, wherein the 3D image switch instruction is input manually by the viewer.
  9. A 3D scene presenting apparatus, comprising:
    a loading unit configured to load 3D scene information, wherein a plurality of pieces of 3D image information are stored in the 3D scene information, and each piece of 3D image information carries corresponding shooting position information;
    a determining unit configured to, when a viewer is detected within the viewable angle range of the 3D scene presenting apparatus, determine observation position information of the current viewer in real time, wherein the observation position information is position information of the current viewer relative to the 3D scene presenting apparatus;
    a processing unit configured to determine, according to the observation position information and the 3D scene information, 3D image information to be presented; and
    a presenting unit configured to present 3D content corresponding to the 3D image information to be presented.
  10. The 3D scene presenting apparatus according to claim 9, wherein the 3D scene information is acquired as follows:
    determining a 3D scene to be shot;
    placing an image acquisition device at a preset starting shooting point, and recording the shooting position information of the current starting shooting point;
    moving the image acquisition device within the viewable angle range of the 3D scene to shoot, and recording all the shooting position information along the traversed path;
    performing image reconstruction on each pair of captured left- and right-viewpoint image information to form the plurality of pieces of 3D image information.
  11. The 3D scene presenting apparatus according to claim 9, wherein the determining unit is configured to: capture the viewer's eye information, and determine the viewer's observation position information from the captured eye information.
  12. The 3D scene presenting apparatus according to claim 9, wherein the processing unit is configured to: search the 3D scene information according to the observation position information, and take the found 3D image information corresponding to the observation position information as the 3D image information to be presented.
  13. The 3D scene presenting apparatus according to any one of claims 9-12, further comprising:
    a receiving unit configured to receive a 3D image switch instruction when the viewer does not move within the viewable angle range of the 3D scene presenting apparatus;
    a parsing unit configured to parse the 3D image switch instruction received by the receiving unit, and to determine a switch type and post-switch shooting position information;
    wherein the processing unit is further configured to search the 3D scene information, according to the post-switch shooting position information determined by the parsing unit, for the 3D image information to switch to;
    the presenting unit is further configured to switch to the 3D image information to switch to according to the switch type determined by the parsing unit, and to present the 3D content corresponding to that 3D image information;
    and the 3D image switch instruction comprises at least: a rotate switch instruction, a zoom-out switch instruction, a zoom-in switch instruction and a move switch instruction.
  14. The 3D scene presenting apparatus according to claim 9, wherein the processing unit is configured to: take, according to a preset association between observation positions and the shooting position information carried by the plurality of pieces of 3D image information, the 3D image information corresponding to the observation position as the 3D image information to be presented.
  15. The 3D scene presenting apparatus according to claim 14, wherein the processing unit is further configured to: establish an information parameter correspondence table between each of the plurality of pieces of 3D image information and the shooting position information it carries.
  16. The 3D scene presenting apparatus according to claim 13, wherein the 3D image switch instruction is input manually by the viewer.
PCT/CN2016/071616 2015-08-19 2016-01-21 Method and apparatus for presenting 3D scene WO2017028498A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/037,511 US10045007B2 (en) 2015-08-19 2016-01-21 Method and apparatus for presenting 3D scene

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510512281.3 2015-08-19
CN201510512281.3A CN105120251A (zh) 2015-08-19 Method and apparatus for presenting 3D scene

Publications (1)

Publication Number Publication Date
WO2017028498A1 true WO2017028498A1 (zh) 2017-02-23

Family

ID=54668121

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/071616 WO2017028498A1 (zh) 2015-08-19 2016-01-21 3d场景展示方法及装置

Country Status (3)

Country Link
US (1) US10045007B2 (zh)
CN (1) CN105120251A (zh)
WO (1) WO2017028498A1 (zh)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105120251A (zh) 2015-08-19 2015-12-02 京东方科技集团股份有限公司 Method and apparatus for presenting 3D scene
CN106896732B (zh) * 2015-12-18 2020-02-04 美的集团股份有限公司 Display method and apparatus for household appliances
CN105574914B (zh) * 2015-12-18 2018-11-30 深圳市沃优文化有限公司 Apparatus and method for producing 3D dynamic scenes
CN106162204A (zh) * 2016-07-06 2016-11-23 传线网络科技(上海)有限公司 Panoramic video generation and playback method, apparatus and system
CN107623812A (zh) * 2016-07-14 2018-01-23 幸福在线(北京)网络技术有限公司 Method, related apparatus and system for realizing real-scene presentation
EP3511764B1 (en) * 2016-09-30 2021-07-21 Huawei Technologies Co., Ltd. 3d display method and user terminal
CN107957772B (zh) * 2016-10-17 2021-09-21 阿里巴巴集团控股有限公司 Processing method for acquiring VR images in a real scene and method for realizing a VR experience
CN107146278B (zh) * 2017-04-18 2020-05-26 深圳市智能现实科技有限公司 Scene modeling method and apparatus
CN109254660B (zh) 2018-08-31 2020-11-17 歌尔光学科技有限公司 Content display method, apparatus and device
CN110908499A (zh) * 2018-09-18 2020-03-24 西安中兴新软件有限责任公司 3D image display method and apparatus, and terminal
CN109451296B (zh) * 2018-09-26 2020-09-08 深圳市新致维科技有限公司 Naked-eye 3D system viewable by multiple viewers and 3D picture display method
CN111416949A (zh) * 2020-03-26 2020-07-14 上海擎天电子科技有限公司 Real-scene display apparatus
CN115695771A (zh) * 2021-07-28 2023-02-03 京东方科技集团股份有限公司 Display apparatus and display method thereof
CN114422819A (zh) * 2022-01-25 2022-04-29 纵深视觉科技(南京)有限责任公司 Video display method, apparatus, device, system and medium
CN114500846B (zh) * 2022-02-12 2024-04-02 北京蜂巢世纪科技有限公司 Method, apparatus, device and readable storage medium for switching viewing angles of a live event

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008041313A1 (fr) * 2006-10-02 2008-04-10 Pioneer Corporation Image display device
CN101729920A (zh) * 2009-11-23 2010-06-09 南京大学 Free-viewpoint stereoscopic video display method
CN103517060A (zh) * 2013-09-03 2014-01-15 展讯通信(上海)有限公司 Display control method and apparatus for a terminal device
CN104349155A (zh) * 2014-11-25 2015-02-11 深圳超多维光电子有限公司 Simulated stereoscopic image display method and display device
CN104506841A (zh) * 2014-12-31 2015-04-08 宇龙计算机通信科技(深圳)有限公司 Method and apparatus for producing and playing 3D files on a multi-camera terminal
CN104618706A (zh) * 2015-01-12 2015-05-13 深圳市亿思达科技集团有限公司 Mobile terminal and method for time-division multi-viewer multi-angle holographic stereoscopic display
CN104679227A (zh) * 2013-12-02 2015-06-03 创世界科技有限公司 Product display implementation method
CN104820497A (zh) * 2015-05-08 2015-08-05 东华大学 3D interactive display system based on augmented reality
CN105120251A (zh) * 2015-08-19 2015-12-02 京东方科技集团股份有限公司 Method and apparatus for presenting 3D scene

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3992629B2 (ja) * 2003-02-17 2007-10-17 株式会社ソニー・コンピュータエンタテインメント Image generation system, image generation apparatus and image generation method
US8189035B2 (en) * 2008-03-28 2012-05-29 Sharp Laboratories Of America, Inc. Method and apparatus for rendering virtual see-through scenes on single or tiled displays
US9432661B2 (en) * 2011-02-24 2016-08-30 Kyocera Corporation Electronic device, image display method, and image display program
JP6240963B2 (ja) * 2011-09-12 2017-12-06 インテル・コーポレーション Generating 3D perception from 2D images using motion parallax
WO2014035204A1 (ko) * 2012-08-30 2014-03-06 (주)지에스엠솔루션 Stereoscopic rendering method according to viewpoint
CN103488413B (zh) * 2013-04-26 2016-12-28 展讯通信(上海)有限公司 Touch device, and control method and apparatus for displaying a 3D interface on a touch device
US9838672B2 (en) * 2013-05-16 2017-12-05 Mediatek Inc. Apparatus and method for referring to motion status of image capture device to generate stereo image pair to auto-stereoscopic display for stereo preview

Also Published As

Publication number Publication date
CN105120251A (zh) 2015-12-02
US20170054972A1 (en) 2017-02-23
US10045007B2 (en) 2018-08-07


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 15037511

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16836363

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16836363

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/08/2018)
