WO2009140908A1 - Cursor processing method, device and system - Google Patents

Cursor processing method, device and system Download PDF

Info

Publication number
WO2009140908A1
WO2009140908A1 (PCT/CN2009/071844)
Authority
WO
WIPO (PCT)
Prior art keywords
cursor
image
depth
parallax
displacement information
Prior art date
Application number
PCT/CN2009/071844
Other languages
English (en)
French (fr)
Inventor
树贵明
Original Assignee
深圳华为通信技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳华为通信技术有限公司 filed Critical 深圳华为通信技术有限公司
Publication of WO2009140908A1 publication Critical patent/WO2009140908A1/zh

Links

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/08Cursor circuits
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components

Definitions

  • The present invention relates to image processing techniques, and more particularly to a method, apparatus and system for processing a cursor in a three-dimensional scene. Background
  • A stereoscopic view, or 3D view, is a view with a sense of scene depth obtained by the user's left and right eyes acquiring images independently.
  • Existing stereoscopic-view-based three-dimensional scene rendering is mainly based on the principle of human binocular parallax: because the two eyes occupy different positions, they receive two slightly different images of the same scene.
  • Two cameras separated by a certain interval capture two slightly different images of the same scene and display them to the viewer's left and right eyes respectively, so that the viewer perceives a scene with depth and layering, creating an immersive effect and a better user experience.
  • Existing computer graphical user interfaces operate with a planar two-dimensional cursor: the user selects an interface element by moving the cursor in the display plane with a mouse.
  • Because a planar two-dimensional cursor can move only within the display plane, it cannot satisfy the requirement of operating on stereoscopic interface elements lying at different depths in a three-dimensional stereoscopic display scene.
  • FIG. 1 is a schematic diagram of a three-dimensional stereoscopic display scene.
  • Window A, window B and window C lie at different depth levels in the three-dimensional display scene.
  • When the user needs to operate on an interface element of window A, the cursor must be displayed at the depth level of window A, shown as position A in the figure; when the user needs to operate on an interface element of window C, the cursor must be displayed at the depth level of window C, shown as position C in the figure.
  • Existing planar two-dimensional cursors cannot achieve such an operation.
  • The existing concept of a 3D mouse refers to a mouse capable of simultaneously outputting three or more kinds of displacement information.
  • Besides producing up/down and left/right displacement through a mouse ball or optical sensing, such devices can produce a third or further dimension of displacement with a scroll wheel, joystick, buttons, or a micro-electro-mechanical system (MEMS) device; such a 3D mouse, however, still works in a planar two-dimensional scene, solving only the generation and input of displacement information, not the display of the cursor at different depth levels.
  • Embodiments of the present invention provide a cursor processing method, apparatus and system, so that the pointing cursor of a control device such as a mouse can be displayed in real time at different depth levels of a three-dimensional stereoscopic display scene according to changes in depth displacement information.
  • An embodiment of the present invention provides a cursor processing method, including:
  • generating displacement information in three dimensions, the three dimensions including two planar dimensions and a depth dimension; adjusting, in real time according to the displacement information, the parallax of the two-eye views corresponding to the cursor; and outputting the parallax-adjusted two-eye views of the cursor to a three-dimensional display device.
  • An embodiment of the present invention further provides a cursor processing system, including a cursor generating device and a cursor adjusting device, wherein:
  • the cursor generating device is configured to generate displacement information in three dimensions, the three dimensions including two planar dimensions and a depth dimension; and
  • the cursor adjusting device is configured to adjust, in real time according to the displacement information generated by the cursor generating device, the parallax of the two-eye views corresponding to the cursor, and to output the parallax-adjusted two-eye views of the cursor to the three-dimensional display device.
  • An embodiment of the present invention further provides a cursor processing apparatus, including a cursor generating unit and a cursor adjusting unit, wherein:
  • the cursor generating unit is configured to generate displacement information in three dimensions, the three dimensions including two planar dimensions and a depth dimension; and
  • the cursor adjusting unit is configured to adjust, in real time according to the displacement information generated by the cursor generating unit, the parallax of the two-eye views corresponding to the cursor, and to output the parallax-adjusted two-eye views of the cursor to the three-dimensional display device.
  • Compared with the prior art, embodiments of the present invention have the following advantage:
  • the cursor in a three-dimensional stereoscopic display scene changes according to the displacement information of the depth dimension and is thus displayed in real time at different depth levels of the three-dimensional scene; this facilitates the user's operation in the three-dimensional stereoscopic display scene and gives it a stronger sense of realism.
  • FIG. 1 is a schematic diagram of an example of an existing three-dimensional stereoscopic display scene;
  • FIG. 2A is a flowchart of the cursor processing method according to method Embodiment 1 of the present invention;
  • FIG. 2B is a detailed flowchart of step 102 in method Embodiment 1;
  • FIG. 2C is a schematic diagram of the shooting principle of the binocular cameras in method Embodiment 1;
  • FIG. 2D is a flowchart of the threshold-based iterative algorithm in method Embodiment 1;
  • FIG. 3A is a flowchart of the stereo-image-pair acquisition method according to method Embodiment 2;
  • FIG. 3B is a schematic diagram of a camera device photographing the target scene in method Embodiment 2;
  • FIG. 3C is a flowchart of resetting the maximum parallax in method Embodiment 2;
  • FIG. 4A is a flowchart of the stereo-image-pair acquisition method according to method Embodiment 3;
  • FIG. 4B is a schematic diagram of the 3D scene modeling in method Embodiment 3;
  • FIG. 5 is a schematic structural diagram of the cursor processing system according to the system embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of the cursor processing apparatus according to the apparatus embodiment of the present invention.
  • Method Embodiment 1: An embodiment of the present invention provides a cursor processing method in a three-dimensional scene which, as shown in FIG. 2A, includes:
  • Step 101: The cursor generating device generates displacement information in three dimensions according to its displacement, the three dimensions including two planar dimensions and a depth dimension.
  • The cursor generating device is a device, such as a mouse, used to control the cursor.
  • The displacement information of the two planar dimensions can be obtained by moving the cursor generating device up, down, left and right, while the displacement information of the depth dimension can be produced by a depth control unit mounted on the cursor generating device.
  • The depth control unit is a unit for controlling cursor movement in the depth direction, and may be, for example, a scroll wheel, a button, or a MEMS device.
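As a concrete illustration of Step 101, the following sketch shows how a cursor generating device might fold planar motion counts and a depth-control input such as a scroll wheel into three-dimensional displacement information; the function name and gain values are illustrative assumptions, not taken from the patent.

```python
# Sketch only: map raw planar motion counts and depth-control (e.g. scroll
# wheel) detents to (dx, dy, dz) displacement information, as the cursor
# generating device of Step 101 might. Gains are arbitrary example values.

def displacement_from_events(dx_counts, dy_counts, wheel_detents,
                             plane_gain=1.0, depth_gain=5.0):
    dx = dx_counts * plane_gain      # planar dimension 1
    dy = dy_counts * plane_gain      # planar dimension 2
    dz = wheel_detents * depth_gain  # depth dimension from the wheel
    return dx, dy, dz

print(displacement_from_events(3, -2, 1))  # (3.0, -2.0, 5.0)
```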
  • Step 102: Adjust, in real time according to the displacement information, the parallax of the two-eye views corresponding to the cursor.
  • In this embodiment the cursor uses a planar two-dimensional image, which therefore has to be converted into a three-dimensional stereoscopic image with parallax; the conversion process includes:
  • Step 102A: Generate a depth map of the planar two-dimensional image.
  • The foreground image and the background image of the planar two-dimensional image can first be obtained by segmentation; then, according to the reference depth in the three-dimensional scene, different depth information is set for the foreground image and the background image.
  • The depth information can be set using the principle of binocular cameras. As shown in FIG. 2C, two cameras with focal length f, separated by a distance B, photograph a target point M at depth Z in the scene; the images of M in the left and right cameras are m_l and m_r. Once the image points m_l and m_r are obtained by image matching, with parallax d the depth of the target point follows from the relation d = x_l - x_r = (f/Z)(X_l - X_r) = f·B/Z, where x_l and x_r are the X-axis coordinates of m_l and m_r, and X_l and X_r are the X-axis coordinates of the two camera lenses, whose separation is B; hence Z = f·B/d.
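The binocular relation of FIG. 2C can be exercised numerically; a minimal sketch (the function name and sample values are ours, not the patent's):

```python
def depth_from_disparity(f, b, d):
    """Binocular-camera relation of FIG. 2C: with focal length f,
    baseline b, and parallax d = x_l - x_r, the target depth is
    Z = f * b / d."""
    if d == 0:
        raise ValueError("zero parallax corresponds to a point at infinity")
    return f * b / d

# e.g. image points 4 px apart, with f = 800 px and b = 0.06 m:
print(depth_from_disparity(800, 0.06, 4))  # 12.0 (metres)
```

Smaller parallax thus means greater depth, which is why moving the cursor "deeper" amounts to shrinking its parallax.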
  • The reference depth is the depth represented by the position Z = 0 in the scene; the foreground image is the whole cursor picture, such as an arrow pattern;
  • the background image is the display background at the reference depth, for example set to the background of the plane in which the display screen lies.
  • For an ordinary planar two-dimensional image, the foreground and background images can be obtained with a threshold-based iterative segmentation algorithm.
  • The specific iterative algorithm, shown in FIG. 2D, mainly includes the following steps:
  • 1) Find the maximum and minimum grey values of the planar two-dimensional image, denoted ZMAX and ZMIN, and let the initial threshold be T0 = (ZMAX + ZMIN)/2; 2) segment the image into foreground and background according to the threshold TK, and compute their average grey values ZO and ZB; 3) compute the new threshold TK+1 = (ZO + ZB)/2; 4) if TK = TK+1, the resulting value is the threshold and the iteration ends; otherwise return to 2) and continue iterating.
  • Thresholds obtained with this iterative algorithm segment the image well: they distinguish the main regions of the foreground and background of a planar two-dimensional image, but still discriminate poorly in its fine details.
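The threshold-based iterative algorithm of FIG. 2D can be sketched in plain Python; the helper name and the grey-level-list input format are our assumptions:

```python
def iterative_threshold(gray):
    """Threshold-based iterative algorithm of FIG. 2D: start from
    T0 = (ZMAX + ZMIN) / 2, then repeatedly average the mean grey
    values of foreground and background until the threshold is stable."""
    t = (max(gray) + min(gray)) / 2.0          # initial threshold T0
    while True:
        fg = [g for g in gray if g > t]        # split by current T_K
        bg = [g for g in gray if g <= t]
        z_o = sum(fg) / len(fg) if fg else t   # mean grey of foreground
        z_b = sum(bg) / len(bg) if bg else t   # mean grey of background
        t_next = (z_o + z_b) / 2.0             # new threshold T_{K+1}
        if t_next == t:                        # converged: T_K == T_{K+1}
            return t
        t = t_next

print(iterative_threshold([10, 12, 11, 200, 210, 205]))  # 108.0
```

Once the foreground/background partition stops changing, the threshold repeats itself, so the loop terminates.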
  • To obtain more accurate foreground and background images, the graph-theory-based image segmentation algorithm proposed in Reference 1 can be used.
  • Step 102B: Adjust the parallax of the depth map according to the displacement information of the depth dimension, specifically by adjusting the parallax of the foreground and background images in the depth map.
  • The target depth in a stereoscopic image serves to give the user a sense of depth and layering in the three-dimensional display scene, and the precision required of the depth setting is lower than that required for target reconstruction or target recognition. Coarse-grained depths can therefore be used for the foreground and background of the scene: a smaller depth for the foreground image and a larger depth for the background image.
  • The depths are set so that, when the stereoscopic image is displayed after reconstruction, the viewer can clearly perceive the relationship between the foreground and background of the scene.
  • Step 103: Output the two-eye views corresponding to the parallax-adjusted cursor to the three-dimensional scene display device.
  • When displayed, the stereoscopic images with parallax are shown separately to the viewer's left and right eyes; the viewer perceives the depth information of the image content through the differing left- and right-eye images, and thus obtains a sense of the cursor's depth.
  • With the method of this embodiment, a cursor that uses a planar two-dimensional image changes in a three-dimensional stereoscopic display scene according to the displacement information of the depth dimension, and is thus displayed in real time at different depth levels of the three-dimensional scene; this facilitates the user's operation in the scene and gives the three-dimensional stereoscopic display a stronger sense of realism.
  • Method Embodiment 2: An embodiment of the present invention provides another cursor processing method in a three-dimensional scene. It differs from method Embodiment 1 in step 102, where the parallax of the two-eye views of the cursor is adjusted in real time according to the displacement information: the cursor of method Embodiment 1 uses a planar two-dimensional image, whereas the cursor of this embodiment uses a stereoscopic image that already has parallax, so its parallax can be adjusted directly according to the depth-dimension displacement information obtained in step 101.
  • The method of acquiring the stereoscopic image used as the cursor is described below; as shown in FIG. 3A, it includes:
  • Step 201: Capture a first image from a first position.
  • As shown in FIG. 3B, a camera device with focal length f photographs a point M at distance Z; the camera first photographs the target scene at a certain position to obtain the first image of M.
  • Step 202: After moving to a second position, capture a second image of M, thereby obtaining two images of the target scene with parallax, that is, a stereo image pair; the parallax between the first image and the second image is d.
  • Specifically, the distance the camera moves can be the interocular distance B of the human eyes.
  • When a dual-lens camera, or two cameras at different positions, is used, there is no need to move to the second position: the second image of M can be captured at the first position, again yielding two images of the target scene with parallax, that is, a stereo image pair.
  • Step 203: Perform image scan-line alignment on the stereo image pair. Because the spacing, angle and vertical position of the moving camera cannot be controlled precisely when the first and second images are captured, one of the two images must be taken as the reference and the other subjected to scan-line alignment.
  • Since a person's two eyes sit at the same horizontal height, the viewed scene content shows no vertical difference when imaged by the two eyes, whereas moving the camera may introduce a vertical inconsistency. Image scan-line alignment brings the scene content of the left and right stereo images into vertical agreement; specifically, the following approach can be used:
  • In the overlapping scene region of the stereo image pair, create two or more search bars in each of the first and second images; decompose each search bar into several grey-level sub-images according to its colour components;
  • use a matching algorithm to match every point in each sub-image search bar and compute a vertical offset, then extrapolate these vertical offsets to the whole image to bring the two images into alignment.
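One ingredient of the scan-line alignment in Step 203, estimating the vertical offset of a search bar by matching, might look like the following sketch; the patent does not name a matching criterion, so mean absolute difference over trial shifts is assumed here:

```python
def vertical_offset(bar_a, bar_b, max_shift=3):
    """Estimate the vertical offset between two grey-level search bars
    by minimising the mean absolute difference over trial shifts."""
    best = (float("inf"), 0)                     # (cost, shift)
    n = len(bar_a)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(bar_a[i], bar_b[i + s])
                 for i in range(n) if 0 <= i + s < n]
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        best = min(best, (cost, s))              # keep lowest-cost shift
    return best[1]

# bar_b is bar_a shifted down by one row, so the estimated offset is 1:
print(vertical_offset([0, 0, 10, 50, 10, 0, 0],
                      [0, 0, 0, 10, 50, 10, 0]))  # 1
```

In the full procedure, offsets estimated at several search bars would then be extrapolated across the whole image.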
  • Step 204: Reset the maximum parallax of the stereo image pair after the scan-line alignment is completed.
  • The maximum parallax of the stereoscopic image is reset because, while the camera is being moved to the other shooting position, the spacing moved may be too small, making the parallax of the two images small and the displayed stereogram weak in stereoscopic effect; or the spacing may be too large, making the parallax excessive and easily causing viewing fatigue. The maximum parallax is reset as shown in FIG. 3C, as follows:
  • First, stereo-match the two scan-line-aligned images to obtain the disparity map corresponding to the first image; find the maximum disparity value in the disparity map; divide the preset optimal disparity value by the obtained maximum disparity value to get a disparity scaling factor; use this factor to scale the disparity of every point in the disparity map; and reconstruct the second image from the scaled disparity and the first image. This completes the resetting of the maximum parallax, so that the viewer obtains a more comfortable stereoscopic effect when the stereoscopic image is displayed.
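The rescaling at the heart of Step 204 can be sketched as follows; the list-of-lists disparity-map representation and the function name are our assumptions:

```python
def reset_max_disparity(disparity_map, optimal_max):
    """Step 204 core: divide the preset optimal disparity by the observed
    maximum disparity to get a scaling factor, then scale every disparity
    in the map by that factor."""
    observed_max = max(max(row) for row in disparity_map)
    k = optimal_max / observed_max
    return [[d * k for d in row] for row in disparity_map]

# Observed maximum 8, preset optimum 4, so every disparity is halved:
print(reset_max_disparity([[1, 2], [4, 8]], 4))  # [[0.5, 1.0], [2.0, 4.0]]
```

The scaled map and the first image are then used to reconstruct the second image of the pair.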
  • Step 205: Form the first image and the second image into a stereogram for use as the cursor.
  • With the method of this embodiment, a stereoscopic image obtained by left-and-right shooting with two camera positions is used as the cursor in the three-dimensional stereoscopic display scene, and the cursor changes according to the displacement information of the depth dimension, so that it is displayed in real time at different depth levels of the three-dimensional scene; this facilitates the user's operation in the scene and gives the three-dimensional stereoscopic display a stronger sense of realism.
  • Method Embodiment 3: An embodiment of the present invention provides another cursor processing method in a three-dimensional scene. Like method Embodiment 2 it uses a stereoscopic image as the cursor, so the parallax of the stereoscopic image can be adjusted directly according to the depth-dimension displacement information obtained in step 101; however, the method of acquiring the stereoscopic image used as the cursor differs from that of Embodiment 2.
  • The acquisition method of this embodiment, shown in FIG. 4A, includes:
  • Step 301: Perform 3D scene modeling of the cursor used in the three-dimensional scene.
  • As shown in FIG. 4B, a scene projection model using two camera units is employed. A and B are two points in the scene, and the optical centres of the mutually parallel left and right camera units (denoted here O_L and O_R) lie a distance c apart; the distance from the cameras to the imaging plane is D.
  • Assuming the camera is located at the midpoint of the line O_L O_R, simple geometry gives the projection coordinates (u, v) on the projection plane of a point (X, Y, Z) in space as u = D·X/Z, v = D·Y/Z.
  • For the left and right camera units the X-axis coordinate is offset accordingly, so the projections of a point in space on the two viewpoints become u_L = D·(X + c/2)/Z and u_R = D·(X - c/2)/Z, with the vertical coordinate v unchanged.
  • (u_L, v_L) denotes the projection coordinates of a point in space on the viewpoint of the left camera unit, and (u_R, v_R) those on the viewpoint of the right camera unit; from these projection relations, the stereo image pair of any object in space on the projection plane is finally obtained.
  • When the parallax is positive, an object appears to lie behind the projection plane; when the parallax is zero, the image points of the left and right camera units on the projection plane coincide exactly; when the parallax is negative, the object appears to lie in front of the projection plane.
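The projection model of FIG. 4B can be exercised in code; the sketch below uses the standard parallel-camera relations for two camera units a distance c apart with imaging-plane distance D (argument names are ours):

```python
def stereo_project(x, y, z, c, big_d):
    """Project a scene point (x, y, z) onto the viewpoints of two
    parallel camera units whose optical centres are a distance c apart,
    with camera-to-imaging-plane distance big_d (the model's D)."""
    u_l = big_d * (x + c / 2.0) / z   # left-camera viewpoint
    u_r = big_d * (x - c / 2.0) / z   # right-camera viewpoint
    v = big_d * y / z                 # vertical coordinate, same for both
    return (u_l, v), (u_r, v)

# A point on the optical axis at depth 2 with c = 2, D = 1:
print(stereo_project(0.0, 0.0, 2.0, 2.0, 1.0))  # ((0.5, 0.0), (-0.5, 0.0))
```

The difference u_L - u_R = D·c/z is the parallax of the point's stereo image pair, shrinking as the point recedes.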
  • Step 302: After the cursor model is obtained, render the cursor model.
  • Specifically, software such as OpenGL or DirectX 3D can be used for rendering. Because in the real world a person observes objects with both eyes, two viewpoints, left and right, must be set during rendering to obtain the rendered images for the left and right eyes.
  • In a specific implementation, the user can set the positions and viewing ranges of the left and right camera units with the viewpoint-transform and projection-transform functions provided by a graphics application programming interface (API); for example, with OpenGL, the gluLookAt function can be used to set the viewing point, that is, the camera position, and the glFrustum function to compute the perspective projection transform.
  • With the method of this embodiment, a stereoscopic image obtained by 3D modeling is used as the cursor in the three-dimensional stereoscopic display scene, and the cursor changes according to the displacement information of the depth dimension, so that it is displayed in real time at different depth levels of the three-dimensional scene; this facilitates the user's operation in the scene and gives the three-dimensional stereoscopic display a stronger sense of realism.
  • An embodiment of the present invention provides a cursor processing system which, as shown in FIG. 5, includes a cursor generating device 10 and a cursor adjusting device 20. It works as follows:
  • The cursor generating device 10 can be a device for controlling a cursor, such as an ordinary mouse.
  • When the user moves the cursor generating device 10, the depth control unit 11 in the cursor generating device 10 controls cursor movement in the depth direction.
  • The depth control unit 11 may be a scroll wheel, a button, a MEMS device, or the like.
  • The displacement information generating unit 12 in the cursor generating device 10 generates displacement information in three dimensions from the result of the depth control unit 11 controlling cursor movement in the depth direction and from the displacement of the cursor generating device 10, the three dimensions including two planar dimensions and a depth dimension.
  • The displacement information of the depth dimension can be obtained from the information with which the depth control unit 11 controls the cursor in the depth direction.
  • The cursor parallax adjustment unit 21 in the cursor adjusting device 20 adjusts, in real time according to the displacement information obtained from the cursor generating device 10, the parallax of the two-eye views corresponding to the cursor.
  • For the specific adjustment process, refer to method Embodiments 1, 2 and 3; the description is not repeated here.
  • After the parallax adjustment is completed, the cursor output unit 22 in the cursor adjusting device 20 outputs the two-eye views corresponding to the parallax-adjusted cursor to the three-dimensional scene display device.
  • With the system of this embodiment, the cursor changes according to the displacement information of the depth dimension and is thus displayed in real time at different depth levels of the three-dimensional scene; this facilitates the user's operation in the three-dimensional stereoscopic display scene and gives it a stronger sense of realism.
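The division of labour between the two devices of FIG. 5 can be sketched as follows; the class names, attributes, and the numeric parallax rule (reusing the binocular relation d = f·B/Z of method Embodiment 1) are illustrative assumptions, not the patent's implementation:

```python
class CursorGeneratingDevice:
    """FIG. 5, device 10 (sketch): produces three-dimensional
    displacement information; the depth control unit is modelled
    as a wheel delta added to the current depth."""
    def __init__(self, depth=12.0):
        self.depth = depth

    def generate(self, dx, dy, wheel):
        self.depth += wheel         # depth control unit 11
        return dx, dy, self.depth   # displacement information unit 12

class CursorAdjustingDevice:
    """FIG. 5, device 20 (sketch): turns the depth into a parallax
    and emits the two-eye views of the cursor."""
    def __init__(self, f=800.0, b=0.06):
        self.f, self.b = f, b

    def adjust(self, x, y, depth):
        d = self.f * self.b / depth            # parallax adjustment unit 21
        return (x - d / 2, y), (x + d / 2, y)  # output unit 22: two views
```

Turning the wheel changes the depth, which in turn changes the parallax between the two emitted views, which is exactly the behaviour the system embodiment describes.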
  • An embodiment of the present invention further provides a cursor processing apparatus which, as shown in FIG. 6, includes a cursor generating unit 40 and a cursor adjusting unit 50. It works as follows:
  • The cursor generating unit 40 can be a unit for controlling a cursor, such as an ordinary mouse.
  • When the user moves the cursor generating unit 40, the depth control module 41 in the cursor generating unit 40 controls cursor movement in the depth direction.
  • The displacement information generating module 42 in the cursor generating unit 40 generates displacement information in three dimensions from the result of the depth control module 41 controlling cursor movement in the depth direction and from the displacement of the cursor generating unit 40, the three dimensions including two planar dimensions and a depth dimension.
  • The displacement information of the depth dimension can be obtained from the information with which the depth control module 41 controls the cursor in the depth direction.
  • The cursor parallax adjustment module 51 in the cursor adjusting unit 50 adjusts, in real time according to the displacement information obtained from the cursor generating unit 40, the parallax of the two-eye views corresponding to the cursor.
  • For the specific adjustment process, refer to method Embodiments 1, 2 and 3; the description is not repeated here.
  • The cursor output module 52 in the cursor adjusting unit 50 outputs the two-eye views corresponding to the parallax-adjusted cursor to the three-dimensional scene display device.
  • With this apparatus, the cursor changes according to the displacement information of the depth dimension and is thus displayed in real time at different depth levels of the three-dimensional scene; this facilitates the user's operation in the three-dimensional stereoscopic display scene and gives it a stronger sense of realism.
  • From the description of the above embodiments, it is clear that the present invention can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.

Description

Cursor processing method, device and system

This application claims priority to Chinese Patent Application No. 200810100670.5, filed with the Chinese Patent Office on May 21, 2008 and entitled "Cursor processing method, device and system", the entire contents of which are incorporated herein by reference.

Technical Field

The present invention relates to image processing technology, and in particular to a method, device and system for processing a cursor in a three-dimensional scene.

Background

A stereoscopic view, or 3D view, is a view with a sense of scene depth obtained by the user's left and right eyes acquiring images independently. Existing stereoscopic-view-based three-dimensional scene rendering technology is mainly based on the principle of human binocular parallax: because the two eyes occupy different positions, they receive two slightly different images of the same scene. Two camera devices separated by a certain interval capture two slightly different images of the same scene and display them to the viewer's left and right eyes respectively, so that the viewer perceives a scene with depth and layering, creating an immersive effect and a better user experience.

Existing computer graphical user interfaces operate with a planar two-dimensional cursor: the user selects an interface element on the graphical user interface by moving the cursor in the display plane with a mouse. In a three-dimensional stereoscopic display scene, however, a planar two-dimensional cursor can move only within that plane, so it cannot satisfy the requirement of operating on stereoscopic interface elements at different depths.

For example, FIG. 1 is a schematic diagram of a three-dimensional stereoscopic display scene in which window A, window B and window C lie at different depth levels. When the user needs to operate on an interface element of window A, the cursor must be displayed at the depth level of window A, shown as position A in the figure; when the user needs to operate on an interface element of window C, the cursor must be displayed at the depth level of window C, shown as position C in the figure. An existing planar two-dimensional cursor cannot achieve such an operation. The existing concept of a 3D mouse refers to a mouse that can output three or more kinds of displacement information simultaneously. Such devices detect and output displacement in many ways: besides producing up/down and left/right displacement through a mouse ball or optical sensing, they can produce a third or further dimension of displacement with a scroll wheel, joystick, buttons, or a micro-electro-mechanical system (MEMS) device. Such a 3D mouse, however, still works in a planar two-dimensional scene: it solves only the generation and input of displacement information, not the problem of displaying the cursor at different depth levels in a three-dimensional stereoscopic display scene.

Summary
Embodiments of the present invention provide a cursor processing method, device and system, so that the pointing cursor of a control device such as a mouse can be displayed in real time at different depth levels of a three-dimensional stereoscopic display scene according to changes in depth displacement information.

To solve the above problem, an embodiment of the present invention provides a cursor processing method, including:

generating displacement information in three dimensions, the three dimensions including two planar dimensions and a depth dimension;

adjusting, in real time according to the displacement information, the parallax of the two-eye views corresponding to the cursor; and outputting the parallax-adjusted two-eye views of the cursor to a three-dimensional display device.

An embodiment of the present invention further provides a cursor processing system, including a cursor generating device and a cursor adjusting device, where:

the cursor generating device is configured to generate displacement information in three dimensions, the three dimensions including two planar dimensions and a depth dimension; and

the cursor adjusting device is configured to adjust, in real time according to the displacement information generated by the cursor generating device, the parallax of the two-eye views corresponding to the cursor, and to output the parallax-adjusted two-eye views of the cursor to the three-dimensional display device.

An embodiment of the present invention further provides a cursor processing device, including a cursor generating unit and a cursor adjusting unit, where:

the cursor generating unit is configured to generate displacement information in three dimensions, the three dimensions including two planar dimensions and a depth dimension; and

the cursor adjusting unit is configured to adjust, in real time according to the displacement information generated by the cursor generating unit, the parallax of the two-eye views corresponding to the cursor, and to output the parallax-adjusted two-eye views of the cursor to the three-dimensional display device.

Compared with the prior art, embodiments of the present invention have the following advantage:

the cursor in a three-dimensional stereoscopic display scene changes according to the displacement information of the depth dimension and is thus displayed in real time at different depth levels of the three-dimensional scene; this facilitates the user's operation in the three-dimensional stereoscopic display scene and gives it a stronger sense of realism.

The technical solutions of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and embodiments. Brief Description of the Drawings
FIG. 1 is a schematic diagram of an example of an existing three-dimensional stereoscopic display scene;

FIG. 2A is a flowchart of the cursor processing method according to method Embodiment 1 of the present invention; FIG. 2B is a detailed flowchart of step 102 in method Embodiment 1; FIG. 2C is a schematic diagram of the shooting principle of the binocular cameras in method Embodiment 1; FIG. 2D is a flowchart of the threshold-based iterative algorithm in method Embodiment 1;

FIG. 3A is a flowchart of the stereo-image-pair acquisition method according to method Embodiment 2;

FIG. 3B is a schematic diagram of a camera device photographing the target scene in method Embodiment 2;

FIG. 3C is a flowchart of resetting the maximum parallax in method Embodiment 2; FIG. 4A is a flowchart of the stereo-image-pair acquisition method according to method Embodiment 3;

FIG. 4B is a schematic diagram of the 3D scene modeling in method Embodiment 3; FIG. 5 is a schematic structural diagram of the cursor processing system according to the system embodiment of the present invention; FIG. 6 is a schematic structural diagram of the cursor processing apparatus according to the apparatus embodiment of the present invention. Detailed Description Method Embodiment 1
An embodiment of the present invention provides a cursor processing method in a three-dimensional scene which, as shown in FIG. 2A, includes:

Step 101: The cursor generating device generates displacement information in three dimensions according to its displacement, the three dimensions including two planar dimensions and a depth dimension.

Here the cursor generating device is a device, such as a mouse, used to control the cursor. Specifically, the displacement information of the two planar dimensions can be obtained by moving the cursor generating device up, down, left and right, while the displacement information of the depth dimension can be produced by a depth control unit mounted on the cursor generating device. The depth control unit is a unit for controlling cursor movement in the depth direction, and may be, for example, a scroll wheel, a button, or a MEMS device.

Step 102: Adjust, in real time according to the displacement information, the parallax of the two-eye views corresponding to the cursor.

In this embodiment the cursor uses a planar two-dimensional image, which therefore has to be converted into a three-dimensional stereoscopic image with parallax. As shown in FIG. 2B, the conversion includes:

Step 102A: Generate a depth map of the planar two-dimensional image. Specifically, the foreground image and the background image of the planar two-dimensional image can first be obtained by segmentation; then, according to the reference depth in the three-dimensional scene, different depth information is set for the foreground image and the background image.

The depth information can be set using the principle of binocular cameras. As shown in FIG. 2C, two cameras with focal length f, separated by a distance B, photograph a target point M at depth Z in the scene. The images of M in the left and right cameras are m_l and m_r respectively; once the image points m_l and m_r are obtained by image matching, with parallax d the depth of the target point follows from the relation

d(m_l, m_r) = x_l - x_r = (f/Z)(X_l - X_r) = f·B/Z,

where x_l and x_r are the X-axis coordinates of the image points m_l and m_r, and X_l and X_r are the X-axis coordinates of the two lenses of the binocular camera, whose separation is B; hence Z = f·B/d.

The reference depth is the depth represented by the position Z = 0 in the scene; the foreground image is the whole cursor picture, such as an arrow pattern; the background image is the display background at the reference depth, for example set to the background of the plane in which the display screen lies. For an ordinary planar two-dimensional image, the foreground and background images can be obtained with a threshold-based iterative segmentation algorithm. The specific iterative algorithm, shown in FIG. 2D, mainly includes the following steps:

1) Find the maximum and minimum grey values of the planar two-dimensional image, denoted ZMAX and ZMIN, and let the initial threshold be T0 = (ZMAX + ZMIN)/2;

2) Segment the image into foreground and background according to the threshold TK, and compute their average grey values ZO and ZB;

3) Compute the new threshold TK+1 = (ZO + ZB)/2;

4) If TK = TK+1, the resulting value is the threshold and the iteration ends; otherwise return to 2) and continue iterating.

Thresholds obtained with this iterative algorithm segment the image well. An iteratively obtained threshold can distinguish the main regions of the foreground and background of a planar two-dimensional image, but still discriminates poorly in its fine details. To obtain more accurate foreground and background images, the graph-theory-based image segmentation algorithm proposed in Reference 1 can be used.

Step 102B: Adjust the parallax of the depth map according to the displacement information of the depth dimension, specifically by adjusting the parallax of the foreground and background images in the depth map.

The target depth in a stereoscopic image serves to give the user a sense of depth and layering in the three-dimensional display scene, and the precision required of the depth setting is lower than that required for target reconstruction or target recognition. Coarse-grained depths can therefore be used for the foreground and background of the scene: a smaller depth for the foreground image and a larger depth for the background image. The depths are set so that, when the stereoscopic image is displayed after reconstruction, the viewer can clearly perceive the relationship between the foreground and background of the scene.

Step 103: Output the two-eye views corresponding to the parallax-adjusted cursor to the three-dimensional scene display device. When displayed, the stereoscopic images with parallax are shown separately to the viewer's left and right eyes; the viewer perceives the depth information of the image content through the differing left- and right-eye images, and thus obtains a sense of the cursor's depth.

With the method of this embodiment, a cursor that uses a planar two-dimensional image changes in a three-dimensional stereoscopic display scene according to the displacement information of the depth dimension, and is thus displayed in real time at different depth levels of the three-dimensional scene. This facilitates the user's operation in the scene and gives the three-dimensional stereoscopic display a stronger sense of realism.
Method Embodiment 2

An embodiment of the present invention provides another cursor processing method in a three-dimensional scene. It differs from method Embodiment 1 in step 102, where the parallax of the two-eye views of the cursor is adjusted in real time according to the displacement information: the cursor of method Embodiment 1 uses a planar two-dimensional image, whereas the cursor of this embodiment uses a stereoscopic image that already has parallax, so its parallax can be adjusted directly according to the depth-dimension displacement information obtained in step 101.

The method of acquiring the stereoscopic image used as the cursor is described in detail below; as shown in FIG. 3A, it includes:

Step 201: Capture a first image from a first position.

As shown in FIG. 3B, a camera device with focal length f photographs a point M at distance Z. The camera first photographs the target scene at a certain position to obtain the first image of M.

Step 202: After moving to a second position, capture a second image of M, thereby obtaining two images of the target scene with parallax, that is, a stereo image pair; the parallax between the first image and the second image is d. Specifically, the distance the camera moves can be the interocular distance B of the human eyes.

When a dual-lens camera, or two cameras located at different positions, is used, there is no need to move to the second position: the second image of M can be captured at the first position, again yielding two images of the target scene with parallax, that is, a stereo image pair.

Step 203: Perform image scan-line alignment on the stereo image pair. Because the spacing, angle and vertical position of the moving camera cannot be controlled precisely when the pair of first and second images is captured, one of the two images must be taken as the reference and the other subjected to scan-line alignment.

Since a person's two eyes sit at the same horizontal height, the viewed scene content shows no vertical difference when imaged by the two eyes, whereas moving the camera may introduce a vertical inconsistency. Image scan-line alignment brings the scene content of the left and right stereo images into vertical agreement. Specifically, the following approach can be used:

In the overlapping scene region of the stereo image pair, create two or more search bars in each of the first and second images; decompose each search bar into several grey-level sub-images according to its colour components; use a matching algorithm to match every point in each sub-image search bar and compute a vertical offset; and extrapolate these vertical offsets to the whole image to bring the two images into alignment.

Step 204: Reset the maximum parallax of the stereo image pair after the scan-line alignment is completed.

The maximum parallax of the stereoscopic image is reset because, while the camera is being moved to the other shooting position, the spacing moved may be too small, making the parallax of the two images small and the displayed stereogram weak in stereoscopic effect; or the spacing may be too large, making the parallax excessive and easily causing viewing fatigue. The maximum parallax is reset as shown in FIG. 3C, as follows:

First, stereo-match the two scan-line-aligned images to obtain the disparity map corresponding to the first image; find the maximum disparity value in the disparity map; divide the preset optimal disparity value by the obtained maximum disparity value to get a disparity scaling factor; use this factor to scale the disparity of every point in the disparity map; and reconstruct the second image from the scaled disparity and the first image. This completes the resetting of the maximum parallax, so that the viewer obtains a more comfortable stereoscopic effect when the stereoscopic image is displayed.

Step 205: Form the first image and the second image into a stereogram for use as the cursor.

With the method of this embodiment, a stereoscopic image obtained by left-and-right shooting with two camera positions is used as the cursor in the three-dimensional stereoscopic display scene, and the cursor changes according to the displacement information of the depth dimension, so that it is displayed in real time at different depth levels of the three-dimensional scene. This facilitates the user's operation in the scene and gives the three-dimensional stereoscopic display a stronger sense of realism.
Method Embodiment 3
An embodiment of the present invention provides another cursor processing method for a three-dimensional scene. Like Method Embodiment 2, it uses a stereoscopic image as the cursor, so the parallax of the stereoscopic image can be adjusted directly according to the displacement information in the depth dimension obtained in step 101. However, the stereoscopic image used as the cursor is obtained differently than in Method Embodiment 2.
The method of obtaining the stereoscopic image used as the cursor in this embodiment is described in detail below; as shown in FIG. 4A, it includes:
Step 301: perform 3D scene modeling of the cursor used in the three-dimensional scene.
FIG. 4B shows a scene projection model that uses two cameras. A and B are two points in the scene; the optical centers of the mutually parallel left and right cameras are separated by a distance c, and the distance from the cameras to the imaging plane is D.
The projections of point A on the projection plane through the left and right cameras, and likewise the projections of point B, are marked in the figure. Assuming a reference camera located at the midpoint of the line joining the two optical centers, simple geometry gives the projection coordinates (u, v) on the projection plane of a point (X, Y, Z) in space:
$$u = \frac{D\,X}{Z}, \qquad v = \frac{D\,Y}{Z}$$
For the left and right cameras, the X coordinate is offset accordingly, giving the projections of a point in space on the left and right camera viewpoints:
$$u_L = \frac{D\,(X + c/2)}{Z}, \qquad v_L = \frac{D\,Y}{Z}$$
$$u_R = \frac{D\,(X - c/2)}{Z}, \qquad v_R = \frac{D\,Y}{Z}$$
(u_L, v_L) denotes the projection coordinates of a point in space on the left camera viewpoint; (u_R, v_R) denotes the projection coordinates of the point on the right camera viewpoint. From the projection relations computed above, the stereoscopic image pair of an object in space on the projection plane is finally obtained.
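The projection relations can be checked numerically: with the X offset of +/- c/2 applied before the perspective division, the parallax u_L - u_R = D*c/Z shrinks as the point recedes. The baseline and plane-distance values below are illustrative, not values from the patent.

```python
def stereo_project(X, Y, Z, c=6.5, D=50.0):
    """Project a 3-D point onto the left and right image planes of
    the parallel-camera model: the X coordinate is offset by +/- c/2
    before the perspective division (units are illustrative)."""
    uL = D * (X + c / 2.0) / Z
    uR = D * (X - c / 2.0) / Z
    v = D * Y / Z
    return (uL, v), (uR, v)

# Parallax d = uL - uR = D*c/Z: it shrinks as the point moves away.
(uL1, _), (uR1, _) = stereo_project(0.0, 0.0, 100.0)
(uL2, _), (uR2, _) = stereo_project(0.0, 0.0, 400.0)
d_near = uL1 - uR1   # 50 * 6.5 / 100
d_far = uL2 - uR2    # 50 * 6.5 / 400
```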
As can be seen from FIG. 4B, when the parallax is positive, an object appears to lie behind the projection plane; when the parallax is zero, the imaging points of the left and right cameras on the projection plane coincide exactly; and when the parallax is negative, the object appears to lie in front of the projection plane.
Step 302: after the cursor model is obtained, render the cursor model.
Specifically, rendering software such as OpenGL or DirectX 3D can be used. Because in the real world a person observes objects with two eyes, two viewpoints must be set during rendering to obtain the left-eye and right-eye rendered images. In a concrete implementation, the user can use the viewpoint-transformation and projection-transformation functions provided by a graphics Application Programming Interface (API) to set the positions and viewing ranges of the left and right cameras. For OpenGL, for example, the gluLookAt function can be used to set the viewpoint, i.e. the camera position, and the glFrustum function to compute the perspective projection transformation.
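Computing the two viewpoints can be sketched as follows: the eye position is offset along the horizontal direction perpendicular to the viewing direction, and each resulting position would then be passed to a call such as gluLookAt, once per eye. The helper itself, the eye-separation value, and the level-camera assumption are illustrative, not part of the patent.

```python
import math

def stereo_eyes(eye, target, separation=0.065):
    """Compute left/right eye positions for stereo rendering by
    offsetting the eye perpendicular to the viewing direction
    (world 'up' assumed to be +Y and the camera assumed level,
    so only the x and z components are offset)."""
    dx = target[0] - eye[0]
    dz = target[2] - eye[2]
    # Horizontal 'right' vector: view direction crossed with up (0, 1, 0).
    rx, rz = -dz, dx
    norm = math.hypot(rx, rz)
    rx, rz = rx / norm, rz / norm
    half = separation / 2.0
    left = (eye[0] - rx * half, eye[1], eye[2] - rz * half)
    right = (eye[0] + rx * half, eye[1], eye[2] + rz * half)
    return left, right

# Looking down -Z from the origin, the eyes separate along the X axis.
l, r = stereo_eyes((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
```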
With the method of this embodiment of the present invention, a stereoscopic image obtained by 3D modeling is used as the cursor in the three-dimensional display scene, and the cursor changes according to the displacement information in the depth dimension so that it is displayed in real time at different depth levels of the three-dimensional scene. This makes the user's operations in the three-dimensional display scene more convenient and the scene itself more realistic.
System Embodiment
An embodiment of the present invention provides a cursor processing system which, as shown in FIG. 5, includes a cursor generating apparatus 10 and a cursor adjusting apparatus 20. It works as follows:
The cursor generating apparatus 10 may be an ordinary mouse or another device used to control a cursor. When the user moves the cursor generating apparatus 10, the depth control unit 11 in the cursor generating apparatus 10 controls the movement of the cursor in the depth direction. The depth control unit 11 may be a scroll wheel, a button, a MEMS device, or the like.
The displacement information generating unit 12 in the cursor generating apparatus 10 generates displacement information in three dimensions, comprising two planar dimensions and a depth dimension, based on the result of the depth control unit 11 controlling the movement of the cursor in the depth direction and on the displacement of the cursor generating apparatus 10. The displacement information in the depth dimension can be derived from the information with which the depth control unit 11 controls the cursor in the depth direction.
The cursor parallax adjusting unit 21 in the cursor adjusting apparatus 20 adjusts, in real time, the parallax of the two-eye views corresponding to the cursor according to the displacement information obtained from the cursor generating apparatus 10. For the specific adjustment process, see Method Embodiments 1, 2, and 3; it is not repeated here.
After the parallax adjustment is completed, the cursor output unit 22 in the cursor adjusting apparatus 20 outputs the two-eye views corresponding to the parallax-adjusted cursor to the three-dimensional scene display device.
With the system of this embodiment of the present invention, the cursor changes according to the displacement information in the depth dimension and is thus displayed in real time at different depth levels of the three-dimensional scene. This makes the user's operations in the three-dimensional display scene more convenient and the scene itself more realistic.
An embodiment of the present invention further provides a cursor processing apparatus which, as shown in FIG. 6, includes a cursor generating unit 40 and a cursor adjusting unit 50. It works as follows:
The cursor generating unit 40 may be an ordinary mouse or another unit used to control a cursor. When the user moves the cursor generating unit 40, the depth control module 41 in the cursor generating unit 40 controls the movement of the cursor in the depth direction.
The displacement information generating module 42 in the cursor generating unit 40 generates displacement information in three dimensions, comprising two planar dimensions and a depth dimension, based on the result of the depth control module 41 controlling the movement of the cursor in the depth direction and on the displacement of the cursor generating unit 40. The displacement information in the depth dimension can be derived from the information with which the depth control module 41 controls the cursor in the depth direction.
The cursor parallax adjusting module 51 in the cursor adjusting unit 50 adjusts, in real time, the parallax of the two-eye views corresponding to the cursor according to the displacement information obtained from the cursor generating unit 40. For the specific adjustment process, see Method Embodiments 1, 2, and 3; it is not repeated here.
After the parallax adjustment is completed, the cursor output module 52 in the cursor adjusting unit 50 outputs the two-eye views corresponding to the parallax-adjusted cursor to the three-dimensional scene display device.
With the apparatus of this embodiment of the present invention, the cursor changes according to the displacement information in the depth dimension and is thus displayed in real time at different depth levels of the three-dimensional scene. This makes the user's operations in the three-dimensional display scene more convenient and the scene itself more realistic.
From the description of the above embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
In summary, the above are merely preferred embodiments of the present invention and are not intended to limit its protection scope. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims

1. A cursor processing method, comprising:

generating displacement information in three dimensions, the three dimensions comprising two planar dimensions and a depth dimension;

adjusting in real time, according to the displacement information, the parallax of the two-eye views corresponding to the cursor; and outputting the two-eye views corresponding to the parallax-adjusted cursor to a three-dimensional display device.

2. The cursor processing method according to claim 1, wherein adjusting in real time, according to the displacement information, the parallax of the two-eye views corresponding to the cursor comprises:

selecting a planar two-dimensional image as the cursor;

generating a depth map of the planar two-dimensional image; and

adjusting the parallax of the depth map according to the displacement information in the depth dimension.

3. The cursor processing method according to claim 2, wherein generating the depth map of the planar two-dimensional image comprises:

obtaining a foreground image and a background image of the planar two-dimensional image; and

setting different depth information for the foreground image and the background image according to a reference depth in the three-dimensional scene.

4. The cursor processing method according to claim 3, wherein obtaining the foreground image and the background image of the planar two-dimensional image comprises: segmenting the planar two-dimensional image using a threshold-based iterative algorithm to obtain its foreground image and background image.

5. The cursor processing method according to claim 1, wherein adjusting in real time, according to the displacement information, the parallax of the two-eye views corresponding to the cursor comprises:

obtaining a stereoscopic image pair and using the stereoscopic image pair as the cursor; and adjusting the parallax of the stereoscopic image pair according to the displacement information in the depth dimension.

6. The cursor processing method according to claim 5, wherein obtaining the stereoscopic image pair comprises:

capturing a first image from a first position;

capturing a second image from a second position; and

forming the first image and the second image into the stereoscopic image pair.

7. The cursor processing method according to claim 6, further comprising: performing image scan-line alignment on the stereoscopic image pair.

8. The cursor processing method according to claim 7, further comprising, after the image scan-line alignment: resetting the maximum parallax of the stereoscopic image pair.

9. The cursor processing method according to claim 6, wherein obtaining the stereoscopic image pair comprises:

performing 3D scene modeling of the cursor in the three-dimensional scene to obtain a cursor model; and

rendering the cursor model to obtain stereoscopically rendered images for the two eyes.

10. A cursor processing system, comprising a cursor generating apparatus and a cursor adjusting apparatus, wherein:

the cursor generating apparatus is configured to generate displacement information in three dimensions, the three dimensions comprising two planar dimensions and a depth dimension; and

the cursor adjusting apparatus is configured to adjust in real time, according to the displacement information generated by the cursor generating apparatus, the parallax of the two-eye views corresponding to the cursor, and to output the two-eye views corresponding to the parallax-adjusted cursor to a three-dimensional display device.

11. The cursor processing system according to claim 10, wherein the cursor generating apparatus comprises:

a depth control unit, configured to control the movement of the cursor in the depth direction; and

a displacement information generating unit, configured to generate displacement information in three dimensions based on the result of the depth control unit controlling the movement of the cursor in the depth direction and on the displacement of the cursor generating apparatus, the three dimensions comprising two planar dimensions and a depth dimension.

12. The cursor processing system according to claim 11, wherein the depth control unit comprises a scroll wheel, a button, or a micro-electro-mechanical systems (MEMS) device.

13. The cursor processing system according to claim 11, wherein the cursor adjusting apparatus comprises:

a cursor parallax adjusting unit, configured to adjust in real time, according to the displacement information obtained from the cursor generating apparatus, the parallax of the two-eye views corresponding to the cursor; and

a cursor output unit, configured to output the two-eye views corresponding to the parallax-adjusted cursor to a three-dimensional scene display device.

14. A cursor processing apparatus, comprising a cursor generating unit and a cursor adjusting unit, wherein:

the cursor generating unit is configured to generate displacement information in three dimensions, the three dimensions comprising two planar dimensions and a depth dimension; and

the cursor adjusting unit is configured to adjust in real time, according to the displacement information generated by the cursor generating unit, the parallax of the two-eye views corresponding to the cursor, and to output the two-eye views corresponding to the parallax-adjusted cursor to a three-dimensional display device.

15. The cursor processing apparatus according to claim 14, wherein the cursor generating unit comprises:

a depth control module, configured to control the movement of the cursor in the depth direction; and

a displacement information generating module, configured to generate displacement information in three dimensions based on the result of the depth control module controlling the movement of the cursor in the depth direction and on the displacement of the cursor generating unit, the three dimensions comprising two planar dimensions and a depth dimension.

16. The cursor processing apparatus according to claim 15, wherein the cursor adjusting unit comprises:

a cursor parallax adjusting module, configured to adjust in real time, according to the displacement information obtained from the cursor generating unit, the parallax of the two-eye views corresponding to the cursor; and

a cursor output module, configured to output the two-eye views corresponding to the parallax-adjusted cursor to a three-dimensional scene display device.
PCT/CN2009/071844 2008-05-21 2009-05-19 光标处理方法、装置及系统 WO2009140908A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200810100670.5 2008-05-21
CN2008101006705A CN101587386B (zh) 2008-05-21 2008-05-21 光标处理方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2009140908A1 true WO2009140908A1 (zh) 2009-11-26

Family

ID=41339781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/071844 WO2009140908A1 (zh) 2008-05-21 2009-05-19 光标处理方法、装置及系统

Country Status (2)

Country Link
CN (1) CN101587386B (zh)
WO (1) WO2009140908A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9253468B2 (en) 2011-09-29 2016-02-02 Superd Co. Ltd. Three-dimensional (3D) user interface method and system

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101708696B1 (ko) 2010-09-15 2017-02-21 엘지전자 주식회사 휴대 단말기 및 그 동작 제어방법
CN102566790B (zh) * 2010-12-28 2015-06-17 康佳集团股份有限公司 一种立体鼠标的实现方法、系统及3d立体显示设备
CN102638689A (zh) * 2011-02-09 2012-08-15 扬智科技股份有限公司 三维显示放大方法
JP2012190184A (ja) * 2011-03-09 2012-10-04 Sony Corp 画像処理装置および方法、並びにプログラム
CN102157012B (zh) * 2011-03-23 2012-11-28 深圳超多维光电子有限公司 对场景进行立体渲染的方法、图形图像处理装置及设备、系统
CN102693065A (zh) * 2011-03-24 2012-09-26 介面光电股份有限公司 立体影像视觉效果处理方法
JP5808146B2 (ja) 2011-05-16 2015-11-10 株式会社東芝 画像処理システム、装置及び方法
TWI486052B (zh) * 2011-07-05 2015-05-21 Realtek Semiconductor Corp 立體影像處理裝置以及立體影像處理方法
US9746989B2 (en) * 2011-10-13 2017-08-29 Toshiba Medical Systems Corporation Three-dimensional image processing apparatus
CN102508546B (zh) * 2011-10-31 2014-04-09 冠捷显示科技(厦门)有限公司 一种3d虚拟投影及虚拟触摸的用户交互界面及实现方法
CN102508562B (zh) * 2011-11-03 2013-04-10 深圳超多维光电子有限公司 一种立体交互系统
CN102662577B (zh) * 2012-03-29 2016-08-10 华为终端有限公司 一种基于三维显示的光标操作方法及移动终端
CN102981700A (zh) * 2012-10-31 2013-03-20 广东威创视讯科技股份有限公司 一种地图管理信息系统组件的鼠标样式显示方法和装置
CN102982233B (zh) * 2012-11-01 2016-02-03 华中科技大学 具有立体视觉显示的医学影像工作站
CN103645841B (zh) * 2013-12-12 2017-11-03 深圳Tcl新技术有限公司 实现鼠标3d景深自适应显示的方法及设备
CN104598035B (zh) * 2015-02-27 2017-12-05 北京极维科技有限公司 基于3d立体图像显示的光标显示方法、智能设备及系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004194033A (ja) * 2002-12-12 2004-07-08 Canon Inc 立体画像表示システム及び立体ポインタの表示方法
CN101110007A (zh) * 2007-07-31 2008-01-23 中国科学院软件研究所 一种动态三维光标显示方法
CN101140491A (zh) * 2006-09-07 2008-03-12 王舜清 数字影像光标移动暨定位装置系统



Also Published As

Publication number Publication date
CN101587386B (zh) 2011-02-02
CN101587386A (zh) 2009-11-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 09749451; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 09749451; Country of ref document: EP; Kind code of ref document: A1)