CN102693065A - Stereoscopic image visual effect processing method

Stereoscopic image visual effect processing method

Info

Publication number
CN102693065A
Authority
CN
China
Prior art keywords
stereoscopic image
coordinate value
cursor
objects
visual effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100719675A
Other languages
Chinese (zh)
Inventor
叶裕洲
张良诰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
J Touch Corp
Original Assignee
J Touch Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by J Touch Corp
Priority to CN2011100719675A
Publication of CN102693065A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a stereoscopic image visual effect processing method comprising the following steps: providing a stereoscopic image, the stereoscopic image being composed of a plurality of objects, each object having an object coordinate value; providing a cursor having a cursor coordinate value; determining whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects; if the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects, changing a depth coordinate parameter of the object coordinate value of that object; and redrawing the image of the object that matches the cursor coordinate value. In this way, the stereoscopic image of the object corresponding to the cursor can be highlighted, strengthening the visual effect and increasing the sense of interaction.

Description

Stereoscopic image visual effect processing method

Technical Field

The present invention relates to an image processing method, and in particular to a stereoscopic image visual effect processing method.

Background Art

Over the past two decades, computer graphics has become the most important means of data display in the human-machine interface and is widely used in a variety of applications, for example three-dimensional (3-D) computer graphics. Multimedia and virtual reality products have become increasingly popular; they are not only a major breakthrough in human-machine interfaces but also play an important role in entertainment applications. Most of the above applications are based on low-cost, real-time 3-D computer graphics technology. In general, 2-D computer graphics is a common way of representing data and content, especially in interactive applications, while 3-D computer graphics is a growing branch of computer graphics that uses 3-D models and various kinds of image processing to produce images with three-dimensional realism.

The construction of three-dimensional computer graphics (3D computer graphics) can be divided, in order, into three basic stages:

1: Modeling: the modeling stage can be described as the process of determining the shape of the objects to be used in the subsequent scene. There are many modeling techniques, such as constructive solid geometry, NURBS modeling, polygon modeling, and subdivision surfaces. In addition, the modeling process may include editing an object's surface or material properties and adding textures, bump maps, and other features.

2: Scene layout and animation generation (layout & animation): scene setup involves arranging the positions and sizes of the virtual objects, lights, cameras, and other entities within a scene, and can be used to produce a static picture or an animation. Animation generation can use techniques such as key framing to establish complex motion relationships within the scene.

3: Rendering: rendering is the final stage of creating the actual two-dimensional image or animation from the prepared scene; it is comparable to photographing or filming a scene in the real world after the set has been completed.

In the prior art, the three-dimensional objects rendered in interactive media such as games or various application programs usually cannot change correspondingly, in real time, as the cursor coordinate position changes when the user operates a mouse, touchpad, or touch panel; their visual effect is therefore not highlighted, and the user is not given a sufficient sense of interaction with the scene.

In addition, prior techniques exist for converting 2D images into 3D images. Usually a main object is selected in the 2D image and set as the foreground, the remaining objects are set as the background, and the objects are given different depths of field to form a 3D image. However, the cursor operated by the user usually lies at the same depth of field as the display screen, and the cursor position is usually also where the viewer's gaze rests; if the depth-of-field information of the cursor differs from that of the object at the cursor position, spatial visual confusion results.

Summary of the Invention

The main purpose of the present invention is to provide a stereoscopic image visual effect processing method that can highlight the stereoscopic image of the object corresponding to the cursor coordinate position, so as to enhance human-computer interaction.

To achieve the above purpose, the stereoscopic image visual effect processing method of the present invention comprises the following steps: first, providing a stereoscopic image, the stereoscopic image being composed of a plurality of objects, each of the plurality of objects having an object coordinate value; next, providing a cursor having a cursor coordinate value; next, determining whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects; next, if the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects, changing a depth coordinate parameter of the object coordinate value of that object; and finally, redrawing the image of the object that matches the cursor coordinate value.

If the cursor coordinate value changes, whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects is determined again.

The plurality of object coordinate values are coordinate values corresponding to local coordinates, world coordinates, view coordinates, or projection coordinates.

The cursor coordinate value is generated by a mouse, a touchpad, or a touch panel.

The stereoscopic image is generated sequentially by the computer graphics steps of modeling, scene layout and animation generation (layout & animation), and rendering.

The depth coordinate value of the object coordinate values of the plurality of objects is determined by methods such as the Z-buffer method, the painter's depth-sorting algorithm, the plane normal determination method, the surface normal determination method, or the maximum-minimum method.

Brief Description of the Drawings

FIG. 1A is a flowchart of the steps of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 1B shows a stereoscopic image formed using a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 2 is a three-dimensional drawing flowchart of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 3A is a schematic diagram of modeling with the union logical operator in the stereoscopic image visual effect processing method of the present invention;

FIG. 3B is a schematic diagram of modeling with the intersection logical operator in the stereoscopic image visual effect processing method of the present invention;

FIG. 3C is a schematic diagram of modeling with the complement in the stereoscopic image visual effect processing method of the present invention;

FIG. 4A is a schematic diagram of modeling with a NURBS curve in the stereoscopic image visual effect processing method of the present invention;

FIG. 4B is a schematic diagram of modeling with a NURBS surface in the stereoscopic image visual effect processing method of the present invention;

FIG. 5 is a schematic diagram of polygon mesh modeling in the stereoscopic image visual effect processing method of the present invention;

FIG. 6A is a first schematic diagram of subdivision surface modeling in the stereoscopic image visual effect processing method of the present invention;

FIG. 6B is a second schematic diagram of subdivision surface modeling in the stereoscopic image visual effect processing method of the present invention;

FIG. 6C is a third schematic diagram of subdivision surface modeling in the stereoscopic image visual effect processing method of the present invention;

FIG. 6D is a fourth schematic diagram of subdivision surface modeling in the stereoscopic image visual effect processing method of the present invention;

FIG. 6E is a fifth schematic diagram of subdivision surface modeling in the stereoscopic image visual effect processing method of the present invention;

FIG. 7 is a schematic diagram of the standard rendering pipeline used by the stereoscopic image visual effect processing method of the present invention;

FIG. 8 is a first schematic diagram of image display of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 9 is a second schematic diagram of image display of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 10 is a third schematic diagram of image display of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 11A is a fourth schematic diagram of image display of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 11B is a fifth schematic diagram of image display of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention;

FIG. 12A is a first schematic diagram of drawing objects with the Z-buffer in the stereoscopic image visual effect processing method of the present invention;

FIG. 12B is a second schematic diagram of drawing objects with the Z-buffer in the stereoscopic image visual effect processing method of the present invention;

FIG. 13A is a first schematic diagram of drawing objects with the painter's depth-sorting algorithm in the stereoscopic image visual effect processing method of the present invention;

FIG. 13B is a second schematic diagram of drawing objects with the painter's depth-sorting algorithm in the stereoscopic image visual effect processing method of the present invention;

FIG. 13C is a third schematic diagram of drawing objects with the painter's depth-sorting algorithm in the stereoscopic image visual effect processing method of the present invention;

FIG. 14 is a schematic diagram of drawing objects with the plane normal determination method in the stereoscopic image visual effect processing method of the present invention;

FIG. 15 is a schematic diagram of drawing objects with the maximum-minimum method in the stereoscopic image visual effect processing method of the present invention.

Description of reference numerals: 11 - stereoscopic image; 12 - object; 21 - application program; 22 - operating system; 23 - application programming interface; 24 - geometric transformation subsystem; 25 - shading subsystem; 31 - geometric transformation subsystem; 32 - shading subsystem; 41 - local coordinate space; 42 - world coordinate space; 43 - view coordinate space; 44 - three-dimensional screen coordinate space; 45 - display space; 51 - defining objects; 52 - defining the scene, reference view, and light sources; 53 - culling and clipping to the 3D view volume; 54 - hidden-surface removal, shading, and shadow processing; 61 - modeling transformation; 62 - view transformation; 700 - union geometry; 701 - intersection geometry; 702 - complement geometry; 703 - NURBS curve; 704 - NURBS surface; 705 - polygon-modeled object; 706 - cube; 707 - first-stage sphere; 708 - second-stage sphere; 709 - third-stage sphere; 710 - sphere; 711 - Z-buffer stereoscopic image; 712 - Z-buffer schematic image; 713 - first painter's depth-sorted image; 714 - second painter's depth-sorted image; 715 - third painter's depth-sorted image; 716 - visible surface; 717 - hidden surface; 718 - stereoscopic depth image; S11-S17 - process steps.

Detailed Description of the Embodiments

In order that the examiners may clearly understand the content of the present invention, the following description is given with reference to the drawings.

Please refer to FIG. 1A, FIG. 1B, and FIG. 2, which are, respectively, a flowchart of the steps of a preferred embodiment of the stereoscopic image visual effect processing method of the present invention, a stereoscopic image formed using the method, and a three-dimensional drawing flowchart. The stereoscopic image 11 is composed of a plurality of objects 12 and is produced, in order, by an application program 21, an operating system 22, an application programming interface (API) 23, a geometric transformation subsystem 24 (geometric subsystem), and a shading subsystem 25 (raster subsystem). The stereoscopic image visual effect processing method comprises the following steps:

S11: Provide a stereoscopic image composed of a plurality of objects, each object having an object coordinate value.

S12: Provide a cursor having a cursor coordinate value.

S13: Determine whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects.

S14: If the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects, change a depth coordinate parameter of the object coordinate value of that object.

S15: Redraw the image of the object that matches the cursor coordinate value.

S16: If the cursor coordinate value changes, determine again whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects.

In addition, if the cursor coordinate value does not coincide with any object coordinate value, whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects is determined again after each preset cycle time, as shown in step S17.
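The flow of steps S11 through S17 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `Object3D` class, the square footprint test in `hit`, and the fixed `pop` depth offset are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Object3D:
    name: str
    x: float
    y: float
    z: float           # depth coordinate parameter
    base_z: float      # original depth, restored when the cursor leaves
    half: float = 0.5  # hypothetical half-width of the object's screen footprint

def hit(obj, cx, cy):
    """S13: the cursor 'coincides' with an object when it falls inside
    the object's projected footprint (a simple square test here)."""
    return abs(cx - obj.x) <= obj.half and abs(cy - obj.y) <= obj.half

def update_scene(objects, cx, cy, pop=1.0):
    """S14/S15: pop the hit object toward the viewer, restore the rest,
    and return the list of objects whose image must be redrawn."""
    dirty = []
    for obj in objects:
        new_z = obj.base_z + pop if hit(obj, cx, cy) else obj.base_z
        if new_z != obj.z:
            obj.z = new_z
            dirty.append(obj)
    return dirty
```

In an application, `update_scene` would be called whenever the cursor coordinate value changes (S16) or on each preset cycle (S17), and only the returned objects would be redrawn.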

The cursor coordinate value may be generated by a mouse, a touchpad, a touch panel, or any human-computer interaction interface through which a user interacts with an electronic device.

The stereoscopic image 11 is drawn by means of 3D computer graphics. The stereoscopic image may be generated sequentially by the computer graphics steps of modeling, scene layout and animation generation (layout & animation), and rendering.

The modeling stage can be roughly divided into the following categories:

1: Constructive solid geometry (CSG): in constructive solid geometry, logical operators can be used to combine different objects (such as cubes, cylinders, prisms, pyramids, spheres, and cones) into complex surfaces by union, intersection, and complement, thereby forming a union geometry 700, an intersection geometry 701, and a complement geometry 702, from which complex models or surfaces can be constructed, as shown in FIG. 3A, FIG. 3B, and FIG. 3C.
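The set operations described above can be sketched with point-membership predicates: each primitive is a function answering whether a point lies inside the solid, and the logical operators combine those answers. This is an illustrative sketch, not the patent's modeling code; the primitives and function names are invented for the example.

```python
# Each solid is a point-membership predicate: f(x, y, z) -> bool.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r ** 2

def box(x0, x1, y0, y1, z0, z1):
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

# CSG logical operators: union, intersection, and difference (complement).
def union(a, b):        return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersection(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
def difference(a, b):   return lambda x, y, z: a(x, y, z) and not b(x, y, z)
```

Complex solids are built by nesting these combinators, e.g. `difference(box(...), sphere(...))` carves a spherical hollow out of a box.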

2: Non-uniform rational B-spline (NURBS): NURBS can be used to generate and represent curves and surfaces. A NURBS curve 703 is determined by its order, a set of weighted control points, and a knot vector. NURBS is a generalization of both B-splines and Bezier curves and surfaces. By evaluating the s and t parameters of a NURBS surface 704, the surface can be expressed in spatial coordinates, as shown in FIG. 4A and FIG. 4B.
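As a sketch of how such a curve is evaluated, the following applies the Cox-de Boor recursion for the B-spline basis functions and then forms the rational (weighted) combination of the control points. It is an illustrative two-dimensional evaluator written for this example, not code from the patent.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (u - knots[i]) / d1 * bspline_basis(i, p - 1, u, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - u) / d2 * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, p, knots):
    """Evaluate a 2D NURBS curve point: weighted rational combination
    of the control points, normalized by the sum of weighted bases."""
    num_x = num_y = den = 0.0
    for i, (cx, cy) in enumerate(ctrl):
        b = bspline_basis(i, p, u, knots) * weights[i]
        num_x += b * cx
        num_y += b * cy
        den += b
    return (num_x / den, num_y / den)
```

With all weights equal to 1 and a clamped knot vector, the curve reduces to a Bezier/B-spline curve; raising one control point's weight pulls the curve toward that point, which is the rational part of NURBS.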

3: Polygon modeling: polygon modeling is an object modeling method that represents, or approximates, the surfaces of objects with a polygon mesh. A mesh is usually a polygonal modeling object 705 composed of triangles, quadrilaterals, or other simple convex polygons, as shown in FIG. 5.

4: Subdivision surfaces: also called subdivided surfaces, these are used to create smooth surfaces from arbitrary meshes. By repeatedly refining the initial polygonal mesh, a sequence of meshes can be generated that approaches the limit subdivision surface, each subdivision step producing more polygonal elements and a smoother mesh; a cube 706 can thus be successively approximated into a first-stage sphere 707, a second-stage sphere 708, a third-stage sphere 709, and a sphere 710, as shown in FIGS. 6A, 6B, 6C, 6D, and 6E.
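The refinement toward a sphere described above can be sketched with the simplest subdivision scheme: split every triangle into four at the edge midpoints and push each vertex onto the unit sphere. For brevity the sketch starts from an octahedron rather than a triangulated cube; the scheme and all names are assumptions made for illustration, not the patent's method.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(triangles):
    """One refinement step: split each triangle into four and push every
    new vertex onto the unit sphere, so the mesh converges to the sphere."""
    out = []
    for a, b, c in triangles:
        ab = normalize(midpoint(a, b))
        bc = normalize(midpoint(b, c))
        ca = normalize(midpoint(c, a))
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# Start from a regular octahedron: 8 triangles whose vertices already
# have unit length, so every generated vertex stays on the unit sphere.
X, Y, Z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
Xn, Yn, Zn = (-1, 0, 0), (0, -1, 0), (0, 0, -1)
octahedron = [(X, Y, Z), (Y, Xn, Z), (Xn, Yn, Z), (Yn, X, Z),
              (Y, X, Zn), (Xn, Y, Zn), (Yn, Xn, Zn), (X, Yn, Zn)]
```

Each call to `subdivide` quadruples the triangle count, mirroring the cube-to-sphere progression of FIGS. 6A through 6E.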

In the modeling step, the object's surface or material properties can also be edited as required, adding textures, bump maps, or other features.

Scene layout and animation generation are used to arrange the virtual objects, lights, cameras, or other entities within a scene for the production of static pictures or animations. Scene layout defines the spatial relationships of the positions and sizes of objects in the scene. Animation generation describes an object over time, for example as it moves or deforms, and can be achieved using key framing, inverse kinematics, and motion capture.

Rendering is the final stage of creating the actual two-dimensional scene or animation from the prepared scene; it can be performed in a non-real-time or a real-time manner.

The non-real-time approach simulates light transport through the model to obtain photorealistic effects, which can usually be achieved with ray tracing or radiosity algorithms.

The real-time approach uses non-photorealistic rendering methods to obtain real-time drawing speed, and may draw with flat shading, Phong shading, Gouraud shading, bitmap textures, bump mapping, shading, motion blur, depth of field, and other techniques. Image rendering for interactive media such as games or simulation programs must be computed and displayed promptly, at a rate of roughly 20 to 120 frames per second.
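As an illustration of the simplest of these techniques, flat shading assigns one Lambert diffuse intensity to an entire face, computed from the face normal and the light direction. The function below is a hypothetical sketch written for this example, not part of the patent:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def flat_shade(triangle, light_dir, ambient=0.1):
    """Flat shading: one Lambert intensity for the whole face, from the
    dot product of the face normal and the unit light direction."""
    a, b, c = triangle
    normal = normalize(cross(sub(b, a), sub(c, a)))
    diffuse = max(0.0, sum(n * l for n, l in zip(normal, normalize(light_dir))))
    return min(1.0, ambient + diffuse)
```

A face lit head-on gets full intensity, a face turned away from the light gets only the ambient term, which is why flat-shaded meshes show one uniform tone per polygon.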

For a clearer understanding of the three-dimensional drawing process, please also refer to FIG. 7, a schematic diagram of a standard three-dimensional rendering pipeline. The pipeline is divided into several parts according to the different coordinate systems and generally comprises a geometric transformation subsystem 31 and a shading subsystem 32. The objects defined in defining objects 51 are descriptions of three-dimensional models; each uses a coordinate system referenced to its own reference point, called the local coordinate space 41. When a three-dimensional image is composed, the various objects are read from a database and transformed into a unified world coordinate space 42, in which the scene, reference view, and light sources 52 are defined; the process of transforming from the local coordinate space 41 to the world coordinate space 42 is called the modeling transformation 61. Next, the position of the viewpoint must be defined. Owing to the limited resolution of the graphics hardware, the continuous coordinates must be transformed into a three-dimensional screen space containing X and Y coordinates as well as a depth coordinate (also called the Z coordinate), for hidden surface removal and for drawing the objects as pixels. The scene is transformed from the world coordinate space 42 to the view coordinate space 43 for the step of culling and clipping to the 3D view volume 53; this process is also called the view transformation 62. The scene is then transformed from the view coordinate space 43 to the three-dimensional screen coordinate space 44 for hidden-surface removal, shading, and shadow processing 54. Finally, the frame buffer outputs the resulting image to the screen, transforming from the three-dimensional screen coordinate space to the display space 45. In this embodiment, the steps of the geometric transformation subsystem and the shading subsystem can be performed by a microprocessor, or with a hardware accelerator such as a graphics processing unit (GPU) or a 3D graphics accelerator card.
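The chain of transformations above can be sketched with 4x4 homogeneous matrices: a modeling transformation into world space, a view transformation, and a perspective divide into screen space that keeps the depth (Z) coordinate for hidden-surface removal. The matrices used below are trivial translations chosen purely for illustration:

```python
def mat_vec(m, v):
    """Apply a 4x4 row-major matrix to a homogeneous point (x, y, z, w)."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def mat_mul(a, b):
    """Compose two 4x4 matrices (a applied after b)."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def project(v, d=1.0):
    """Simple perspective divide onto a plane at distance d; the depth z
    is returned alongside x, y so hidden-surface removal can use it."""
    x, y, z, w = v
    return (d * x / z, d * y / z, z)
```

A point travels local space -> world space (modeling transformation) -> view space (view transformation) -> screen space (projection), exactly the ordering of FIG. 7.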

Please refer to FIG. 8, FIG. 9, FIG. 10, FIG. 11A, and FIG. 11B, which are the first through fifth schematic diagrams of image display of the preferred embodiment of the stereoscopic image visual effect processing method of the present invention. When the user moves the cursor by operating a mouse, touchpad, touch panel, or any human-machine interface, thereby changing the cursor coordinate value, whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects 12 is determined again. If they do not coincide, the stereoscopic image 11 of the original display frame is maintained without redrawing. If the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects 12, the depth coordinate parameter of that object's coordinate values is changed, and the stereoscopic image 11 is redrawn through the three-dimensional rendering pipeline steps described above. When the cursor coordinate value changes so as to match another object 12, the originally selected object 12 is restored to its original depth coordinate parameter while the newly selected object 12 has its depth coordinate parameter changed; redrawing the overall stereoscopic image 11 then highlights the stereoscopic visual effect of the selected object 12. The user can thus operate a mouse or another human-machine interface tool to interact with the stereoscopic image. In addition, when one object 12 matches the cursor coordinate position and changes its depth coordinate position, the coordinate parameters of the other objects 12 may also change with the cursor coordinate position, which further accentuates the visual experience and interactive effect.

The depth coordinate parameter of the object coordinates of an object can be determined in the following ways:

1: Z-buffering, also called depth buffering: when an object is rendered, the depth (i.e., the Z coordinate) of each generated pixel is stored in a buffer, called the Z-buffer or depth buffer, which forms a two-dimensional x-y array storing a depth for every screen pixel. If another object in the scene produces a rendering result at the same pixel, the two depth values are compared, the object closer to the observer is retained, and its depth is stored in the depth buffer. The depth buffer thus yields a correct depth-perception effect, with nearer objects occluding farther ones. This process is also called Z-culling. A Z-buffer stereoscopic image 711 and a Z-buffer schematic image 712 are shown in FIG. 12A and FIG. 12B.
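A minimal sketch of the per-pixel comparison described above, assuming the scene has already been rasterized into fragments (the `(x, y, z, color)` fragment format is an assumption of the example, not the patent's data layout):

```python
def zbuffer_render(width, height, fragments, far=float("inf")):
    """Z-buffer hidden-surface removal: keep, per pixel, the fragment
    nearest to the viewer (smallest z).  `fragments` is an iterable of
    (x, y, z, color) tuples produced by rasterizing each object."""
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, col in fragments:
        if z < depth[y][x]:          # Z-culling: the nearer fragment wins
            depth[y][x] = z
            color[y][x] = col
    return color
```

Because the test is purely per pixel, objects can be submitted in any order, which is the Z-buffer's main advantage over depth sorting.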

2: Painter's algorithm (depth sorting): farther objects are drawn first, and nearer objects are then drawn over them to cover the overlapped parts of the farther objects. The objects are first sorted by depth and then drawn in order, sequentially forming a first painter's depth-sorted image 713, a second painter's depth-sorted image 714, and a third painter's depth-sorted image 715, as shown in FIG. 13A, FIG. 13B, and FIG. 13C.
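The far-to-near ordering can be sketched as a sort followed by overpainting; the `(depth, pixel_set, color)` object representation is an assumption of the example:

```python
def painter_render(objects):
    """Painter's algorithm: sort objects far-to-near, then draw them in
    that order so nearer objects overpaint farther ones.  Each object is
    a (depth, pixel_set, color) triple; a larger depth means farther away."""
    canvas = {}
    for depth, pixels, color in sorted(objects, key=lambda o: o[0], reverse=True):
        for p in pixels:
            canvas[p] = color   # later (nearer) draws overwrite earlier ones
    return canvas
```

Unlike the Z-buffer, this resolves occlusion per object rather than per pixel, so it requires a global sort and can fail for cyclically overlapping objects.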

3: Plane normal determination: this method applies to convex polyhedra without concavities, such as regular polyhedra or a crystal ball. Its principle is to compute the normal vector of each face; if the Z component of the normal vector is greater than 0 (i.e., the face points toward the observer), the face is a visible surface 716, whereas if the Z component of the normal vector is less than 0, it is judged to be a hidden surface 717 and need not be drawn, as shown in FIG. 14.
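The face-visibility test described above can be sketched directly: compute each face normal from its vertex winding and keep only the faces whose normal has a positive Z component. The triangle representation and the viewer looking down the -Z axis are assumptions of the example:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def face_normal(a, b, c):
    """Normal of triangle (a, b, c) from the cross product of two edges;
    the sign follows the counter-clockwise vertex winding."""
    u, v = sub(b, a), sub(c, a)
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def visible_faces(faces):
    """Keep only faces whose normal has a positive Z component, i.e.
    faces pointing toward the viewer; the rest are hidden faces of the
    convex solid and need not be drawn."""
    return [f for f in faces if face_normal(*f)[2] > 0]
```

For a convex solid this back-face test alone decides visibility, which is why the method is restricted to polyhedra without concavities.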

4. Surface-normal determination: a surface equation is used as the criterion. For example, to determine how an object is illuminated, the coordinate value of each point is substituted into the equation to obtain the normal vector, and the inner product of the normal vector and the light vector is computed to obtain the illumination. Drawing starts from the farthest point, so that nearer points cover farther points as they are drawn, thereby handling the depth problem.
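The inner-product illumination step can be sketched as follows (a minimal Lambertian-style illustration; the sphere surface and light direction are hypothetical examples, not the patent's):

```python
# Illumination from the inner (dot) product of the unit surface normal
# and the unit light direction, clamped at zero for faces turned away.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def brightness(normal, light_dir):
    return max(0.0, dot(normalize(normal), normalize(light_dir)))

# On a unit sphere x^2 + y^2 + z^2 = 1, the normal at a point equals the
# point's coordinates, so the surface equation directly yields the normal.
light = (0.0, 0.0, 1.0)
print(brightness((0.0, 0.0, 1.0), light))   # top of sphere, fully lit
print(brightness((0.0, 0.0, -1.0), light))  # bottom, facing away: dark
```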

5. Maximum-minimum method: drawing starts from the largest Z coordinate, and the maximum and minimum points, determined by the Y-coordinate values, decide which points must be drawn, forming a stereoscopic depth image 718, as shown in FIG. 15.
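The patent's wording here is terse; one common reading of a max-min rule is the floating-horizon idea, sketched below (an assumption on my part, not confirmed by the source): surfaces are drawn far-to-near, and in each screen column a point is drawn only if its Y value rises above the maximum, or falls below the minimum, already drawn in that column.

```python
# Floating-horizon-style max-min test (hypothetical interpretation):
# per column, track the highest and lowest y drawn so far.
max_y = {}  # column x -> highest y drawn so far
min_y = {}  # column x -> lowest y drawn so far

def try_draw(x, y):
    hi = max_y.get(x, float("-inf"))
    lo = min_y.get(x, float("inf"))
    visible = y > hi or y < lo  # outside both horizons -> must be drawn
    if visible:
        max_y[x] = max(hi, y)
        min_y[x] = min(lo, y)
    return visible

# Far curve first, then a nearer curve over the same columns.
far_drawn = [try_draw(x, y) for x, y in [(0, 3), (1, 4), (2, 3)]]
near_drawn = [try_draw(x, y) for x, y in [(0, 2), (1, 5), (2, 3)]]
print(near_drawn)  # the point (2, 3) is hidden by the far curve
```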

The effect of the stereoscopic image visual effect processing method of the present invention is that, by moving the cursor, the corresponding object changes its depth coordinate position so that its visual effect is highlighted. In addition, the other objects also change their relative coordinate positions accordingly, further accentuating the visual change of the image.

The above description is illustrative of the present invention rather than restrictive. Those of ordinary skill in the art will understand that many modifications, variations or equivalents may be made without departing from the spirit and scope defined by the appended claims, all of which will fall within the protection scope of the present invention.

Claims (6)

1. A stereoscopic image visual effect processing method, characterized by comprising the following steps:
providing a stereoscopic image, the stereoscopic image being composed of a plurality of objects, each of the plurality of objects having an object coordinate value;
providing a cursor, the cursor having a cursor coordinate value;
judging whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects;
if the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects, changing a depth coordinate parameter of the object coordinate value of the corresponding object; and
redrawing the image of the object that matches the cursor coordinate value.

2. The stereoscopic image visual effect processing method according to claim 1, wherein, if the cursor coordinate value changes, it is judged again whether the cursor coordinate value coincides with the object coordinate value of one of the plurality of objects.

3. The stereoscopic image visual effect processing method according to claim 1, wherein the plurality of object coordinate values are coordinate values corresponding to local coordinates, world coordinates, view coordinates or projection coordinates.

4. The stereoscopic image visual effect processing method according to claim 1, wherein the cursor coordinate value is generated by a mouse, a touchpad or a touch panel.

5. The stereoscopic image visual effect processing method according to claim 1, wherein the stereoscopic image is generated sequentially by the computer graphics steps of modeling, scene layout and animation generation, and rendering.

6. The stereoscopic image visual effect processing method according to claim 1, wherein the depth coordinate parameter of the object coordinate values of the plurality of objects is determined by one of the Z-buffer method, the painter depth-sorting method, the plane-normal determination method, the surface-normal determination method and the maximum-minimum method.
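The steps of claim 1 can be sketched as follows (a minimal illustration under assumed data structures; the object names, coordinates and the depth offset are hypothetical, and real cursor/object coincidence would normally use hit regions rather than exact equality):

```python
# Sketch of claim 1: when the cursor's coordinates coincide with an object's,
# change that object's depth coordinate parameter and redraw the object.
POP_OUT = -1.0  # hypothetical depth offset that brings the object forward

objects = {
    "icon_a": {"x": 10, "y": 20, "z": 0.0},
    "icon_b": {"x": 30, "y": 40, "z": 0.0},
}

redrawn = []  # record of which objects were redrawn

def on_cursor_move(cx, cy):
    for name, obj in objects.items():
        if (cx, cy) == (obj["x"], obj["y"]):  # cursor coincides with object
            obj["z"] += POP_OUT               # change its depth parameter
            redrawn.append(name)              # redraw the matching object

on_cursor_move(30, 40)
print(objects["icon_b"]["z"], redrawn)  # -1.0 ['icon_b']
```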
CN2011100719675A 2011-03-24 2011-03-24 Stereoscopic image visual effect processing method Pending CN102693065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100719675A CN102693065A (en) 2011-03-24 2011-03-24 Stereoscopic image visual effect processing method

Publications (1)

Publication Number Publication Date
CN102693065A true CN102693065A (en) 2012-09-26

Family

ID=46858571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100719675A Pending CN102693065A (en) 2011-03-24 2011-03-24 Stereoscopic image visual effect processing method

Country Status (1)

Country Link
CN (1) CN102693065A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106162142A (en) * 2016-06-15 2016-11-23 南京快脚兽软件科技有限公司 A kind of efficient VR scene drawing method
CN106406508A (en) * 2015-07-31 2017-02-15 联想(北京)有限公司 Information processing method and relay equipment
CN104268922B (en) * 2014-09-03 2017-06-06 广州博冠信息科技有限公司 A kind of image rendering method and image rendering device
CN108463837A (en) * 2016-01-12 2018-08-28 高通股份有限公司 System and method for rendering multiple detail grades
CN110091614A (en) * 2018-01-30 2019-08-06 东莞市图创智能制造有限公司 Three-dimensional image printing method, device, equipment and storage medium
WO2021110038A1 (en) * 2019-12-05 2021-06-10 北京芯海视界三维科技有限公司 3d display apparatus and 3d image display method
CN114332338A (en) * 2021-12-28 2022-04-12 北京世纪高通科技有限公司 Shadow rendering method, device, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6295062B1 (en) * 1997-11-14 2001-09-25 Matsushita Electric Industrial Co., Ltd. Icon display apparatus and method used therein
US20040230918A1 (en) * 2000-12-08 2004-11-18 Fujitsu Limited Window display controlling method, window display controlling apparatus, and computer readable record medium containing a program
CN101587386A (en) * 2008-05-21 2009-11-25 深圳华为通信技术有限公司 Cursor processing method, device and system

Similar Documents

Publication Publication Date Title
US20120229463A1 (en) 3d image visual effect processing method
KR101145260B1 (en) Method and apparatus for mapping a texture to a 3D object model
US10062199B2 (en) Efficient rendering based on ray intersections with virtual objects
JP5055214B2 (en) Image processing apparatus and image processing method
KR101919077B1 (en) Method and apparatus for displaying augmented reality
CN106780709A (en) A kind of method and device for determining global illumination information
CN102693065A (en) Stereoscopic image visual effect processing method
EP2051533A2 (en) 3D image rendering apparatus and method
WO2006122212A2 (en) Statistical rendering acceleration
CN105205861B (en) Tree three-dimensional Visualization Model implementation method based on Sphere Board
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
CN108804061A (en) The virtual scene display method of virtual reality system
CN103632390A (en) Method for realizing naked eye 3D (three dimensional) animation real-time making by using D3D (Direct three dimensional) technology
US9317967B1 (en) Deformation of surface objects
US9292954B1 (en) Temporal voxel buffer rendering
CN107689076B (en) A kind of efficient rendering intent when the cutting for system of virtual operation
CN114254501B (en) Large-scale grassland rendering and simulating method
CN115686202A (en) Three-dimensional model interactive rendering method across Unity/Optix platform
CA3143520A1 (en) Method of computing simulated surfaces for animation generation and other purposes
Abdallah et al. Internet-Based 3D Volumes with Signed Distance Fields: Establishing a WebGL Rendering Infrastructure
CN111625093B (en) Dynamic scheduling display method of massive digital point cloud data in MR (magnetic resonance) glasses
CN117058301B (en) Knitted fabric real-time rendering method based on delayed coloring
Öhrn Different mapping techniques for realistic surfaces
US8379030B1 (en) Computer graphics variable transformation interface
CN106600677B (en) To the processing method of conventional model in VR system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120926