KR20130137356A - A depth editing apparatus for 3-dimensional images and the editing method thereof - Google Patents


Info

Publication number
KR20130137356A
Authority
KR
South Korea
Prior art keywords
depth
dimensional
image
dimensional shape
authoring
Prior art date
Application number
KR1020120060866A
Other languages
Korean (ko)
Inventor
박영환
최연봉
임해용
이강호
Original Assignee
(주)리얼디스퀘어
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)리얼디스퀘어 filed Critical (주)리얼디스퀘어
Priority to KR1020120060866A priority Critical patent/KR20130137356A/en
Publication of KR20130137356A publication Critical patent/KR20130137356A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Abstract

The present invention relates to a depth editing apparatus for three-dimensional images and an editing method therefor. The apparatus displays a 3D image as a three-dimensional shape in a geometrically designed virtual three-dimensional space and rotates the shape while changing its viewpoint, so that the depth of an object can be inspected from various angles. A user can intuitively judge the distance between objects in the three-dimensional space, select an arbitrary object in the three-dimensional shape and modify its depth, or select and move a mesh point, and can immediately confirm the modified result in the three-dimensional space. [Reference numerals] (140) View setting (change); (AA) Depth of second object; (BB) Depth of first object

Description

Depth authoring tool for 3D images and authoring method thereof {A DEPTH EDITING APPARATUS FOR 3-DIMENSIONAL IMAGES AND THE EDITING METHOD THEREOF}

The present invention relates to a depth authoring tool for 3D images and an authoring method thereof, and more particularly, to a depth authoring tool and method that allow a user to intuitively judge and adjust the distance between objects by moving the viewpoint within a three-dimensional space, and to directly control the surface curvature of an object requiring correction by moving the mesh points of a specific object in the Z-axis direction.

Recently, as 3D displays have spread with the development of technology, user demand for 3D images has also increased. Accordingly, to supply 3D content that is still scarce, methods of converting existing 2D images into 3D images are widely used. However, because such 2D-to-3D conversion is complicated and delicate work, it is usually done by hand, requiring skilled personnel and long working hours. Recently, tools for converting images automatically have been developed.

However, conventional 3D image conversion tools do not provide a function for interactively checking the stereoscopic effect of the converted image, so obtaining a satisfactory 3D image takes much time and effort. For example, because a conventional conversion tool generates the depth of each object automatically, the result may look awkward and unnatural depending on the depths assigned. The user therefore has to check the converted 3D image and correct the incorrectly set depth of each object. To do so, an authoring tool is needed with which the user can intuitively check and adjust the stereoscopic effect of the 3D image.

However, to confirm the stereoscopic effect of a 3D image, the image must be played back on a 3D display device. That is, in the related art, the user reproduces the 3D image on a separate 3D display device, memorizes the incorrect parts, and then corrects the image again with the 3D conversion tool.

For reference, a 3D display device presents a different image to each of the viewer's eyes (e.g., a left image and a right image) so that the content appears stereoscopic. To view a stereoscopic image on such a device, liquid crystal shutter glasses or polarized glasses (hereinafter, 3D glasses) must be worn; to view a 3D image without glasses, a 3D display device with a parallax barrier must be used.

As described above, after modifying a 3D image with a conversion tool, the user must repeatedly play back the modified image on a separate 3D display device to check it, which is cumbersome and consumes a great deal of turnaround time.

The present invention was devised to solve these problems. It is an object of the present invention to provide a depth authoring tool for 3D images and an authoring method thereof with which the depth of each object can be displayed as a three-dimensional shape in a three-dimensional space on a 2D display device, so that a correction can be checked immediately without reproducing the 3D image on a separate 3D display device.

It is another object of the present invention to provide a depth authoring tool for 3D images and an authoring method thereof that display the objects, with the depths of the 3D image applied, as a three-dimensional shape in the three-dimensional space, and allow the shape to be moved or rotated to the user's desired viewing point (VP), so that the distance between objects can be checked intuitively.

It is another object of the present invention to provide a depth authoring tool for 3D images and an authoring method thereof that display the position of the screen together with the three-dimensional shape in the three-dimensional space, so that it can be checked intuitively whether each object protrudes in front of the screen or recedes behind it.

It is another object of the present invention to provide a depth authoring tool for 3D images and an authoring method thereof that display the 3D image as a three-dimensional shape in the three-dimensional space and allow the mesh points of the shape to be moved directly in the Z-axis direction, so that not only the depth but also the surface curvature of an object requiring correction can be checked and adjusted directly.

A depth authoring method of a 3D image according to an aspect of the present invention may include: a first step of mapping a source image using at least one specific map; a second step of outputting the mapped image as a three-dimensional shape on a geometric virtual three-dimensional space; a third step of moving the three-dimensional shape to a desired viewing point to check the depth of a specific object, and then pushing or pulling the object in a desired direction to modify the depth; and a fourth step of applying the corrected information directly to the three-dimensional shape and outputting it. The specific map is at least one of a depth map, a displacement map, a texture map, and a combination thereof. The method may further include a fifth step of modeling the three-dimensional shape by applying a mesh, in which the surface curvature is modified by selecting at least one mesh point of the mesh model and moving it in a desired direction. A screen plane serving as the reference for depth is further displayed in the geometric virtual three-dimensional space, so that it can be checked intuitively whether an object protrudes in front of the screen or recedes behind it. The three-dimensional shape displays the length and direction of the vanishing lines and the position of the vanishing point differently according to the depth set for each object. The method may further include a sixth step of continuously outputting at least one image composed of the three-dimensional shape to reproduce a stereoscopic 3D video in the geometric three-dimensional space, and the viewpoint can be changed during the reproduction of the stereoscopic 3D video. The geometric virtual three-dimensional space and the three-dimensional shape output on the space are designed to convey a stereoscopic feeling by applying at least one empirical factor among linear perspective, occlusion of objects, motion parallax, relative size and density, atmospheric perspective, change of texture, and combinations thereof.

A depth authoring tool for a 3D image according to another aspect of the present invention may include: a user interface unit for calling at least one specific map to map a source image, or for performing operations related to correcting or checking the mapped image; an authoring unit which, according to the manipulation, performs the mapping operation on the source image or outputs the mapped image as a three-dimensional shape on a geometric virtual three-dimensional space, shifts the viewpoint of the three-dimensional shape, and pushes or pulls the depth of a specific object in a desired direction within the shape; and an output unit configured to display the three-dimensional shape output from the authoring unit on the virtual three-dimensional space and, when the depth of a specific object is modified, to apply the modified information directly to the three-dimensional shape. The three-dimensional shape is modeled by applying a mesh, and the surface curvature is modified by selecting at least one mesh point of the mesh model and moving it in a desired direction. A screen plane serving as the reference for depth is further displayed in the geometric virtual three-dimensional space, so that it can be checked intuitively whether an object protrudes in front of the screen or recedes behind it. At least one image composed of the three-dimensional shape is continuously output to reproduce a stereoscopic 3D video in the geometric three-dimensional space, and the tool is configured to allow the viewpoint to be changed during the reproduction of the stereoscopic 3D video.

According to the present invention, even without reproducing the 3D image on a separate 3D display device, the 3D image is displayed as a three-dimensional shape in a three-dimensional space on a 2D display device, so that the depth applied to each object can be corrected and confirmed at the same time. By allowing the user to move or rotate the three-dimensional shape to a desired viewpoint (VP), the distance between objects can be checked intuitively; by displaying the position of the screen that serves as the depth reference in the three-dimensional space, it can be checked intuitively whether an object protrudes in front of the screen or recedes behind it; and by moving the mesh points of an object displayed as the three-dimensional shape in the Z-axis direction, not only the depth but also the surface curvature of the object can be checked and corrected.

FIG. 1 is an exemplary view showing an execution screen of a depth previewer in a depth authoring tool of a 3D image according to an embodiment of the present invention.
FIG. 2 is an exemplary diagram for explaining the empirical factors used for the three-dimensional display of the depth previewer function in FIG. 1.
FIG. 3A to FIG. 3D are exemplary diagrams for describing a depth authoring method using a depth authoring tool of a 3D image according to an embodiment of the present invention.
FIG. 4 is an exemplary diagram for describing a method of displaying a three-dimensional shape from various viewpoints using the depth previewer execution screen according to an embodiment of the present invention.
FIG. 5 is an exemplary diagram for describing a method of correcting the surface curvature of an object in the depth previewer execution screen according to an embodiment of the present invention.
FIG. 6 is an exemplary view showing an execution screen of the depth previewer displaying a final object for which the modification of depth or mesh points has been completed, according to an embodiment of the present invention.
FIG. 7 is a block diagram schematically illustrating a configuration of a depth authoring tool for a 3D image according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating a depth authoring method of a 3D image according to an embodiment of the present invention.

Hereinafter, a depth authoring tool for a 3D image and an authoring method thereof according to the present invention will be described with reference to the accompanying drawings.

The depth authoring tool of a 3D image according to an embodiment of the present invention includes a depth previewer function. Accordingly, even without using the 3D display device, the 3D image may be interactively checked through the depth previewer function implemented in the 2D display device.

FIG. 1 is an exemplary view showing an execution screen of a depth previewer in a depth authoring tool of a 3D image according to an embodiment of the present invention. As shown in the drawing, the depth previewer renders a geometric three-dimensional space on a 2D display device so that a stereoscopic feeling can be perceived through empirical factors, and displays the 3D image as a three-dimensional shape in that space.

The empirical factors include: (a) linear perspective, (b) occlusion of objects, (c) motion parallax, (d) relative size and density, (e) atmospheric perspective, and (f) change of texture.

FIG. 2 is an exemplary view for explaining the empirical factors used to display the three-dimensional shape of the depth previewer function in FIG. 1. (a) Linear perspective: when two lines are drawn along the same object in a two-dimensional image, the part of the object closest to the vanishing point, where the lines meet, feels farther away than the part far from the vanishing point. (b) Occlusion: an object that hides another feels closer than the object it hides. (c) Motion parallax: an object that moves quickly across the view feels closer than one that moves slowly. (d) Relative size and density: among objects of the same size, nearby ones appear relatively large and distant ones relatively small, while among objects of the same density, the density of nearby objects feels lower than that of distant objects. (e) Atmospheric perspective: distant objects appear hazy while nearby objects appear clear. (f) Change of texture: the texture of a nearby object is felt more distinctly than the texture of a distant one.

Accordingly, the depth previewer uses these empirical factors (in particular the vanishing line and vanishing point, which are geometric factors) to design a virtual three-dimensional space with a specific viewpoint (VP) on the 2D display device, and displays the 3D image as a three-dimensional shape in that space. That is, the three-dimensional shape is distinct from a 3D image in which a left image and a right image are superimposed; it is an image that conveys a stereoscopic feeling within the geometric virtual three-dimensional space.
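As a rough, minimal sketch (not taken from the patent), the following one-point perspective projection shows the geometric idea: points with a larger z project closer to the vanishing point and shrink, which is the linear-perspective cue described above. The focal length and screen-center values are illustrative assumptions.

```python
# Minimal sketch of a one-point perspective projection: the farther a
# point lies along z, the closer its projection falls to the vanishing
# point (cx, cy), so distant objects appear smaller. The values of f,
# cx, and cy are illustrative assumptions.
import numpy as np

def project(points: np.ndarray, f: float = 500.0,
            cx: float = 320.0, cy: float = 240.0) -> np.ndarray:
    """Project (N, 3) camera-space points (z > 0) to (N, 2) screen coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([cx + f * x / z, cy + f * y / z], axis=-1)
```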

The depth previewer sets a ground plane by designating the horizontal line of an image, and then assigns a depth to each object on the ground plane using a depth map. At this time, the length and direction of the vanishing lines representing the depth and the position of the vanishing point are displayed differently according to the depth set for each object, so that the distance between objects can be confirmed intuitively.

At this time, to make it easier to check the depth of each object, a ruler (or grid) with fixed intervals may be displayed on one side (e.g., bottom, left, right, or top) of the geometric virtual three-dimensional space 120, and a specific line 121 of the ruler may be drawn thicker or darker.

In addition, a screen plane (see FIG. 4) may be displayed in order to determine whether each object recedes behind or protrudes in front of the screen according to its depth. For example, in FIG. 1, an object with a large depth (e.g., a background, sky, or horizon) is displayed closer to the vanishing point and smaller than an object in the foreground, so that it appears to recede, while a foreground object is displayed farther from the vanishing point than the background objects and pops forward, so that it appears more prominent.
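As a minimal sketch of this screen-plane check (the sign convention that a smaller z is nearer the viewer is an assumption, not specified by the patent):

```python
def screen_relation(object_z: float, screen_z: float) -> str:
    """Classify an object against the screen plane; smaller z is assumed nearer."""
    if object_z < screen_z:
        return "protrudes in front of the screen"
    return "recedes behind the screen"
```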

As described above, the present invention designs and displays a geometric virtual three-dimensional space on a 2D display device by applying empirical factors (in particular the vanishing point and vanishing line factors), and displays the 3D image as the three-dimensional shape 110 on that virtual space so that a stereoscopic feeling can be perceived.

In other words, the depth previewer execution screen designs a geometric virtual three-dimensional space on a 2D display device and displays the 3D image reconstructed as a three-dimensional shape in that space. Since the 2D display device can thus present the 3D image stereoscopically even without a 3D display device, the depth applied to each object can be checked and modified intuitively.

The depth authoring tool of a 3D image according to an embodiment of the present invention displays a source image as a three-dimensional shape by loading the source image and applying a depth through the depth previewer execution screen. That is, the 3D image can be previewed stereoscopically on the depth previewer execution screen, and its depth or surface curvature (the surface curvature expressed by mesh points) can be edited and corrected.

In addition, the depth previewer displays the 3D image shown as a three-dimensional shape on the geometric virtual three-dimensional space as a three-dimensional mesh model, or applies displacement mapping or texture mapping to it. The stereoscopically displayed three-dimensional shape may be moved to a desired viewpoint (VP) or rotated (e.g., up to 180 degrees up/down and left/right).

Here, changing the viewpoint VP actually means rotating the three-dimensional space according to the viewpoint, so that the viewpoint changes together with the three-dimensional shape displayed in that space.

In addition, the depth authoring tool allows the user to select an object of the 3D image displayed as a three-dimensional shape on the depth previewer execution screen and directly modify its depth or the mesh points expressing its surface curvature, to confirm the correction stereoscopically in the three-dimensional space at the same time, and to store and update the modified 3D image immediately.

In addition, the depth authoring tool of the 3D image includes control buttons 130 for manipulating the depth previewer and editing the 3D image: for example, control buttons for calling or applying a source image, a depth map, a texture map, or a displacement map, and control buttons for selecting an object displayed on the depth previewer execution screen and modifying or storing its depth or mesh points.

FIG. 3A to FIG. 3D are exemplary views for describing a depth authoring method using a depth authoring tool of a 3D image according to an embodiment of the present invention.

As shown in FIG. 3A, the user first loads the depth map of the 3D image to be authored. The depth map represents the distance information of a 3D image as an image and is generally provided as a gray image with 256 depth levels; the pixel value of the depth map means the depth at the corresponding position. The depth map may be displayed on the depth previewer execution screen, or it may remain hidden until the source image is loaded and the depth is applied. The order of loading the depth map and the source image is not fixed and may be reversed.
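A minimal sketch of reading such a depth map follows; the file name and the convention that 255 is nearest are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch: load a 256-level grayscale depth map and read the
# depth at a pixel. "depth_map.png" and the 255-is-nearest convention
# are assumptions for illustration.
import numpy as np
from PIL import Image

depth_map = np.asarray(Image.open("depth_map.png").convert("L"))  # (H, W), values 0..255

def depth_at(x: int, y: int) -> float:
    """Return the normalized depth in [0, 1] at pixel (x, y)."""
    return float(depth_map[y, x]) / 255.0
```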

After loading the depth map as described above, displacement mapping is performed as shown in FIG. 3B. Displacement mapping is a modeling technique that automatically forms surface curvature from the light and dark values of a bitmap image, which is useful for curved surfaces that are difficult to model directly: in general, brighter parts are modeled to protrude and darker parts to recede. That is, displacement mapping deforms the surface of an object three-dimensionally by actually moving surface points in space to create concave and convex areas, expressing subtle relief on the model surface without complex modeling.
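A minimal sketch of this idea, assuming a planar vertex grid and a brightness-to-height scale chosen for illustration (not the patent's actual modeling code):

```python
# Minimal sketch of displacement mapping: brighter pixels push grid
# vertices forward along z, darker pixels leave them recessed.
import numpy as np

def displace_plane(depth_map: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Turn an (H, W) brightness image into an (H, W, 3) displaced vertex grid."""
    h, w = depth_map.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))   # one vertex per pixel
    zs = (depth_map.astype(float) / 255.0) * scale     # bright protrudes, dark recedes
    return np.stack([xs, ys, zs], axis=-1)
```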

After the displacement mapping is performed as described above, the texture map is loaded as shown in FIG. 3C and texture mapping is performed. Texture mapping is the process of mapping a texture image onto a surface, that is, of attaching a rectangular planar image composed of pixels to an object located in space, treating the surface of the object so that its material can be recognized. For example, when a chair is modeled, the seating area where a person sits can be given a leather surface and the legs a wood or metal surface. Texture mapping thus has the advantage of making an object appear more realistic by applying an image to the 3D model.
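A minimal sketch of the lookup at the heart of texture mapping, with nearest-neighbour sampling as an illustrative simplification:

```python
# Minimal sketch: each mesh vertex carries UV coordinates in [0, 1]
# that address a texel of the rectangular texture image.
import numpy as np

def sample_texture(texture: np.ndarray, u: float, v: float) -> np.ndarray:
    """Return the texel of an (H, W, 3) texture at UV coordinates (u, v)."""
    h, w = texture.shape[:2]
    return texture[int(v * (h - 1)), int(u * (w - 1))]
```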

After the displacement mapping and the texture mapping are performed as described above, the object appears three-dimensional and realistic, as shown in FIG. 3D.

However, since the 3D image illustrated in FIG. 3D is viewed from the front viewpoint (VP), it is not easy to compare the depths of the objects included in the image, nor to judge whether the surface curvature of the object to which displacement mapping was applied is formed properly. Therefore, in order to compare depths between objects or to judge the state of the surface curvature, the 3D image needs to be viewed from the left/right side or from the top/bottom, because some parts may be hidden by other objects.

Note that the mapping processes described above need not all be performed; they may be performed selectively and in combination. For example, a mesh may be applied after texture-mapping the depth map image, or after texture-mapping the source image.

FIG. 4 is an exemplary diagram for describing a method of displaying the three-dimensional shape from various viewpoints using the depth previewer execution screen according to an embodiment of the present invention; the three-dimensional shape 110 viewed from the side (a side-view three-dimensional shape) is displayed on the depth previewer execution screen.

Since the three-dimensional shape 110 is displayed so that the first object 111 protrudes in front of the screen 150 and the second object 112 recedes behind the screen 150 according to their depths, it can be seen intuitively how the actual depth of each object is applied. The surface curvature of an object can also be checked; for example, it can be confirmed that the parts of the face of the first object 111 that should protrude further (e.g., the nose, forehead, eyes, and chin) 160 are formed correctly.

In this case, the viewpoint from which the three-dimensional shape 110 is displayed is not fixed; the viewpoint VP may be changed through the viewpoint setting menu (or viewpoint changing menu) 140. That is, a desired viewpoint (VP) can be selected by entering the axis to rotate about (e.g., x, y, z), the rotation direction (e.g., clockwise or counterclockwise), and the rotation angle (e.g., 0 to 180 degrees) in the viewpoint setting menu 140. Changing the viewpoint VP rotates the geometric three-dimensional space in which the three-dimensional shape 110 is displayed according to the direction from which the user views it, so the three-dimensional shape rotates together with the space.
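A minimal sketch of such a viewpoint change, rotating the scene's vertices about a chosen axis with plain rotation matrices (the function names are illustrative assumptions):

```python
# Minimal sketch: rotate all vertices of the three-dimensional shape
# about the x, y, or z axis by a given angle, which is what the
# viewpoint setting menu effectively does to the whole space.
import numpy as np

def rotation_matrix(axis: str, degrees: float) -> np.ndarray:
    t = np.radians(degrees)
    c, s = np.cos(t), np.sin(t)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # z axis

def change_viewpoint(vertices: np.ndarray, axis: str, degrees: float) -> np.ndarray:
    """Rotate an (N, 3) vertex array to simulate a new viewing point (VP)."""
    return vertices @ rotation_matrix(axis, degrees).T
```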

Here, the viewpoint setting (viewing point change) menu is shown for illustrative purposes only; in practice, one side of the three-dimensional shape (or three-dimensional space) may also be selected with an input device (e.g., a mouse) and rotated by dragging in the desired direction.

As described above, by checking the three-dimensional shape from various viewpoints (or various angles) through the viewpoint setting (or viewpoint change), it is possible to see how the hole areas are formed and to determine which viewpoint VP can be used; that is, the three-dimensional shape can be rotated (or moved) to a viewpoint VP from which the hole area is not visible.

By displaying the three-dimensional shape 110 from an arbitrary viewpoint (VP) as described above and comparing the depth of each object, a specific object can be selected and pushed or pulled in the desired direction (e.g., the z-axis direction) so that it is naturally modified to the desired depth value. At this time, the surface curvature of the specific object is corrected by displaying it as a mesh model.

FIG. 5 is an exemplary diagram for describing a method of correcting the surface curvature of an object in the depth previewer execution screen according to an embodiment of the present invention.

As shown in the drawing, the object is displayed as a 3D mesh model to correct its surface curvature. The mesh model may be displayed with the source image applied, or the mesh alone may be displayed without the source image; FIG. 5 shows a mesh model with the source image applied.

The three-dimensional mesh model is a set of polygons forming a surface in three-dimensional space. Generally, a mesh model in which the surface is expressed only with triangles is called a triangular mesh, and one expressed only with quadrangles is called a square mesh. Several polygon types can be mixed to represent the surface, but this increases complexity, so the triangular mesh is mainly used. The three-dimensional mesh model is composed of a number of three-dimensional points in space; such a point is called a vertex (mesh point). A straight line connecting two vertices is called an edge (or mesh line), and a surface created by connecting three or more edges is called a face.
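A minimal sketch of this structure, with the container layout being an illustrative assumption:

```python
# Minimal sketch of a triangular mesh: vertices (mesh points), faces
# (vertex index triples), and edges (mesh lines) derived from the faces.
from dataclasses import dataclass, field

@dataclass
class TriangleMesh:
    vertices: list[tuple[float, float, float]] = field(default_factory=list)  # mesh points
    faces: list[tuple[int, int, int]] = field(default_factory=list)           # index triples

    def edges(self) -> set[tuple[int, int]]:
        """Mesh lines: each triangular face contributes three undirected edges."""
        out: set[tuple[int, int]] = set()
        for a, b, c in self.faces:
            for e in ((a, b), (b, c), (c, a)):
                out.add((min(e), max(e)))
        return out
```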

Therefore, the user selects at least one mesh point, mesh line, or face to be modified in the three-dimensional mesh model and changes the shape by stretching it along the desired axis (x, y, z) or moving it to the desired position. When the deformation (or modification) of the 3D mesh model is completed, the modified model is stored. To make the correction of a depth or mesh point easier, the depth previewer may also enlarge and display a specific portion of the three-dimensional shape.
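A minimal sketch of this editing step, assuming the vertices are held in a NumPy array (axis index 2 corresponds to the z direction mentioned in the text; everything else is illustrative):

```python
# Minimal sketch: push (+offset) or pull (-offset) selected mesh points
# along one axis and return the edited copy for immediate redisplay.
import numpy as np

def move_mesh_points(vertices: np.ndarray, selected: list[int],
                     axis: int = 2, offset: float = 0.0) -> np.ndarray:
    """Move the chosen vertices of an (N, 3) array along the given axis."""
    edited = vertices.copy()
    edited[selected, axis] += offset
    return edited
```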

When the correction of the depth and of the surface curvature using the mesh model is completed as described above, a final object having a three-dimensional shape is obtained, as shown in FIG. 6. FIG. 6 is an exemplary view showing an execution screen of the depth previewer displaying a final object for which the modification of depth or mesh points has been completed, according to an embodiment of the present invention.

On the other hand, when the correction of the 3D images (the frame images constituting a 3D video) is completed as described above, the frames converted into 3D images can be played back continuously using the depth previewer. That is, the stereoscopic 3D video can be reproduced in the geometric three-dimensional space designed on the 2D display device, and the playback viewpoint VP of the reproduced video can obviously be changed interactively. In other words, by changing the viewpoint of the three-dimensional space of the 2D display device, each 3D image displayed as a three-dimensional shape in that space is played back continuously from the same viewpoint (VP). Through the stereoscopic 3D video reproduced at the user's desired viewpoint (VP), the user can confirm whether the depth information is implemented correctly and what visual effect it produces.

FIG. 7 is a block diagram schematically illustrating a configuration of a depth authoring tool of a 3D image according to an embodiment of the present invention. As shown therein, the depth authoring tool includes: a user interface unit 210 for calling the source image and the maps required for 3D conversion and for performing operations related to authoring the 3D image using the loaded maps; an authoring unit 220 for processing the mapping operations or the correction of depths and mesh points according to the information input through the user interface unit 210; an output unit 230 for displaying the 3D image processed in the authoring unit 220 as a three-dimensional shape on the geometrically designed three-dimensional space through the depth previewer execution screen; and a storage unit 240 for storing the various map information required for the conversion and the finally converted 3D image.

The user interface unit 210 includes control buttons for manipulation related to depth authoring of a 3D image. The control buttons include at least one of buttons for loading a source image, loading a depth map, loading a displacement map, loading a texture map, changing the viewpoint, changing the modeling, and entering a depth value or a mesh point value, and combinations thereof.

The control buttons need not be configured as buttons; they may also be implemented with images or text. In addition, the depth value or mesh point value may be entered directly with a keyboard, or a specific object, mesh point, or mesh line may be selected with a mouse in the three-dimensional shape and pushed or pulled in the desired direction (e.g., the z-axis direction), in which case the value corresponding to the adjusted position is entered automatically.

Likewise, the values for the viewpoint setting (viewing point change), such as the reference axis, rotation direction, and rotation angle, may be entered directly with a keyboard, or a specific point may be selected with a mouse and dragged in the desired direction, in which case the value corresponding to the angle is entered automatically.

The authoring unit 220 performs the mapping operation on the source image loaded through the user interface unit 210 using the specific maps (e.g., a depth map, a displacement map, and a texture map), and converts the processed 3D image into a three-dimensional shape to display it on the geometric three-dimensional space. The converted three-dimensional shape is output through the output unit 230. In addition, when the three-dimensional shape is modified according to values entered with an input device (e.g., keyboard or mouse), the modified values (e.g., depth values, mesh point values, or viewpoint settings) are reflected in the three-dimensional shape and processed immediately.

The output unit 230 outputs the three-dimensional shape converted by the authoring unit 220 through the depth previewer execution screen. When the three-dimensional shape is modified, the modified shape is output immediately, and the authoring unit 220 outputs the image frames composed of the three-dimensional shape continuously so that a stereoscopic 3D video is output.

The storage unit 240 may store and update various maps necessary for converting a 3D image such as a source image, a depth map, a displacement map, and a texture map.

FIG. 8 is a flowchart illustrating a depth authoring method of a 3D image according to an embodiment of the present invention. As shown in FIG. 8, the source image to be authored is loaded (S101) and the depth map of the source image is loaded (S102); the depth map and the source image may also be loaded in the reverse order. Depth mapping is then performed on the source image (S103). Typically, a left image and a right image are generated through such depth mapping, but the present invention instead uses the depth map to generate a three-dimensional shape to be output in the geometrically designed three-dimensional space.

Subsequently, the displacement map is loaded (S104) and displacement mapping is performed on the source image (or the depth-mapped image) (S105); then the texture map is loaded (S106) and texture mapping is performed (S107).

The order of displacement mapping and texture mapping may be changed; that is, texture mapping may be executed first and displacement mapping afterwards, or texture mapping may be performed without executing displacement mapping at all.

By performing at least one of texture mapping, displacement mapping, depth mapping, and a combination thereof as described above, a realistic and stereoscopic three-dimensional shape is generated, which the present invention displays through the depth previewer execution screen. The three-dimensional shape may be displayed with a changed viewpoint, and after each mapping step the viewpoint (VP) of the shape may be changed at any time to compare the depths of the objects or to check the surface curvature (S108).
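Tying the sketches above together, a hypothetical run of the S101 to S108 flow could look as follows; the file names and the scale and angle values are illustrative assumptions, reusing displace_plane and change_viewpoint from the earlier sketches.

```python
import numpy as np
from PIL import Image

source = np.asarray(Image.open("source.png"))                   # S101: load source image
depth  = np.asarray(Image.open("depth_map.png").convert("L"))   # S102: load depth map
mesh   = displace_plane(depth, scale=40.0)                      # S103/S105: geometry from depth
side   = change_viewpoint(mesh.reshape(-1, 3), "y", 90.0)       # S108: rotate to a side view
```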

In addition, the image displayed as a three-dimensional shape on the virtual three-dimensional space can be displayed as a mesh model, and even in the mesh model display, the three-dimensional space can be rotated, that is, the viewpoint (VP) of the three-dimensional shape can be changed.

In addition, as described above, the depth of the 3D image is checked and modified while the image is displayed as a three-dimensional shape in the three-dimensional space or as a three-dimensional mesh model, and the process of checking and modifying the depth and the mesh points while changing the viewpoint of the three-dimensional shape may be performed repeatedly.

When the checking and correction of the three-dimensional shape is completed, the object is finally stored (S110). That is, by storing the final depth map of the object, a left image and a right image of the 3D image can be generated using the final depth map.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Accordingly, the technical scope of the present invention should be defined by the following claims.

210: user interface unit 220: authoring unit
230: output unit 240: storage unit

Claims (11)

A first step of mapping a source image using at least one specific map;
A second step of outputting the mapped image in a three-dimensional shape on a geometric virtual three-dimensional space;
A third step of moving the three-dimensional shape to a desired viewing point to check a depth of a specific object and then pushing or pulling in a desired direction to modify the depth; And
And a fourth step of applying the corrected information directly to the three-dimensional shape and outputting it.
The depth authoring method of claim 1, wherein the specific map is at least one of a depth map, a displacement map, a texture map, and a combination thereof.
The method of claim 1, further comprising a fifth step of modeling the three-dimensional shape by applying a mesh, wherein the surface curvature is modified by selecting at least one mesh point of the mesh model and moving it in a desired direction.
The method of claim 1, wherein a screen plane for determining depth is further displayed in the geometric virtual three-dimensional space, so that it can be checked intuitively, with the screen as the reference, whether an object protrudes in front of the screen or recedes behind it.
The method of claim 1, wherein the three-dimensional shape displays the length and direction of the vanishing lines representing the depth, and the position of the vanishing point, differently according to the depth set for each object.
The method of claim 1, further comprising a sixth step of continuously outputting at least one image composed of the three-dimensional shape and reproducing a stereoscopic 3D video in the geometric three-dimensional space, wherein the viewpoint can be changed during the playback of the stereoscopic 3D video.
The method of claim 1, wherein the geometric virtual three-dimensional space and the three-dimensional shape output on the space are generated to convey a stereoscopic feeling by applying at least one empirical factor among linear perspective, occlusion of objects, motion parallax, relative size and density, atmospheric perspective, change of texture, and combinations thereof.
A user interface unit capable of importing at least one specific map to map a source image or performing an operation related to correcting or confirming the mapped image;
An authoring unit which, according to the manipulation, performs the mapping operation on the source image or outputs the mapped image as a three-dimensional shape on a geometric virtual three-dimensional space, shifts the viewpoint of the three-dimensional shape, and pushes or pulls the depth of a specific object in a desired direction within the shape; And
An output unit configured to display the three-dimensional shape output from the authoring unit on the virtual three-dimensional space and, when the depth of a specific object is modified, to apply the modified information directly to the three-dimensional shape.
The depth authoring tool of claim 8, wherein the three-dimensional shape is modeled by applying a mesh, and the surface curvature is modified by selecting at least one mesh point of the mesh model and moving it in a desired direction.
The depth authoring tool of claim 8, wherein a screen plane for determining depth is further displayed in the geometric virtual three-dimensional space, so that it can be checked intuitively, with the screen as the reference, whether an object protrudes in front of the screen or recedes behind it.
The depth authoring tool of claim 8, wherein at least one image composed of the three-dimensional shape is continuously output and reproduced as a stereoscopic 3D video in the geometric three-dimensional space, and the viewpoint can be changed during the reproduction of the stereoscopic 3D video.
KR1020120060866A 2012-06-07 2012-06-07 A depth editing apparatus for 3-dimensional images and the editing method thereof KR20130137356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120060866A KR20130137356A (en) 2012-06-07 2012-06-07 A depth editing apparatus for 3-dimensional images and the editing method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120060866A KR20130137356A (en) 2012-06-07 2012-06-07 A depth editing apparatus for 3-dimensional images and the editing method thereof

Publications (1)

Publication Number Publication Date
KR20130137356A true KR20130137356A (en) 2013-12-17

Family

ID=49983606

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120060866A KR20130137356A (en) 2012-06-07 2012-06-07 A depth editing apparatus for 3-dimensional images and the editing method thereof

Country Status (1)

Country Link
KR (1) KR20130137356A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160058563A (en) * 2014-11-17 2016-05-25 한국전자통신연구원 Method and Apparatus for verifying 3D mesh



Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right