KR20140145217A - 3d virtual modeling system using spatial information and method thereof - Google Patents

3d virtual modeling system using spatial information and method thereof Download PDF

Info

Publication number
KR20140145217A
Authority
KR
South Korea
Prior art keywords
dimensional
space
dimensional space
spatial information
virtual object
Prior art date
Application number
KR20130061841A
Other languages
Korean (ko)
Inventor
박재범
Original Assignee
(주)브이알엑스
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)브이알엑스 filed Critical (주)브이알엑스
Priority to KR20130061841A priority Critical patent/KR20140145217A/en
Publication of KR20140145217A publication Critical patent/KR20140145217A/en

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a three-dimensional virtual modeling system for performing three-dimensional virtual modeling in real time by using spatial information, a method therefor, and a computer-readable recording medium storing a program for realizing the method. The three-dimensional virtual modeling method for the three-dimensional virtual modeling system comprises the steps of: (a) recognizing a three-dimensional space; (b) obtaining three-dimensional spatial information from the recognized three-dimensional space; (c) arranging a three-dimensional virtual object in the recognized three-dimensional space by using the obtained three-dimensional spatial information; (d) displaying "the three-dimensional space in which the three-dimensional virtual object is arranged"; (e) recognizing a displacement value of the three-dimensional space; (f) changing the obtained three-dimensional spatial information according to the recognized displacement value; (g) rearranging the arranged three-dimensional virtual object by using the recognized displacement value and the changed three-dimensional spatial information; and (h) displaying "the three-dimensional space in which the three-dimensional virtual object is rearranged".

Description

TECHNICAL FIELD [0001] The present invention relates to a 3D virtual modeling system using spatial information and a method thereof.

The present invention relates to a three-dimensional virtual modeling system, a method thereof, and a computer-readable recording medium storing a program for realizing the method. More particularly, the present invention relates to a three-dimensional virtual modeling system capable of performing three-dimensional virtual modeling in real time by using spatial information, a method thereof, and a computer-readable recording medium storing a program for realizing the method.

As society evolved from an industrial society into an information society, construction companies introduced virtual reality (VR) sample houses that can replace the function of the existing sample house.

First, an outline of virtual reality and of prior research is given as follows.

Virtual reality is a combination of the real and the unreal. The term was first coined by Jaron Lanier of VPL Research in the US in 1989. Virtual reality technology gives the feeling of actually experiencing a building before construction, through an environment and situation realized by a computer system or through simulation. In other words, virtual reality technology can be described as 'a technology that makes people feel a sense of reality in a cyber space created by a computer'.

These virtual reality technologies can be classified into immersive virtual reality (immersive VR) systems, desktop/vehicle virtual reality (VR), and augmented reality. Here, an immersive VR system is one in which a user, wearing the necessary basic equipment, is completely immersed in a three-dimensional space created by a computer and experiences and interacts with the defined world. The desktop/vehicle VR method presents the virtual world in perspective on a conventional monitor screen.

The desktop (non-immersive) virtual reality method is the one most commonly used for existing cyber model houses. When this method is divided into three categories, they are: a panorama format using QTVR (QuickTime Virtual Reality), which is the most widely used; a format using X3D (Extensible 3D), an alternative to VRML (Virtual Reality Modeling Language); and a format using a dedicated tool (TurnTool).

Next, a cyber model house using VRML will be described as follows.

VRML is a language for modeling the 3D world and multimedia such as sound and video on the WWW (World Wide Web). Accordingly, Internet virtual reality (IVR) can be used to let multiple observers directly experience a desired space from any place with an Internet connection.

VRML is a language specification for describing three-dimensional virtual space. Just as an HTML (HyperText Markup Language) document is a script file built from tags, a VRML scene is described with tags, and each object is a node. VRML is surprisingly simple considering the significant amount of data a 3D scene carries: a text-format script is entered in an ordinary editor and saved as a file with the extension *.wrl.

A VRML-formatted 3D space description file (*.wrl) placed on a WWW server on the Internet is transmitted at a user's request, and a VRML browser installed on the user's computer renders the text-format data into three-dimensional data, so that the three-dimensional virtual space is displayed on the display device. The displayed three-dimensional virtual space can then be navigated and controlled through a mouse or another input device such as a keyboard. Further, links can be attached to objects in the three-dimensional space, so that clicking an object with the mouse can control it or open a two-dimensional HTML file.

In the field of architecture, VRML is used to effectively advertise already-designed buildings, and CAD (Computer Aided Design) data can be utilized effectively by using converters. In addition, a building can be simulated with VRML before completion, so that a user can virtually walk inside the finished building.

VRML is certified by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as a standard for representing 3D graphics on the Internet, and it supports both image-based 3D and modeling-based 3D. Because the specification is freely licensed, anyone can freely publish 3D content on the web, and, like HTML, files can be created and edited in any text editor without a special authoring tool.

On the other hand, VRML has the disadvantage of slow processing speed. Although VRML uses a much simpler format than earlier three-dimensional data representations, file loading is still unsatisfactory given current network and rendering performance. This may not be the fault of VRML itself, but a structure that improves the speed needs to be developed. In addition, the screen quality of VRML reduces the user's sense of immersion. Although VRML provides a framework for interacting with the virtual world, it remains unsatisfactory and requires many functional and performance improvements.

Next, a cyber model house using the panorama VR will be described as follows.

Panorama VR is a technique for experiencing image-based space on the web. A representative panorama VR technology is QTVR, implemented on the basis of Apple Computer's QuickTime. QTVR is based on still photographs.

FIG. 1 is a diagram showing the operating principle of conventional QTVR.

As shown in FIG. 1, QTVR in the panorama VR format is the most widely used. QTVR has a center point, as shown in the figure, and the camera at this center point is the viewpoint from which the viewer looks at the virtual space. From this center point, the viewer can look anywhere within the circular panorama. QTVR produces its result from photographs of the actual site: to obtain the photographs for a scene, the camera is rotated by a fixed angle, as shown in FIG. 2, and photographs are taken over the full 360 degrees.

FIG. 2 is a view showing the photographing principle of a conventional panorama VR camera.

The QTVR used in conventional VR model houses is composed of a simple six-faced cube that lets the virtual reality observer look around a room photographed from the actual site. Each face of the cube is obtained by photographing while the camera rotates at the center point.

FIG. 3 is a diagram showing an example of a conventional QTVR cyber model house.

Since a QTVR cyber model house is image-based, it realizes virtual reality with pre-rendered images, so it places less load on the computer and has a shorter waiting time than modeling-based web 3D technologies.

On the other hand, a QTVR cyber model house can be produced only after a sample house has actually been built and surveyed. Because the camera is fixed at the center point, only rotation about the axis and a slight upward/downward movement are possible. Because the bitmap images are not high resolution, the picture is distorted when enlarged, and because 2D images are attached to constrained 3D surfaces, the result is strong as a 2D image but weak as a space, and risks conveying a distorted impression to the viewer.

In conclusion, conventional VRML-based and panorama VR-based cyber model houses have the problems that the streaming time is longer than a person is psychologically willing to wait and that control within the virtual space is difficult.

An embodiment of the present invention provides a three-dimensional virtual modeling system capable of performing three-dimensional virtual modeling in real time using spatial information, a method thereof, and a computer-readable recording medium on which a program for realizing the method is recorded.

The objects of the present invention are not limited to the above-mentioned objects, and other objects and advantages of the present invention which are not mentioned can be understood by the following description, and will be more clearly understood by the embodiments of the present invention. It will also be readily apparent that the objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.

A three-dimensional virtual modeling system according to an embodiment of the present invention includes: a three-dimensional space recognition unit for recognizing a three-dimensional space; a three-dimensional spatial information acquisition unit for acquiring three-dimensional spatial information from the three-dimensional space recognized by the three-dimensional space recognition unit; a three-dimensional virtual object arrangement unit for arranging a three-dimensional virtual object in the three-dimensional space recognized by the three-dimensional space recognition unit using the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit; a spatial displacement recognition unit for recognizing a displacement value of the three-dimensional space; a three-dimensional spatial information changing unit for changing the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit according to the displacement value recognized by the spatial displacement recognition unit; a three-dimensional virtual object rearrangement unit for rearranging the three-dimensional virtual object arranged by the three-dimensional virtual object arrangement unit using the displacement value recognized by the spatial displacement recognition unit and the three-dimensional spatial information changed by the three-dimensional spatial information changing unit; and a display unit for displaying the "three-dimensional space in which the three-dimensional virtual object is arranged" from the three-dimensional virtual object arrangement unit and displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged" from the three-dimensional virtual object rearrangement unit.

A three-dimensional virtual modeling method in a three-dimensional virtual modeling system according to an embodiment of the present invention includes the steps of: (a) recognizing a three-dimensional space; (b) obtaining three-dimensional spatial information from the recognized three-dimensional space; (c) arranging a three-dimensional virtual object in the recognized three-dimensional space using the obtained three-dimensional spatial information; (d) displaying the "three-dimensional space in which the three-dimensional virtual object is arranged"; (e) recognizing a displacement value of the three-dimensional space; (f) changing the obtained three-dimensional spatial information according to the recognized displacement value; (g) rearranging the arranged three-dimensional virtual object using the recognized displacement value and the changed three-dimensional spatial information; and (h) displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged".

According to an embodiment of the present invention, there is provided a computer-readable recording medium storing a program for causing a three-dimensional virtual modeling system having a processor to perform: recognizing a three-dimensional space; acquiring three-dimensional spatial information from the recognized three-dimensional space; arranging a three-dimensional virtual object in the recognized three-dimensional space using the acquired three-dimensional spatial information; displaying the "three-dimensional space in which the three-dimensional virtual object is arranged"; recognizing a displacement value of the three-dimensional space; changing the acquired three-dimensional spatial information according to the recognized displacement value; rearranging the arranged three-dimensional virtual object using the recognized displacement value and the changed three-dimensional spatial information; and displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged".

According to the embodiment of the present invention, 3D virtual modeling can be performed in real time using spatial information.

Further, according to the embodiment of the present invention, by displaying the "three-dimensional space in which the three-dimensional virtual object is arranged", the user can see the arranged three-dimensional virtual object together with the three-dimensional space, and by displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged", the user can see the three-dimensional virtual object rearranged to follow the three-dimensional space as it changes according to the user's movement.

FIG. 1 is a diagram showing the operating principle of conventional QTVR.
FIG. 2 is a view showing the photographing principle of a conventional panorama VR camera.
FIG. 3 is a diagram showing an example of a conventional QTVR cyber model house.
FIG. 4 is a block diagram of a 3D virtual modeling system using spatial information according to an embodiment of the present invention.
FIG. 5 is a flowchart illustrating a 3D virtual modeling method using spatial information according to an embodiment of the present invention.
FIG. 6 is a view for explaining the concept of space recognition according to an embodiment of the present invention.
FIG. 7 is a view for explaining the concept of space recognition according to movement of a user according to an embodiment of the present invention.
FIGS. 8A to 8F are views showing a case where a three-dimensional virtual object is arranged and rearranged in a three-dimensional space according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings so that a person skilled in the art can easily carry out the technical idea of the present invention. In the following description, well-known functions or constructions are not described in detail where they would obscure the invention in unnecessary detail.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another part in between. Also, when a component is referred to as "comprising" or "including" an element, this does not exclude other elements unless specifically stated to the contrary. In addition, throughout the specification, the description of an element in the singular does not limit the present invention, and a plurality of such elements may be provided.

FIG. 4 is a block diagram of a 3D virtual modeling system using spatial information according to an embodiment of the present invention.

Referring to FIG. 4, a three-dimensional virtual modeling system using spatial information according to an embodiment of the present invention includes: a three-dimensional space recognition unit 401 for recognizing a three-dimensional space; a three-dimensional spatial information acquisition unit 402 for acquiring three-dimensional spatial information from the three-dimensional space recognized by the three-dimensional space recognition unit 401; a three-dimensional virtual object arrangement unit 403 for arranging a three-dimensional virtual object in the three-dimensional space recognized by the three-dimensional space recognition unit 401 using the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit 402; a spatial displacement recognition unit 405 for recognizing a displacement value of the three-dimensional space; a three-dimensional spatial information changing unit 406 for changing the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit 402 according to the displacement value recognized by the spatial displacement recognition unit 405; a three-dimensional virtual object rearrangement unit 407 for rearranging the three-dimensional virtual object arranged by the three-dimensional virtual object arrangement unit 403 using the displacement value recognized by the spatial displacement recognition unit 405 and the three-dimensional spatial information changed by the three-dimensional spatial information changing unit 406; and a display unit 404 for displaying the "three-dimensional space in which the three-dimensional virtual object is arranged" from the three-dimensional virtual object arrangement unit 403 and displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged" from the three-dimensional virtual object rearrangement unit 407.
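
The overall flow through these units can be pictured in code. The following is a minimal Python sketch, assuming very simple placeholder data (dictionaries and coordinate tuples); the function names follow the reference numerals 401 to 407 but are illustrative assumptions, not the patent's actual implementation, and print statements stand in for the display unit 404.

```python
# Minimal sketch of the FIG. 4 pipeline. All data layouts and function names are
# illustrative assumptions; printing stands in for the display unit (404).

def recognize_space(frame):                     # 401: recognize the basic structure
    return {"vertices": frame["vertices"]}

def acquire_space_info(space):                  # 402: vertex/wall/size data
    return {"vertices": space["vertices"], "size": (4.0, 5.0, 2.4)}

def place_object(space_info, name, position):   # 403: put a virtual object in the space
    return {"object": name, "position": position, "space_size": space_info["size"]}

def recognize_displacement(sensor):             # 405: marker tracking or accel/gyro
    return sensor["delta"]                      # e.g. (dx, dy, dz)

def change_space_info(space_info, delta):       # 406: shift the recognized space
    moved = [tuple(v + d for v, d in zip(vert, delta)) for vert in space_info["vertices"]]
    return {**space_info, "vertices": moved}

def rearrange_object(placement, delta):         # 407: move the object by the same delta
    new_pos = tuple(p + d for p, d in zip(placement["position"], delta))
    return {**placement, "position": new_pos}

frame = {"vertices": [(0, 0, 0), (4, 0, 0), (4, 0, 2.4), (0, 0, 2.4)]}
space_info = acquire_space_info(recognize_space(frame))
sofa = place_object(space_info, "sofa", (1.0, 2.0, 0.0))
print("placed:", sofa)                          # 404: display the arranged space

delta = recognize_displacement({"delta": (0.5, 0.0, 0.0)})
space_info = change_space_info(space_info, delta)
sofa = rearrange_object(sofa, delta)
print("rearranged:", sofa)                      # 404: display the rearranged space
```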

Next, the respective components will be described in more detail as follows.

In an embodiment of the present invention, a space may be either a space having an actual physical shape or a virtual space.

Therefore, the three-dimensional space recognition unit 401 can recognize the basic structure of a three-dimensional space from a spatial image or from virtual data. That is, in the case of an actual space, the three-dimensional space recognition unit 401 recognizes the basic structure of the three-dimensional space through a spatial image captured and input by the camera of a device such as a smartphone or notebook computer, and in the case of a virtual space, through the three-dimensional spatial property data of that space. At this time, the three-dimensional space recognition unit 401 recognizes the walls, the floor, and the ceiling three-dimensionally from the spatial image or virtual data. In an embodiment of the present invention, a smartphone or a notebook computer, for example, can be used as the recognition device.

In an embodiment of the present invention, markers may be provided at each vertex of the space (the vertices where the ceiling meets the walls and where the floor meets the walls) for accurate recognition of the actual space. In an embodiment of the present invention, an automatic mode in which the size of the space is recognized automatically by using the markers, or a manual mode in which the exact numerical values (width, height, depth) of the space are entered directly, can be implemented.
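
As an illustration of these two modes, the sketch below derives the real-world scale either automatically from a marker of known physical size or from manually entered dimensions. The 20 cm marker edge and the pixel measurements are assumed values chosen for the example, not figures from the patent.

```python
# Assumed physical edge length of a marker placed at a vertex (20 cm).
MARKER_EDGE_M = 0.20

def scale_from_marker(marker_edge_px, wall_width_px):
    """Automatic mode: metres-per-pixel derived from the known marker size."""
    metres_per_px = MARKER_EDGE_M / marker_edge_px
    return wall_width_px * metres_per_px

def scale_from_manual_input(width_m, height_m, depth_m):
    """Manual mode: the user enters the exact dimensions of the space."""
    return (width_m, height_m, depth_m)

print(scale_from_marker(marker_edge_px=50, wall_width_px=1000))  # -> 4.0 metres
print(scale_from_manual_input(4.0, 2.4, 5.0))
```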

The three-dimensional spatial information acquisition unit 402 can acquire three-dimensional spatial information, such as vertex data, wall surface data, ceiling data, floor data, and space size data, from the three-dimensional space recognized by the three-dimensional space recognition unit 401.
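
A possible container for this spatial information is sketched below; the field names and the sample values are assumptions chosen only to mirror the kinds of data listed above (vertex, wall, ceiling, floor, and size data).

```python
from dataclasses import dataclass

@dataclass
class SpaceInfo:
    vertices: dict   # vertex id -> (x, y, z) coordinates
    walls: dict      # wall id (A, B, C) -> list of vertex ids
    floor: list      # vertex ids bounding the floor (D)
    ceiling: list    # vertex ids bounding the ceiling (E)
    size: tuple      # (width, length, height) in metres

info = SpaceInfo(
    vertices={1: (0, 0, 2.4), 2: (4, 0, 2.4), 3: (0, 0, 0), 4: (4, 0, 0)},
    walls={"A": [1, 2, 3, 4]},
    floor=[3, 4],
    ceiling=[1, 2],
    size=(4.0, 5.0, 2.4),
)
print(info.size)
```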

The three-dimensional virtual object arrangement unit 403 uses the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit 402 to arrange a three-dimensional virtual object (three-dimensional virtual shape) in the three-dimensional space recognized by the three-dimensional space recognition unit 401. To do this, a three-dimensional virtual object to be placed (wallpaper, flooring, lighting fixtures, furniture such as a bed, sofa, or closet, or household appliances such as a refrigerator, washing machine, or TV) is first selected by the user using an input device such as a touch screen or mouse. Then, the position in the three-dimensional space at which the selected three-dimensional virtual object is to be placed is designated by the user using an input device such as a touch screen or mouse. At this time, if the relative size of the three-dimensional virtual object differs from the recognized size of the three-dimensional space, the size of the three-dimensional virtual object can be adjusted through enlargement or reduction.
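
The sketch below illustrates the scale-adjustment step: a selected object is enlarged or reduced so that it fits the recognized size of the space before being placed at the designated position. The fit heuristic (object width as a fraction of room width) is an assumption made purely for this example.

```python
def fit_object_to_space(object_size, space_size, target_ratio=0.2):
    """Scale the object so its width is target_ratio of the recognized room width."""
    scale = (space_size[0] * target_ratio) / object_size[0]
    return tuple(dim * scale for dim in object_size)

def place_object(name, object_size, space_size, position):
    fitted = fit_object_to_space(object_size, space_size)
    return {"object": name, "size": fitted, "position": position}

# A 2.0 m wide sofa placed in a 4.0 m wide recognized space at a user-designated position.
sofa = place_object("sofa", object_size=(2.0, 0.9, 0.8),
                    space_size=(4.0, 5.0, 2.4), position=(1.0, 2.0, 0.0))
print(sofa)
```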

Then, the display unit 404 displays the "three-dimensional space in which the three-dimensional virtual object is arranged" received from the three-dimensional virtual object arrangement unit 403, so that the user can confirm the arranged three-dimensional virtual object together with the three-dimensional space. The display unit 404 may be the display of a device such as a smartphone or notebook computer, a separate display device such as an HMD (Head Mounted Display), an FMD (Face Mounted Display), or a HUD (Head Up Display), or a general TV or stereoscopic TV.

On the other hand, if a change occurs in the recognized space according to the user's movement or change of direction while the "three-dimensional space in which the three-dimensional virtual object is arranged" is displayed as described above, the spatial displacement recognition unit 405 recognizes the displacement value of the three-dimensional space by using a marker or the acceleration sensor and gyro sensor of a smartphone. That is, the spatial displacement recognition unit 405 recognizes the displacement value of the three-dimensional space either by tracking the change in the relative position of the marker in real time or by using the displacement sensed by the acceleration sensor and gyro sensor of the smartphone.
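
Both displacement sources can be sketched in a few lines: the camera displacement is read either from the change in a marker's apparent position or by integrating accelerometer readings over a time step. This is a deliberately simplified model of the idea; gyro-based rotation and drift correction are omitted, and all numbers are assumed.

```python
def displacement_from_marker(prev_marker_pos, curr_marker_pos):
    """Camera displacement is the negative of the marker's apparent shift."""
    return tuple(p - c for p, c in zip(prev_marker_pos, curr_marker_pos))

def displacement_from_imu(accel, dt, velocity=(0.0, 0.0, 0.0)):
    """Double-integrate one acceleration sample over dt (drift ignored here)."""
    new_velocity = tuple(v + a * dt for v, a in zip(velocity, accel))
    delta = tuple(v * dt for v in new_velocity)
    return delta, new_velocity

print(displacement_from_marker((1.0, 0.0, 3.0), (0.8, 0.0, 3.0)))  # camera moved +0.2 in x
delta, velocity = displacement_from_imu(accel=(0.5, 0.0, 0.0), dt=0.1)
print(delta, velocity)
```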

The three-dimensional spatial information changing unit 406 changes the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit 402 to match the displacement value recognized by the spatial displacement recognition unit 405, and the three-dimensional virtual object rearrangement unit 407 rearranges the three-dimensional virtual object arranged by the three-dimensional virtual object arrangement unit 403 using the displacement value recognized by the spatial displacement recognition unit 405 and the three-dimensional spatial information changed by the three-dimensional spatial information changing unit 406. That is, in an embodiment of the present invention, in the case of a smartphone having an acceleration sensor and a gyro sensor, for example, the three-dimensional virtual object arranged in the three-dimensional space is relocated according to the sensed displacement value. Alternatively, when a device such as an HMD, HUD, or FMD is used, the position of the space moves or rotates according to the movement of the user's head, changing the position value, and the three-dimensional virtual object is relocated according to the changed displacement value.
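
One way to picture the changing and rearranging steps is as a rigid transform applied both to the recognized space and to every placed object, so that the objects stay anchored to the space as it moves or rotates with the user's head. The yaw-plus-translation model below is an assumption made only for illustration.

```python
import math

def apply_displacement(point, yaw_rad, delta):
    """Rotate about the vertical axis, then translate (a simplified rigid transform)."""
    x, y, z = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x - s * y + delta[0], s * x + c * y + delta[1], z + delta[2])

space_vertices = [(0, 0, 0), (4, 0, 0), (4, 0, 2.4), (0, 0, 2.4)]
objects = [{"object": "sofa", "position": (1.0, 2.0, 0.0)}]

yaw, delta = math.radians(10), (0.5, 0.0, 0.0)   # recognized head rotation and movement
moved_vertices = [apply_displacement(v, yaw, delta) for v in space_vertices]       # 406
moved_objects = [{**o, "position": apply_displacement(o["position"], yaw, delta)}  # 407
                 for o in objects]
print(moved_vertices[0], moved_objects[0])
```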

Then, the display unit 404 displays the "three-dimensional space in which the three-dimensional virtual object has been rearranged" received from the three-dimensional virtual object rearrangement unit 407, so that the user can see the rearranged three-dimensional virtual object together with the changed three-dimensional space.

FIG. 5 is a flowchart of a three-dimensional virtual modeling method using spatial information according to an embodiment of the present invention. The specific embodiment of the method is as described above. Here, only the operation processing procedure will be briefly described.

First, the three-dimensional space recognition unit 401 recognizes a three-dimensional space (501). That is, the three-dimensional space recognition unit 401 can recognize a basic structure (wall, floor, and ceiling) of a three-dimensional space from a spatial image or virtual data.

Thereafter, the three-dimensional spatial information acquisition unit 402 acquires three-dimensional spatial information from the three-dimensional space recognized by the three-dimensional space recognition unit 401 (502). That is, the three-dimensional spatial information acquisition unit 402 acquires three-dimensional spatial information, such as vertex data, wall surface data, ceiling data, floor data, and space size data, from the three-dimensional space recognized by the three-dimensional space recognition unit 401.

Thereafter, the three-dimensional virtual object arrangement unit 403 arranges a three-dimensional virtual object in the three-dimensional space recognized by the three-dimensional space recognition unit 401 using the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit 402 (503). To do this, a three-dimensional virtual object to be placed (wallpaper, flooring, lighting fixtures, furniture such as a bed, sofa, or closet, or household appliances such as a refrigerator, washing machine, or TV) is first selected by the user using an input device such as a touch screen or mouse. Then, the position in the three-dimensional space at which the selected three-dimensional virtual object is to be placed is designated by the user using an input device such as a touch screen or mouse. At this time, if the relative size of the three-dimensional virtual object differs from the recognized size of the three-dimensional space, the size of the three-dimensional virtual object can be adjusted through enlargement or reduction.

Thereafter, the display unit 404 displays the "three-dimensional space in which the three-dimensional virtual object is arranged" from the three-dimensional virtual object arrangement unit 403 (504). In other words, the display unit 404 displays the "three-dimensional space in which the three-dimensional virtual object is arranged" received from the three-dimensional virtual object arrangement unit 403, so that the user can view the arranged three-dimensional virtual object together with the three-dimensional space.

When a change occurs in the recognized space according to the user's movement or change of direction while the "three-dimensional space in which the three-dimensional virtual object is arranged" is displayed as described above, the spatial displacement recognition unit 405 recognizes the displacement value of the three-dimensional space (505). That is, the spatial displacement recognition unit 405 recognizes the displacement value of the three-dimensional space using a marker or the acceleration sensor and gyro sensor of a smartphone.

Thereafter, the three-dimensional spatial information changing unit 406 changes the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit 402 according to the displacement value recognized by the spatial displacement recognition unit 405 (506).

Thereafter, the three-dimensional virtual object rearrangement unit 407 rearranges the three-dimensional virtual object arranged by the three-dimensional virtual object arrangement unit 403 using the displacement value recognized by the spatial displacement recognition unit 405 and the three-dimensional spatial information changed by the three-dimensional spatial information changing unit 406 (507).

Then, the display unit 404 displays the "three-dimensional space in which the three-dimensional virtual object has been rearranged" from the three-dimensional virtual object rearrangement unit 407 (508). That is, the display unit 404 displays the "three-dimensional space in which the three-dimensional virtual object has been rearranged" received from the three-dimensional virtual object rearrangement unit 407, so that the user can see the rearranged three-dimensional virtual object together with the changed three-dimensional space.

FIG. 6 is a view for explaining the concept of space recognition according to an embodiment of the present invention.

First, the case where a marker is not provided is described as follows (a minimal code sketch of this procedure is given after the list).

- Extract vertices 1, 2, 3, 4 of the space through a recognition device (e.g., a smartphone or notebook computer).

- Form a front wall A through a virtual vector that meets each vertex.

- Form wall surface B through lines B1 and B3 having a certain angle (angle B1, angle B3) from wall A

- Form wall C through lines C2 and C4 having a certain angle (angle C2, angle C4) from wall A

- Form floor D through lines D3 and D4 with a certain angle (angle D3, angle D4) from wall A

- Form ceiling E through lines E1 and E2 with a certain angle (angle E1, angle E2) from wall A

- Enter the values of width (w), length (l) and height (h) to match the scale of the formed space.

- The recognized three-dimensional space corresponding to the actual space is completed.
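
The marker-less procedure above can be sketched as follows: the four extracted vertices fix wall A, the remaining faces are formed by extending back from it, and the entered width/length/height set the scale. The coordinate convention and the way the back face is produced are assumptions made for this example, not the patent's implementation.

```python
def build_space(front_vertices_px, width_m, length_m, height_m):
    # In a full implementation, front_vertices_px (vertices 1-4 in image coordinates)
    # would fix the orientation of wall A; here only the metric box implied by the
    # entered dimensions is constructed. Vertex convention: 1 top-left, 2 top-right,
    # 3 bottom-left, 4 bottom-right of front wall A.
    wall_a = [(0.0, 0.0, height_m), (width_m, 0.0, height_m),
              (0.0, 0.0, 0.0), (width_m, 0.0, 0.0)]
    back = [(x, length_m, z) for (x, _, z) in wall_a]
    return {
        "A": wall_a,                                    # front wall
        "B": [wall_a[0], wall_a[2], back[0], back[2]],  # left wall (vertices 1, 3)
        "C": [wall_a[1], wall_a[3], back[1], back[3]],  # right wall (vertices 2, 4)
        "D": [wall_a[2], wall_a[3], back[2], back[3]],  # floor (vertices 3, 4)
        "E": [wall_a[0], wall_a[1], back[0], back[1]],  # ceiling (vertices 1, 2)
    }

space = build_space(front_vertices_px=[(120, 80), (520, 80), (120, 380), (520, 380)],
                    width_m=4.0, length_m=5.0, height_m=2.4)
print(space["D"])
```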

Next, a case where a marker is provided will be described.

- Extract vertices 1, 2, 3, 4 of space through the recognition device.

- Form a front wall A through a virtual vector that meets each vertex.

- Form wall surface B through lines B1 and B3 having a certain angle (angle B1, angle B3) from wall A

- Form wall C through lines C2 and C4 having a certain angle (angle C2, angle C4) from wall A

- Form floor D through lines D3 and D4 with a certain angle (angle D3, angle D4) from wall A

- Form ceiling E through lines E1 and E2 with a certain angle (angle E1, angle E2) from wall A

- Since the length values of each marker located at vertices 1, 2, 3, and 4 are predefined, the scale of space is calculated without any additional numerical input.

- The recognized three-dimensional space corresponding to the actual space is completed.

FIG. 7 is a view for explaining a concept of space recognition according to movement of a user according to an embodiment of the present invention.

First, a case where a marker is not provided will be described as follows.

- Extract the changed position of the recognition device through the acceleration sensor and gyro sensor of the smartphone (recognition device).

- Extract the vertices 1, 2, 3, 4 of the space.

- Form walls A, B, and C, floor D, and ceiling E through imaginary vectors that meet each vertex.

- Calculate the scale value based on the entered width (w), length (l), and height (h).

- The recognized three-dimensional space corresponding to the position change is completed.

Next, a case where a marker is provided will be described.

- Track the markers at vertices 1, 2, 3 and 4 of the changed space in real time through the recognition device.

- Form walls A, B, and C, floor D, and ceiling E through imaginary vectors that meet each vertex.

- The scale of the space is calculated according to the predetermined length value of each marker.

- The recognized three-dimensional space corresponding to the actual space is completed.

8A to 8F are views showing a case where a three-dimensional virtual object is arranged and rearranged in a three-dimensional space according to an embodiment of the present invention.

FIG. 8A shows a case where the shape of the recognized three-dimensional space is displayed through the display unit 404.

FIG. 8B shows a case in which a three-dimensional virtual object such as a floor material, wallpaper, and a luminaire is selected and placed in a recognized three-dimensional space.

FIG. 8C shows a case where a three-dimensional virtual object such as a TV or a sofa is selected and placed in a recognized three-dimensional space.

FIGS. 8D to 8F show a case where the view of the recognized three-dimensional space and of the three-dimensional virtual objects arranged in it is automatically changed when a change occurs in the recognized space due to the user's movement, such as translation or a change of direction.

Meanwhile, the 3D virtual modeling method using spatial information according to the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention or those known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. The medium may also be a transmission medium, such as an optical or metal line or a waveguide, including a carrier wave carrying a signal designating program instructions, data structures, and the like. Examples of program instructions include not only machine language code such as that generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; various substitutions, modifications, and variations are possible without departing from the spirit of the invention. Therefore, the scope of the present invention should not be construed as being limited to the described embodiments, but should be determined by the appended claims and their equivalents.

401: three-dimensional space recognition unit 402: three-dimensional spatial information acquisition unit
403: three-dimensional virtual object arrangement unit 404: display unit
405: spatial displacement recognition unit 406: three-dimensional spatial information changing unit
407: three-dimensional virtual object rearrangement unit

Claims (11)

In a three-dimensional virtual modeling system,
A three-dimensional space recognition unit for recognizing a three-dimensional space;
A three-dimensional spatial information acquisition unit for acquiring three-dimensional spatial information from the three-dimensional space recognized by the three-dimensional spatial recognition unit;
A three-dimensional virtual object arrangement unit for arranging a three-dimensional virtual object in the three-dimensional space recognized by the three-dimensional spatial recognition unit using the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit;
A spatial displacement recognition unit for recognizing a displacement value of the three-dimensional space;
A three-dimensional spatial information changing unit for changing the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit according to the displacement value recognized by the spatial displacement recognition unit;
A three-dimensional virtual object rearrangement unit for rearranging the three-dimensional virtual object arranged by the three-dimensional virtual object arrangement unit using the displacement value recognized by the spatial displacement recognition unit and the three-dimensional spatial information changed by the three-dimensional spatial information changing unit; And
A display unit for displaying the "three-dimensional space in which the three-dimensional virtual object is arranged" from the three-dimensional virtual object arrangement unit and displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged" from the three-dimensional virtual object rearrangement unit.
The system according to claim 1,
wherein the three-dimensional space recognition unit recognizes the basic structure of the three-dimensional space from a spatial image or virtual data.
The system according to claim 1,
wherein the three-dimensional space recognition unit recognizes the three-dimensional space using markers provided at each vertex of the space.
The system according to claim 1,
wherein the three-dimensional virtual object arrangement unit, after a three-dimensional virtual object to be placed in the three-dimensional space is selected and a position in the three-dimensional space at which the selected three-dimensional virtual object is to be arranged is designated, arranges the selected three-dimensional virtual object at the designated position using the three-dimensional spatial information acquired by the three-dimensional spatial information acquisition unit.
The system according to claim 1,
wherein the spatial displacement recognition unit recognizes the displacement value of the three-dimensional space using a marker or an acceleration sensor and a gyro sensor of a recognition device.
A three-dimensional virtual modeling method in a three-dimensional virtual modeling system,
(a) recognizing a three-dimensional space;
(b) obtaining three-dimensional spatial information from the recognized three-dimensional space;
(c) arranging a three-dimensional virtual object in the recognized three-dimensional space using the obtained three-dimensional spatial information;
(d) displaying the "three-dimensional space in which the three-dimensional virtual object is arranged";
(e) recognizing a displacement value of the three-dimensional space;
(f) changing the obtained three-dimensional spatial information according to the recognized displacement value;
(g) relocating the disposed three-dimensional virtual objects using the recognized displacement value and the changed three-dimensional spatial information; And
(h) displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged".
The method according to claim 6,
wherein the step (a) comprises recognizing the basic structure of the three-dimensional space from a spatial image or virtual data.
The method according to claim 6,
wherein the step (a) comprises recognizing the three-dimensional space using markers provided at each vertex of the space.
The method according to claim 6,
wherein the step (c) comprises:
selecting a three-dimensional virtual object to be placed in the three-dimensional space;
receiving a position in the three-dimensional space at which the selected three-dimensional virtual object is to be arranged; and
arranging the selected three-dimensional virtual object at the designated position in the three-dimensional space using the obtained three-dimensional spatial information.
The method according to claim 6,
wherein the step (e) comprises recognizing the displacement value of the three-dimensional space using a marker or an acceleration sensor and a gyro sensor of a recognition device.
A computer-readable recording medium having recorded thereon a program for causing a three-dimensional virtual modeling system having a processor to perform:
Recognizing a three-dimensional space;
Acquiring three-dimensional spatial information from the recognized three-dimensional space;
Arranging a three-dimensional virtual object in the recognized three-dimensional space using the obtained three-dimensional spatial information;
Displaying the "three-dimensional space in which the three-dimensional virtual object is arranged";
Recognizing a displacement value of the three-dimensional space;
Changing the obtained three-dimensional spatial information according to the recognized displacement value;
Rearranging the arranged 3D virtual objects using the recognized displacement values and the changed 3D spatial information; And
Displaying the "three-dimensional space in which the three-dimensional virtual object has been rearranged".
KR20130061841A 2013-05-30 2013-05-30 3d virtual modeling system using spatial information and method thereof KR20140145217A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR20130061841A KR20140145217A (en) 2013-05-30 2013-05-30 3d virtual modeling system using spatial information and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR20130061841A KR20140145217A (en) 2013-05-30 2013-05-30 3d virtual modeling system using spatial information and method thereof

Publications (1)

Publication Number Publication Date
KR20140145217A true KR20140145217A (en) 2014-12-23

Family

ID=52675066

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130061841A KR20140145217A (en) 2013-05-30 2013-05-30 3d virtual modeling system using spatial information and method thereof

Country Status (1)

Country Link
KR (1) KR20140145217A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9892323B2 (en) 2016-01-05 2018-02-13 Electronics And Telecommunications Research Institute Augmented reality device based on recognition of spatial structure and method thereof
WO2018164287A1 (en) * 2017-03-06 2018-09-13 라인 가부시키가이샤 Method and device for providing augmented reality, and computer program
KR20190000069A (en) * 2017-06-22 2019-01-02 한국전자통신연구원 Method for providing virtual experience contents and apparatus using the same
WO2019212129A1 (en) * 2018-05-04 2019-11-07 디프트 주식회사 Virtual exhibition space providing method for efficient data management
WO2019216528A1 (en) * 2018-05-08 2019-11-14 디프트 주식회사 Method for providing virtual exhibition space by using 2.5-dimensionalization
WO2020091182A1 (en) * 2018-10-30 2020-05-07 삼성전자주식회사 Electronic device for providing image data using augmented reality and control method for same
KR102214204B1 (en) 2019-12-26 2021-02-10 한국건설기술연구원 Column element generating system for generating 3-dimensional (3d) model of the building using 3d scanning, and method for the same
KR102314556B1 (en) * 2021-06-09 2021-10-19 대아티아이 (주) Object Editor Based 3D Station Integrated Supervisory System
US11232306B2 (en) 2018-11-28 2022-01-25 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
KR102430041B1 (en) 2022-04-22 2022-08-08 (주)브리콘랩 facility inspection system and electronic device operating method that inspects facilities using augmented reality technology
KR20220125982A (en) * 2021-03-08 2022-09-15 주식회사 지엔아이티 3D digital twin dam safety management system and method using 3D modeling

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9892323B2 (en) 2016-01-05 2018-02-13 Electronics And Telecommunications Research Institute Augmented reality device based on recognition of spatial structure and method thereof
US11120629B2 (en) 2017-03-06 2021-09-14 Line Corporation Method and device for providing augmented reality, and computer program
WO2018164287A1 (en) * 2017-03-06 2018-09-13 라인 가부시키가이샤 Method and device for providing augmented reality, and computer program
US11562545B2 (en) 2017-03-06 2023-01-24 Line Corporation Method and device for providing augmented reality, and computer program
KR20190000069A (en) * 2017-06-22 2019-01-02 한국전자통신연구원 Method for providing virtual experience contents and apparatus using the same
WO2019212129A1 (en) * 2018-05-04 2019-11-07 디프트 주식회사 Virtual exhibition space providing method for efficient data management
US11651556B2 (en) 2018-05-04 2023-05-16 Dift Corporation Virtual exhibition space providing method for efficient data management
US11250643B2 (en) 2018-05-08 2022-02-15 Dift Corporation Method of providing virtual exhibition space using 2.5-dimensionalization
KR20190128393A (en) * 2018-05-08 2019-11-18 디프트(주) Method of providing virtual exhibition space using 2.5 dimensionalization
WO2019216528A1 (en) * 2018-05-08 2019-11-14 디프트 주식회사 Method for providing virtual exhibition space by using 2.5-dimensionalization
KR20200048714A (en) * 2018-10-30 2020-05-08 삼성전자주식회사 Electronic apparatus for providing image data by using augmented reality and control method thereof
WO2020091182A1 (en) * 2018-10-30 2020-05-07 삼성전자주식회사 Electronic device for providing image data using augmented reality and control method for same
US11867495B2 (en) 2018-10-30 2024-01-09 Samsung Electronics Co., Ltd. Electronic device for providing image data using augmented reality and control method for same
US11232306B2 (en) 2018-11-28 2022-01-25 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
KR102214204B1 (en) 2019-12-26 2021-02-10 한국건설기술연구원 Column element generating system for generating 3-dimensional (3d) model of the building using 3d scanning, and method for the same
KR20220125982A (en) * 2021-03-08 2022-09-15 주식회사 지엔아이티 3D digital twin dam safety management system and method using 3D modeling
KR102314556B1 (en) * 2021-06-09 2021-10-19 대아티아이 (주) Object Editor Based 3D Station Integrated Supervisory System
KR102430041B1 (en) 2022-04-22 2022-08-08 (주)브리콘랩 facility inspection system and electronic device operating method that inspects facilities using augmented reality technology

Similar Documents

Publication Publication Date Title
KR20140145217A (en) 3d virtual modeling system using spatial information and method thereof
US10755485B2 (en) Augmented reality product preview
US20230016490A1 (en) Systems and methods for virtual and augmented reality
JP6246914B2 (en) Augmented reality content generation for unknown objects
CN105637559B (en) Use the structural modeling of depth transducer
CN107590771A (en) With the 2D videos for the option of projection viewing in 3d space is modeled
JP2016516241A (en) Mapping augmented reality experiences to different environments
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
JP2014504384A (en) Generation of 3D virtual tour from 2D images
AU2016336030A1 (en) Volumetric depth video recording and playback
KR102158324B1 (en) Apparatus and method for generating point cloud
JP2022077148A (en) Image processing method, program, and image processing system
KR20180120456A (en) Apparatus for providing virtual reality contents based on panoramic image and method for the same
US11217002B2 (en) Method for efficiently computing and specifying level sets for use in computer simulations, computer graphics and other purposes
KR20090000729A (en) System and method for web based cyber model house
JP6152888B2 (en) Information processing apparatus, control method and program thereof, and information processing system, control method and program thereof
CN109710054B (en) Virtual object presenting method and device for head-mounted display equipment
US11756260B1 (en) Visualization of configurable three-dimensional environments in a virtual reality system
KR101428577B1 (en) Method of providing a 3d earth globes based on natural user interface using motion-recognition infrared camera
JP2017084215A (en) Information processing system, control method thereof, and program
Trapp et al. Communication of digital cultural heritage in public spaces by the example of roman cologne
CN108920598A (en) Panorama sketch browsing method, device, terminal device, server and storage medium
KR101159705B1 (en) An object guiding system for producing virtual reality based on the billboard mapping and the method thereof
Asiminidis Augmented and Virtual Reality: Extensive Review
Lee et al. Mirage: A touch screen based mixed reality interface for space planning applications

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E601 Decision to refuse application