KR20180070082A - Vr contents generating system - Google Patents
- Publication number
- KR20180070082A (application number KR1020160172343A)
- Authority
- KR
- South Korea
- Prior art keywords
- image
- turntable
- user
- server
- editing
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/232—Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/356—Image reproducers having separate monoscopic and stereoscopic modes
- H04N13/359—Switching between monoscopic and stereoscopic modes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Tourism & Hospitality (AREA)
- Computer Graphics (AREA)
- General Engineering & Computer Science (AREA)
- Primary Health Care (AREA)
- Computer Hardware Design (AREA)
- General Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Health & Medical Sciences (AREA)
- Economics (AREA)
- General Health & Medical Sciences (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Architecture (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
A VR content generation system according to the present invention includes an image pickup unit for capturing 2D images of a target object; a server for receiving the 2D images, providing a 3D image editing environment, and generating the edited 3D image as VR content; and a terminal for transmitting the 2D images captured by the image pickup unit to the server and transmitting image editing commands.
Description
The present invention relates to a VR content generation system, and more particularly, to a VR content generation system capable of generating and providing VR content from a plurality of 2D images, received from a user, that capture an object from various angles.
Virtual Reality (VR) is a technology that provides a computer-generated three-dimensional environment with which people can interact as if it were the real environment. Recently, VR technology has been applied in various fields such as video, advertising, medical care, and education, beyond entertainment and gaming.
In the field of advertising, products have conventionally been presented to consumers as two-dimensional planar pictures or videos, and consumers select desired products from them. However, because this method bases the purchase decision on only the few views of the product prepared by the advertisement producer, differences from the delivered product markedly reduce purchase satisfaction and can even lead to returns.
Recently, technologies for creating advertisements using virtual reality have been developed to more effectively inform consumers of products.
Laid-open Patent Publication No. 2002-0005336 discloses an advertising method for browsing advertised products using virtual reality modeling: a virtual three-dimensional space containing information on a movement route is output on the screen, an advertisement is selected from an advertisement selection window classified by category, and the selected advertisement is displayed.
As another example of advertising using VR, the virtual-reality-space advertising technique of Unexamined Patent Application Publication No. 2000-0037114 provides users with a virtual space modeled on a real-life home, populated with product models, company logos, and product names supplied by advertisers, so that the user can decorate the inside of the house with them.
An object of the present invention is to provide a VR content generation system capable of generating VR content by converting a 2D image taken by a user into a 3D image.
An object of the present invention is to provide a VR content creation system capable of verifying a product at various points of time in a virtual reality space.
The problems to be solved by the present invention are not limited to the above-mentioned problems. Other technical subjects not mentioned will be apparent to those skilled in the art from the following description.
A VR content generation system according to the present invention includes an image pickup unit for capturing 2D images of a target object; a server for receiving the 2D images, providing a 3D image editing environment, and generating the edited 3D image as VR content; and a terminal for transmitting the 2D images captured by the image pickup unit to the server and transmitting image editing commands.
In addition, the image generating apparatus includes a turntable installed on the upper side of a body, a rotating frame installed on the side of the body and rotating about a lateral axis, a camera mount installed on the rotating frame, and a control unit; the camera mount is installed so as to always face the center of the turntable while the frame rotates.
The server of the VR contents creation system according to the present invention includes a user information unit for receiving user identification information and object identification information, a database for recording user information, 2D image, 3D image, VR content, An editing screen providing unit for providing an editing screen for the 3D image, and a VR content generating unit for converting the 3D image into the VR content.
Also, the editing screen providing unit includes a viewer editing module and an object correction module; the viewer editing module performs image rotation, rotation direction adjustment, and rotation axis adjustment functions, and the object correction module performs object background removal and image color correction.
In addition, the object background removal includes an image sharpening step, a binarization step, an outline detection step, and a background separation step, and lets the user select the outline detection algorithm.
According to the present invention, it is possible to improve the user's satisfaction with the VR content by directly editing the image of the product taken by the user.
In addition, it is possible to show a plurality of images photographed while rotating a product at various angles at a desired speed and angle as if it is a three-dimensional image.
In addition, since the background image excluding the product can be removed from the photographed image, the concentration of the product in the provided VR content can be increased.
In addition, a detailed description of the product can be added by providing a tag at a specific location on the rotating merchandise.
FIG. 1 shows a VR content generation system according to the present invention.
FIG. 2 shows an image generating apparatus equipped with an image pickup unit according to the present invention.
FIG. 3 shows positions at which images of a target article are captured using the image generating apparatus.
FIG. 4 shows a server according to the present invention.
FIG. 5 shows an editing screen providing unit according to the present invention.
FIG. 6 illustrates a background removal process according to the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The embodiments disclosed in the following description may have various applications.
FIG. 1 shows a VR content generation system according to the present invention.
The VR content generation system according to the present invention includes an image pickup unit 100 with which a user captures a product to obtain 2D images, and a server 200 that receives the captured 2D images, provides a 3D image editing environment, and generates VR content.
In addition, the VR contents generation system according to the present invention includes a terminal 300 that transmits a 2D image picked up by the image pickup section to a server and transmits an image edit command.
The imaging unit 100 captures a plurality of 2D images of a target article at various angles. The imaging unit may be a camera or a smart terminal: the target article is fixed in place, and the imaging unit is repositioned to various angles to capture 2D images of it.
The image pickup unit is mounted on an image generating apparatus that can rotate a target article and pick up a 2D image at various angles around a target article to be rotated.
FIG. 2 shows an image generating apparatus equipped with an image pickup unit according to the present invention, and FIG. 3 shows positions where images of the target article are captured using the image generating apparatus.
Referring to FIG. 2, the image generating apparatus according to the present invention includes a turntable 10, a camera mounting part 30, and a control unit.
A camera or a smartphone is fixedly coupled to the camera mount. The camera mount can be raised or lowered by a predetermined angle for each full rotation of the turntable. Accordingly, when the object is placed on the turntable, the imaging distance from the object to the camera (or smartphone) remains constant.
All the 2D images captured by the camera share the same center, so that even after conversion into a 3D image the object rotates about a fixed center, which makes observation easier for the user.
As another embodiment, a camera can be used to image a target article. The object is fixedly placed, and the user uses the camera to pick up images of various angles.
The controller controls the rotation of the turntable of the image generating apparatus and the rotation of the camera mount. When the image photographing mode is selected, the controller rotates the turntable according to an input signal in preset rotation-angle units, for example units of 5 degrees or 10 degrees, and outputs a photographing signal to the camera after every unit rotation.
Specifically, the camera mount is first located at the first height. The turntable rotates 10 degrees and then stops, and the camera captures the object at the first height. The turntable then rotates another 10 degrees. When this alternation of turntable rotation and camera capture has been repeated through one full turn, 36 images have been acquired at the first height. Once all images at the first height are taken, the control unit raises the camera mount to a second height above the first. The turntable again makes one full rotation, 36 images are acquired at the second height, and the camera mount is then raised from the second height to a third height.
In the present invention the turntable rotates in 10-degree increments, stopping after each increment, but it may instead rotate continuously without stopping. In that case, imaging can be performed by setting the camera installed on the camera mount to continuous shooting mode.
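The stepped capture procedure above (rotate the turntable by a fixed increment, photograph, repeat at each camera-mount height) can be sketched as follows. This is an illustrative sketch rather than code from the patent; the 10-degree step and 8 heights are the example values given in the text.

```python
def capture_schedule(step_deg=10, num_heights=8):
    """Enumerate (height_index, angle) imaging positions for one object.

    A full turntable revolution at a `step_deg` increment yields
    360 // step_deg shots per height (36 shots for 10-degree steps).
    """
    if 360 % step_deg != 0:
        raise ValueError("step must divide 360 evenly")
    return [(h, a)
            for h in range(1, num_heights + 1)    # heights C1..C8
            for a in range(0, 360, step_deg)]     # 0, 10, ..., 350

schedule = capture_schedule()
# 36 shots per height x 8 heights = 288 images in total
```

With continuous rotation instead, the same schedule would simply be read as time stamps for a burst-mode camera rather than as stop positions.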
In addition, to capture the lower portion of a target article, a transparent support may be placed on the turntable and the article placed on the transparent support for imaging.
FIG. 3 shows the imaging positions. When the number of unit heights is set to 8, images are captured from the lowest position at the first height C1 to the highest position at the eighth height C8. The drawing shows the object being captured at various angles at the fifth height C5; each point represents an imaging angle position.
In the image capture according to the present invention, background imaging can be performed before imaging the target article. Background imaging means capturing the scene without the target article placed on the turntable. It is performed while the turntable is stopped, and one background image is obtained for each of the heights C1 to C8. In the drawing, H1 to H8 denote the points where background imaging is performed. After background imaging is done, the target article is placed on top of the turntable and imaging of the article begins. The background images may be used in image processing to separate the background from the 2D images: for example, an image of the target article alone can be obtained by superimposing a captured image of the article on the corresponding background image and deleting the matching background from it.
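The background-subtraction idea above can be sketched as a pixel-wise comparison between a shot and the background image captured at the same height. The grayscale values and the tolerance below are illustrative assumptions, not values from the patent.

```python
def subtract_background(shot, background, tol=10):
    """Return a mask: True where the object differs from the background.

    `shot` and `background` are same-sized 2D lists of grayscale values
    taken at the same height position (C1..C8 in the text).
    """
    return [[abs(s - b) > tol for s, b in zip(srow, brow)]
            for srow, brow in zip(shot, background)]

background = [[200, 200], [200, 200]]
shot       = [[200,  50], [ 60, 200]]   # object pixels are darker
mask = subtract_background(shot, background)
# mask -> [[False, True], [True, False]]
```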
The captured image of the target article may carry identification information, including capture-position information, in its filename. For example, in the image ITG_P025_256.jpg, ITG is the object identification code, 025 means the image was taken at a downward angle of 25 degrees, and 256 means it is the 256th captured image.
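Such a filename can be parsed mechanically; note that the exact field layout below is an assumption generalized from the single example ITG_P025_256.jpg given in the text.

```python
import re

# Assumed pattern: <object-code>_P<angle>_<index>.jpg
_NAME = re.compile(r"^(?P<code>[A-Z]+)_P(?P<angle>\d{3})_(?P<index>\d+)\.jpg$")

def parse_capture_name(filename):
    """Split a captured-image filename into its identification fields."""
    m = _NAME.match(filename)
    if m is None:
        raise ValueError(f"unrecognized capture filename: {filename!r}")
    return {
        "object_code": m.group("code"),        # e.g. 'ITG'
        "angle_deg": int(m.group("angle")),    # downward angle, e.g. 25
        "image_index": int(m.group("index")),  # e.g. the 256th image
    }

info = parse_capture_name("ITG_P025_256.jpg")
# {'object_code': 'ITG', 'angle_deg': 25, 'image_index': 256}
```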
In the present invention, images captured by the image generating apparatus are used as the captured images of the target article, but such images may also be obtained using other kinds of imaging devices.
The captured images according to the present invention may also be obtained by the user photographing the target article from various angles with a portable terminal. In this case, the user keeps the target article fixed in place and captures a plurality of images while moving around its periphery.
As described above, it suffices that the captured images according to the present invention show the target article from various angles and directions.
The 2D images thus picked up are transmitted to the terminal. The terminal transmits and receives data to and from the server. The terminal may be a smart phone, a PDA, a notebook computer, or a user PC.
Meanwhile, in place of the camera coupled to the camera mount, a terminal including a camera may be coupled.
The 2D image picked up by the image pickup unit is transmitted to the server together with the user information through the terminal.
The server 200 converts the captured 2D image into a 3D image and generates VR content. The server receives the 2D image from the terminal. The server also receives a user command from the terminal. The server converts the 2D image according to a user command and generates VR contents. The server may output the generated VR content to the terminal upon request of the terminal.
FIG. 4 shows a server according to the present invention.
The server 200 includes a user information unit 210, a database 220, an editing screen providing unit 230, and a VR content generating unit 240.
The user information unit 210 receives user identification information and object identification information from a user terminal. The user information unit is connected to a database and displays related contents of the user.
The user identification information may include a user's ID. The user ID is associated with the user's name, e-mail, telephone number, and the like. The object identification information may include an object identification code, a height angle, and an image index.
The database 220 records user information, a 2D image, a converted 3D image, and VR contents transmitted from a user's terminal.
The editing screen providing unit 230 receives the 2D image stored in the database and provides the editing screen to the user terminal.
The VR contents generation unit 240 converts the 3D image into VR contents when the editing is completed according to the input instruction of the user. The converted VR contents are stored in the database.
FIG. 5 shows an editing screen providing unit according to the present invention.
The edit screen providing unit 230 according to the present invention includes a viewer edit module 231 and an object correction module 232.
The viewer editing module 231 includes an image rotation function, a rotation direction adjustment function, and a rotation axis adjustment function.
The image rotation function allows the object in the VR image to be rotated by the user's touch or drag operation. The rotation direction adjustment function determines the rotation direction in the viewer: for example, dragging from left to right on the screen shows the target article as seen from the right side. The dragging speed can also be detected, so that the displayed rotation speed is proportional to the left-to-right drag speed.
On the other hand, it is not easy to place the object at the precise center point of the turntable. The rotation axis adjustment function therefore adjusts the rotation axis so that the plurality of captured 2D images share the same imaging center, and the article does not appear to wobble as it rotates. A stable, balanced rotation viewer can thus be provided.
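One simple way to realize such a rotation-axis adjustment is to shift each frame so that the centroid of the object region lands on a common imaging center. This centroid-alignment approach is an illustrative assumption, not the algorithm stated in the patent.

```python
def centroid(mask):
    """Centroid (row, col) of True pixels in a 2D boolean object mask."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def alignment_shift(mask, target):
    """Integer (dy, dx) that moves the mask centroid onto `target`."""
    cy, cx = centroid(mask)
    return (round(target[0] - cy), round(target[1] - cx))

# object sits at the top-left; shift it toward the frame centre (1, 1)
frame = [[True, False, False],
         [False, False, False],
         [False, False, False]]
dy, dx = alignment_shift(frame, target=(1, 1))
# (dy, dx) -> (1, 1): move the frame one pixel down and one right
```

Applying the same target center to every frame in the sequence yields images that rotate about a fixed axis without visible vibration.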
The object correction module includes an object background removal function and an image color correction function.
The object background removal function removes a background excluding a target object from a captured image. A detailed description thereof will be described later.
The image color correction function corrects shading and blur in the image, adjusting hue and contrast to obtain a sharp image. For example, color correction may be implemented by applying effects such as a smart filter, a miniature filter, or a tilt-shift filter.
The image color correction function can be applied selectively before or after the background removal function. Applied beforehand, it acts as a preprocessing step that sharpens the boundary used for background removal. Applied afterward, it gives the object a clear, vivid appearance, so that the viewer has the impression of looking at a real three-dimensional object on the screen.
FIG. 6 illustrates a background removal process according to the present invention.
The background removal function according to the present invention may include an image sharpening step, a binarization step, an outline detection step, and a background separation step.
The image sharpening step and the binarization step are preprocessing steps for outline detection. The sharpening step strongly corrects the original image so that the object can be separated from the background: for example, the entire image is first blurred and then corrected back to a sharp image through tone contrast, which clarifies the boundary between the object and the background.
The binarization step discards the RGB color information of the captured image and extracts only the contrast information, simplifying and thereby shortening the subsequent image processing. For example, the image may be converted to an 8-bit monochrome image. After the binarization step, a neural processing step can be performed, so that the boundary between the object and the background is easily obtained.
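A minimal sketch of this step follows: drop the RGB information, keep only the contrast, and reduce the result to two levels. The luma weights and the threshold of 128 are common conventions assumed here, not values from the patent.

```python
def to_gray(rgb_image):
    """Luma-weighted grayscale conversion, dropping RGB color info."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Reduce an 8-bit grayscale image to a two-level image."""
    return [[255 if p >= threshold else 0 for p in row]
            for row in gray_image]

rgb = [[(255, 255, 255), (10, 10, 10)]]   # one bright and one dark pixel
binary = binarize(to_gray(rgb))
# binary -> [[255, 0]]
```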
The outline detection step detects discontinuities in pixel intensity or density as outlines. Outline detection can be performed using the Sobel edge algorithm or the Canny edge algorithm, and the algorithm can be selected by the user on the editing screen of the terminal. In particular, edge detection using the Sobel operator, though slower in this configuration, shows detection performance that is sensitive to brightness.
In the present invention, the outline is detected using the Sobel edge operator. The Sobel operator computes derivatives in the X and Y directions using a vertical mask and a horizontal mask, and combines them to obtain the edge magnitude E(x, y) and the gradient direction θ at each point (x, y).
In the present invention, the Sobel threshold is set to a low value. A lower threshold includes more neighboring pixels around the outline and requires more computation, but allows the boundary to be extracted clearly. For the subsequent steps, the gradient direction is rounded to one of a plurality of quantized angular directions.
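The Sobel computation described above can be sketched as follows: convolve the horizontal and vertical 3x3 masks, combine the two derivatives into the edge magnitude E(x, y) and gradient direction θ, and round θ to quantized angular bins. The 45-degree bin width is an assumption for illustration.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal mask
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical mask

def sobel_at(img, y, x):
    """Edge magnitude and quantized direction at interior pixel (y, x)."""
    gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    magnitude = math.hypot(gx, gy)             # E(x, y)
    theta = math.degrees(math.atan2(gy, gx))   # gradient direction
    quantized = round(theta / 45.0) * 45       # round to 45-degree bins
    return magnitude, quantized

# vertical step edge: dark left columns, bright right column
img = [[0, 0, 255],
       [0, 0, 255],
       [0, 0, 255]]
mag, direction = sobel_at(img, 1, 1)
# gx = 255*(1+2+1) = 1020, gy = 0 -> magnitude 1020.0, direction 0
```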
When outline detection is complete, the region inside the outline is defined as the target article area and the region outside it as the background area. In the background separation step, the area defined as background is removed, leaving an area containing only the target article: the detected-outline image is superimposed on the original image, and everything outside the detected outline area is deleted from the original.
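The separation itself (keep what is inside the detected outline, remove what is outside) can be sketched as applying a boolean object mask to the image; filling the background with zeros is an illustrative choice, not mandated by the patent.

```python
def separate_background(image, object_mask, fill=0):
    """Keep only pixels inside the object region; blank out the background.

    `object_mask[y][x]` is True for pixels inside the detected outline.
    """
    return [[px if inside else fill
             for px, inside in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, object_mask)]

image = [[90, 120], [130, 80]]
mask  = [[False, True], [True, False]]   # interior of the outline
result = separate_background(image, mask)
# result -> [[0, 120], [130, 0]]
```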
When background separation is complete, the series of 2D images is assembled into one 3D image: the plurality of 2D images captured of one target article are combined so that, in response to the user's input commands, they give the visual effect of a 3D image of the article.
For example, when the user drags with a finger on the terminal, a number of images proportional to the dragged distance are output in succession.
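This drag behavior can be sketched as mapping the horizontal drag distance to an index into the circular sequence of captured frames; the 12-pixels-per-frame sensitivity is an assumed tuning parameter, not a value from the patent.

```python
def frame_after_drag(current_frame, drag_px, num_frames=36, px_per_frame=12):
    """Next frame index after a horizontal drag.

    Positive `drag_px` (left-to-right) advances the sequence so the
    object appears to rotate; the index wraps around a full turn.
    """
    steps = drag_px // px_per_frame
    return (current_frame + steps) % num_frames

frame_after_drag(0, 120)    # 120 px at 12 px per frame -> frame 10
frame_after_drag(35, 24)    # wraps past a full revolution -> frame 1
```

A drag in the opposite direction (negative `drag_px`) steps backward through the sequence, reversing the apparent rotation.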
When the editing is completed in this manner, the edited 3D image is converted into the VR content. The conversion creates a VR content file from the 3D image using the user identification information of the user information and the object identification information.
The user can access the server through the terminal, download the VR content, and use it for advertisement.
With the target article images used in conventional advertising, the purchase decision must be made from only a few images photographed at preset angles. The VR content according to the present invention lets the consumer observe the product from any desired position and direction, increasing understanding of the target article.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the present invention is not limited to the disclosed exemplary embodiments, but various changes and modifications may be made by those skilled in the art without departing from the scope of the present invention.
10: Turntable
20:
30: Camera mounting part
100: Image pickup unit
200: Server
210: User Information Section
220: Database
230: Editing screen providing unit
231: viewer edit module
232: object correction module
240: VR content generation unit
300: Terminal
Claims (5)
A VR content generation system comprising: an image pickup unit for capturing a 2D image of a target object; a server for receiving the 2D image, providing a 3D image editing environment, and generating an edited 3D image as VR content; and a terminal for transmitting the 2D image captured by the image pickup unit to the server and transmitting an image edit command.
The VR content generation system wherein the image generating apparatus includes a turntable installed on an upper side of a body, a rotating frame installed on a side surface of the body and rotated about a lateral axis, a camera mount, and a control unit, and wherein the camera mount is installed so as to always face the center of the turntable during rotation.
The VR content generation system wherein the server includes a user information unit for receiving user identification information and object identification information, a database for recording user information, the 2D image, the 3D image, and the VR content, an editing screen providing unit for providing an editing screen for the 2D image, and a VR content generation unit for converting the 3D image into the VR content.
The VR content generation system wherein the editing screen providing unit comprises a viewer editing module and an object correction module, the viewer editing module performing image rotation, rotation direction adjustment, and rotation axis adjustment, and the object correction module performing object background removal and image color correction.
The VR content generation system wherein the object background removal comprises an image sharpening step, a binarization step, an outline detection step, and a background separation step, and provides the user with a selectable outline detection algorithm.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160172343A KR20180070082A (en) | 2016-12-16 | 2016-12-16 | Vr contents generating system |
PCT/KR2017/010700 WO2018110810A1 (en) | 2016-12-16 | 2017-09-27 | Vr content creation system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160172343A KR20180070082A (en) | 2016-12-16 | 2016-12-16 | Vr contents generating system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020180117051A Division KR20180113944A (en) | 2018-10-01 | 2018-10-01 | Vr contents generating system |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20180070082A true KR20180070082A (en) | 2018-06-26 |
Family
ID=62559570
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160172343A KR20180070082A (en) | 2016-12-16 | 2016-12-16 | Vr contents generating system |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20180070082A (en) |
WO (1) | WO2018110810A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210047216A (en) * | 2019-10-21 | 2021-04-29 | 이혁락 | Apparatus, system and method for producing virtual reality contents |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021204305A2 (en) * | 2021-07-20 | 2021-10-14 | 郑州航空工业管理学院 | Ar interaction apparatus based on digital twins |
CN116614617B (en) * | 2023-05-29 | 2024-03-19 | 广东横琴全域空间人工智能有限公司 | Multi-view three-dimensional modeling method, system, automation equipment and shooting terminal |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20000030712A (en) | 2000-03-13 | 2000-06-05 | 김관민 | A method of cyber virtual reality advertising by use photography |
KR20000037114A (en) | 2000-04-07 | 2000-07-05 | 송찬호 | commercial method for internet virtual reality |
KR20020005336A (en) | 2000-07-10 | 2002-01-17 | 이성열 | Advertisement method and apparatus by virtual reality modelling in internet |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003296587A (en) * | 2002-04-03 | 2003-10-17 | Yappa Corp | Virtual art museum system using 3d image |
KR20040049832A (en) * | 2004-05-07 | 2004-06-12 | 주식회사 오피브이알솔루션 | Method and Apparatus for Visualization and Manipulation of Real 3-D Objects in Networked Environments |
EP2313847A4 (en) * | 2008-08-19 | 2015-12-09 | Digimarc Corp | Methods and systems for content processing |
KR101603247B1 (en) * | 2014-03-26 | 2016-03-15 | (주)제일윈도텍스 | System of Virtual-Reality blind catalog containing authoring and simulating of blind products, and method for providing virtual-Reality blind catalog thereof |
KR20150129260A (en) * | 2014-05-09 | 2015-11-19 | 주식회사 아이너지 | Service System and Method for Object Virtual Reality Contents |
KR20160001120A (en) * | 2014-06-26 | 2016-01-06 | (주)지쓰리 | system and method for controlling turntable and capturing image for taking 3-d picture |
- 2016-12-16: KR application KR1020160172343A, publication KR20180070082A (active, Search and Examination)
- 2017-09-27: WO application PCT/KR2017/010700, publication WO2018110810A1 (active, Application Filing)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20000030712A (en) | 2000-03-13 | 2000-06-05 | 김관민 | A method of cyber virtual reality advertising by use photography |
KR20000037114A (en) | 2000-04-07 | 2000-07-05 | 송찬호 | commercial method for internet virtual reality |
KR20020005336A (en) | 2000-07-10 | 2002-01-17 | 이성열 | Advertisement method and apparatus by virtual reality modelling in internet |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210047216A (en) * | 2019-10-21 | 2021-04-29 | 이혁락 | Apparatus, system and method for producing virtual reality contents |
Also Published As
Publication number | Publication date |
---|---|
WO2018110810A1 (en) | 2018-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200221022A1 (en) | Background separated images for print and on-line use | |
US10192364B2 (en) | Augmented reality product preview | |
US9369638B2 (en) | Methods for extracting objects from digital images and for performing color change on the object | |
US10579134B2 (en) | Improving advertisement relevance | |
US8982110B2 (en) | Method for image transformation, augmented reality, and teleperence | |
CN105809620B (en) | Preview image for linear Panorama Mosaic obtains user interface | |
KR102506341B1 (en) | Devices, systems and methods of virtualizing a mirror | |
US8976160B2 (en) | User interface and authentication for a virtual mirror | |
US20190206031A1 (en) | Facial Contour Correcting Method and Device | |
CN101859433B (en) | Image mosaic device and method | |
US20080181507A1 (en) | Image manipulation for videos and still images | |
US11900552B2 (en) | System and method for generating virtual pseudo 3D outputs from images | |
CN107944420A (en) | The photo-irradiation treatment method and apparatus of facial image | |
KR20180070082A (en) | Vr contents generating system | |
KR101759799B1 (en) | Method for providing 3d image | |
KR20230016781A (en) | A method of producing environmental contents using AR/VR technology related to metabuses | |
KR20180113944A (en) | Vr contents generating system | |
EP3396964A1 (en) | Dynamic content placement in media | |
EP3177005B1 (en) | Display control system, display control device, display control method, and program | |
KR101662738B1 (en) | Method and apparatus of processing image | |
Heindl et al. | Capturing photorealistic and printable 3d models using low-cost hardware | |
EP3396963A1 (en) | Dynamic media content rendering | |
CN111399655B (en) | Image processing method and device based on VR synchronization | |
CN108062403A (en) | Old scape detection method and terminal | |
CN116012564B (en) | Equipment and method for intelligent fusion of three-dimensional model and live-action photo |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
N231 | Notification of change of applicant | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
E601 | Decision to refuse application | ||
AMND | Amendment |