WO2022119039A1 - Augmented reality object generation apparatus and method using virtual cameras - Google Patents

Augmented reality object generation apparatus and method using virtual cameras

Info

Publication number
WO2022119039A1
Authority
WO
WIPO (PCT)
Prior art keywords
augmented reality
reality object
virtual camera
image
point cloud
Prior art date
Application number
PCT/KR2020/018407
Other languages
French (fr)
Korean (ko)
Inventor
장준환
박우출
양진욱
윤상필
최민수
이준석
송수호
구본재
Original Assignee
한국전자기술연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자기술연구원 (Korea Electronics Technology Institute)
Publication of WO2022119039A1 publication Critical patent/WO2022119039A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Definitions

  • The present invention relates to augmented reality object acquisition technology and, more particularly, to an apparatus and method for generating an augmented reality object using virtual cameras, which generate a 3D augmented reality object from images of an object captured by a plurality of virtual cameras.
  • Augmented reality is emerging as a major issue in the information and communications technologies (ICT) industry, and information technology (IT) companies are making profits by selling augmented reality devices.
  • ICT: information and communications technologies
  • IT: information technology
  • Meanwhile, as part of augmented reality imaging technology, methods for rendering game content as augmented reality images are being developed; in particular, there is demand for a way to produce augmented reality content in which e-sports content relayed in real time can be viewed from multiple viewpoints.
  • The technical problem to be solved by the present invention is to combine the work of creating a 3D object with e-sports and to generate a 3D augmented reality object for presenting information from an in-game situation in augmented reality.
  • Accordingly, an object of the present invention is to provide an apparatus and method for generating an augmented reality object using virtual cameras.
  • To achieve this object, an apparatus for generating an augmented reality object using virtual cameras according to the present invention includes an input unit that receives a plurality of images containing an object photographed from multiple angles by virtual cameras, and a control unit that converts the input images into point clouds (PointCloud) to generate a three-dimensional point cloud for each image and registers the generated three-dimensional point clouds to generate a three-dimensional augmented reality object.
  • PointCloud: point cloud
  • The control unit is further configured to scale the depth information of each image when the resolution value of the image and the distance value between the virtual camera and the object differ, and then to convert each scaled image into a point cloud.
  • The control unit may perform the scaling of the depth information by dividing the resolution value by the distance value and multiplying the result by the depth value of each pixel.
  • the controller aligns the 3D point cloud using the angle of each virtual camera and the resolution value of the image, and then performs the registration.
  • the image is characterized in that it is an RGBD (RGB-Depth) image including color information and depth information.
  • RGBD: RGB-Depth
  • the virtual camera is characterized in that it is arranged to photograph the front, rear, right, left, upper, and lower surfaces of the object.
  • A method for generating an augmented reality object using virtual cameras according to the present invention comprises the steps of: receiving, by an augmented reality object generating apparatus, a plurality of images containing an object photographed from multiple angles by virtual cameras; generating, by the apparatus, a three-dimensional point cloud for each image by converting the input images into point clouds; and generating, by the apparatus, a three-dimensional augmented reality object by registering the generated three-dimensional point clouds.
  • When the resolution value of the image and the distance value between the virtual camera and the object differ, the method further comprises scaling the depth information of each image between the receiving step and the converting step.
  • The apparatus and method for generating an augmented reality object using virtual cameras of the present invention can generate, from images of an in-game situation captured by a plurality of virtual cameras, a 3D augmented reality object for presentation in augmented reality.
  • FIG. 1 is a schematic diagram for explaining the creation of an augmented reality object according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating an apparatus for generating an augmented reality object according to an embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating the control unit of FIG. 2 .
  • FIG. 4 is a diagram for explaining depth scaling according to an embodiment of the present invention.
  • FIG. 5 is a view for explaining a process of generating an augmented reality object according to an embodiment of the present invention.
  • FIG. 6 is a flowchart illustrating a method for generating an augmented reality object according to an embodiment of the present invention.
  • The augmented reality object generating apparatus 100 combines the work of creating a three-dimensional object with e-sports and generates a 3D augmented reality object 400 for presenting information from an in-game situation in augmented reality.
  • To implement this, the augmented reality object generating apparatus 100 may be a head-mounted display (HMD), AR glasses, a smartphone, a laptop, a desktop, a tablet PC, a handheld PC, or the like.
  • the augmented reality object generating apparatus 100 may include an input unit 10 and a control unit 30 , and may further include a communication unit (not shown), an output unit 50 , and a storage unit 70 .
  • the input unit 10 receives a plurality of images 300 including the object 250 photographed from multiple angles by the virtual camera 200 .
  • The virtual camera 200 is a virtual camera that captures the in-game situation; it may photograph the front, rear, right, left, top, and bottom surfaces of the in-game object 250, but is not limited thereto, and may photograph the object 250 from more directions or from fewer directions.
  • the plurality of images 300 are RGBD (RGB-Depth) images including color information and depth information, and the RGBD image basically has a four-channel image structure.
  • In this case, the image 300 may have a resolution of 1k (1024×1024) or higher for realizing a high-quality three-dimensional object.
  • Although not shown in the drawings, the augmented reality object generating apparatus 100 may include a communication unit instead of the input unit 10 and receive the plurality of images over an external communication network, or may include both the input unit 10 and the communication unit so that images can be input or received in various ways.
  • the controller 30 controls the overall operation of the augmented reality object generating apparatus 100 .
  • the control unit 30 includes a generating unit 33 and a matching unit 35 , and may further include a scaling unit 31 .
  • Each image 300 input through the input unit 10 is captured at a 1k (1024×1024) scale and from various angles, so the depth information must be scaled. To this end, the scaling unit 31 scales the depth information of the image 300 (41). If the resolution value (R) of the image 300 and the distance value (Lx) between the virtual camera 200 and the object 250 are the same, the resolution value (R) and the distance value (Lx) have a 1:1 ratio, so the scaling unit 31 generates the three-dimensional point cloud (3D PointCloud) directly without performing additional scaling.
  • 3D PointCloud: three-dimensional point cloud
  • However, if the resolution value (R) and the distance value (Lx) differ, the scaling unit 31 scales the depth information of the image.
  • Specifically, the scaling unit 31 may perform the scaling by dividing the resolution value (R) by the distance value (Lx) and multiplying the result by the depth value (Dx) of each pixel, that is, Dx·(R/Lx).
  • the generating unit 33 converts a plurality of input images into a point cloud to generate a three-dimensional point cloud corresponding to each image (43). In this case, when the plurality of images are scaled by the scaling unit 31 , the generating unit 33 converts the plurality of scaled images into a point cloud to generate a three-dimensional point cloud.
  • the generator 33 collects points constituting the object 250 located in the three-dimensional space and converts the object 250 included in the image 300 into a point cloud. Through this, the generator 33 may generate a 3D point cloud for each image 300 in the direction in which the virtual camera 200 photographed the object 250 .
  • For example, for an image of the front of the object 250, the generating unit 33 generates a three-dimensional point cloud of the front of the object 250; for an image of the rear, a three-dimensional point cloud of the rear; for an image of the right side, a three-dimensional point cloud of the right side; and for an image of the left side, a three-dimensional point cloud of the left side of the object 250.
  • the matching unit 35 matches the generated three-dimensional point clouds (45).
  • The matching unit 35 may align the plurality of 3D point clouds and then perform the registration. That is, the matching unit 35 rotates or translates each three-dimensional point cloud using the angle An of each virtual camera 200 (n being the number of cameras) and the resolution value R of the image 300, aligning the clouds so that registration is possible.
  • the matching unit 35 generates a 3D augmented reality object 400 by matching the aligned plurality of 3D point clouds into one object.
  • the 3D augmented reality object 400 is an object that can be used in augmented reality content.
  • the output unit 50 outputs the 3D augmented reality object generated by the control unit 30 . Also, the output unit 50 may output a plurality of images input from the input unit 10 and a three-dimensional point cloud for each image generated by the control unit 30 .
  • the storage unit 70 stores an algorithm or program for driving the augmented reality object generating apparatus 100 .
  • the storage unit 70 stores a plurality of images input from the input unit 10 and a three-dimensional point cloud for each image generated by the control unit 30 .
  • the storage unit 70 stores the 3D augmented reality object generated by the control unit 30 .
  • The storage unit 70 may include at least one storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, a magnetic disk, and an optical disk.
  • RAM: Random Access Memory
  • SRAM: Static Random Access Memory
  • ROM: Read-Only Memory
  • EEPROM: Electrically Erasable Programmable Read-Only Memory
  • PROM: Programmable Read-Only Memory
  • FIG. 6 is a flowchart illustrating a method for generating an augmented reality object according to an embodiment of the present invention.
  • The augmented reality object generating method generates, from images of an in-game situation captured by a plurality of virtual cameras 200, a 3D augmented reality object for presentation in augmented reality.
  • Through this, the augmented reality object generating method can provide a 3D augmented reality object that feels familiar to the user.
  • the augmented reality object generating apparatus 100 receives a plurality of images.
  • the plurality of images are RGBD images including color information and depth information, and are images including the object 250 photographed from multiple angles by the virtual camera 200 .
  • The augmented reality object generating apparatus 100 scales the depth information of each of the input images.
  • The augmented reality object generating apparatus 100 may perform the scaling by dividing the resolution value (R) by the distance value (Lx) and multiplying the result by the depth value (Dx) of each pixel, that is, Dx·(R/Lx).
  • If the resolution value (R) of the image 300 and the distance value (Lx) between the virtual camera 200 and the object 250 are the same, the resolution value (R) and the distance value (Lx) have a 1:1 ratio, so the augmented reality object generating apparatus 100 may skip the scaling.
  • the augmented reality object generating apparatus 100 converts each scaled image into a point cloud to generate a three-dimensional point cloud.
  • the augmented reality object generating apparatus 100 collects points constituting the object 250 located in a three-dimensional space and converts the object 250 included in the image 300 into a point cloud. Through this, the augmented reality object generating apparatus 100 may generate a three-dimensional point cloud for each image 300 with respect to the direction in which the virtual camera 200 photographed the object 250 .
  • the augmented reality object generating apparatus 100 matches the generated 3D point cloud.
  • The augmented reality object generating apparatus 100 may align the plurality of 3D point clouds and then perform the registration. That is, the augmented reality object generating apparatus 100 rotates or translates each 3D point cloud using the angle (An) of each virtual camera 200 and the resolution value (R) of the image 300 so that registration is possible.
  • the three-dimensional augmented reality object 400 is generated by matching the aligned plurality of three-dimensional point clouds into one object.
  • the method according to an embodiment of the present invention may be provided in the form of a computer-readable medium suitable for storing computer program instructions and data.
  • a computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination, and includes all types of recording devices in which data readable by a computer system is stored.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROM (Compact Disk Read Only Memory) and DVD (Digital Video Disk); magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM (Read Only Memory), RAM (Random Access Memory), and flash memory.
  • the computer-readable recording medium is distributed in a computer system connected to a network, so that the computer-readable code can be stored and executed in a distributed manner.
  • functional programs, codes, and code segments for implementing the present invention can be easily inferred by programmers in the technical field to which the present invention pertains.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality object generation apparatus and method using virtual cameras are disclosed. The augmented reality object generation apparatus according to the present invention comprises: an input unit for receiving a plurality of images, including an object, captured at multiple angles by means of virtual cameras; and a control unit which converts the input plurality of images into point clouds (PointClouds) to generate a three-dimensional point cloud for each image, and which registers the generated three-dimensional point clouds to generate a three-dimensional augmented reality object.

Description

Apparatus and method for generating an augmented reality object using virtual cameras
The present invention relates to augmented reality object acquisition technology and, more particularly, to an apparatus and method for generating an augmented reality object using virtual cameras, which generate a 3D augmented reality object from images of an object captured by a plurality of virtual cameras.
Augmented reality (AR) is emerging as a major issue in the information and communications technologies (ICT) industry, and information technology (IT) companies are making profits by selling augmented reality devices.
Meanwhile, as part of augmented reality imaging technology, methods for rendering game content as augmented reality images are being developed; in particular, there is demand for a way to produce augmented reality content in which e-sports content relayed in real time can be viewed from multiple viewpoints.
The technical problem to be solved by the present invention is to combine the work of creating a 3D object with e-sports and to generate a 3D augmented reality object for presenting information from an in-game situation in augmented reality. Accordingly, an object of the present invention is to provide an apparatus and method for generating such an augmented reality object using virtual cameras.
To achieve this object, an apparatus for generating an augmented reality object using virtual cameras according to the present invention includes an input unit that receives a plurality of images containing an object photographed from multiple angles by virtual cameras, and a control unit that converts the input images into point clouds (PointCloud) to generate a three-dimensional point cloud for each image and registers the generated three-dimensional point clouds to generate a three-dimensional augmented reality object.
When the resolution value of the image and the distance value between the virtual camera and the object differ, the control unit further scales the depth information of each image and then converts each scaled image into a point cloud.
The control unit may perform the scaling of the depth information by dividing the resolution value by the distance value and multiplying the result by the depth value of each pixel.
The control unit aligns the three-dimensional point clouds using the angle of each virtual camera and the resolution value of the image, and then performs the registration.
The image is an RGBD (RGB-Depth) image including color information and depth information.
The virtual cameras are arranged to photograph the front, rear, right, left, top, and bottom surfaces of the object.
A method for generating an augmented reality object using virtual cameras according to the present invention comprises the steps of: receiving, by an augmented reality object generating apparatus, a plurality of images containing an object photographed from multiple angles by virtual cameras; generating, by the apparatus, a three-dimensional point cloud for each image by converting the input images into point clouds; and generating, by the apparatus, a three-dimensional augmented reality object by registering the generated three-dimensional point clouds.
When the resolution value of the image and the distance value between the virtual camera and the object differ, the method further comprises scaling the depth information of each image between the receiving step and the converting step.
The apparatus and method for generating an augmented reality object using virtual cameras of the present invention can generate, from images of an in-game situation captured by a plurality of virtual cameras, a 3D augmented reality object for presentation in augmented reality.
Through this, it is possible to provide a 3D augmented reality object that feels familiar to the user.
FIG. 1 is a schematic diagram illustrating the generation of an augmented reality object according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating an apparatus for generating an augmented reality object according to an embodiment of the present invention.
FIG. 3 is a block diagram illustrating the control unit of FIG. 2.
FIG. 4 is a diagram illustrating depth scaling according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating a process of generating an augmented reality object according to an embodiment of the present invention.
FIG. 6 is a flowchart illustrating a method for generating an augmented reality object according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In assigning reference numerals to the components of each drawing, note that the same components are given the same reference numerals wherever possible, even when they appear in different drawings. In describing the present invention, detailed descriptions of well-known configurations or functions are omitted when they are obvious to those skilled in the art or would obscure the gist of the present invention.
FIG. 1 is a schematic diagram illustrating the generation of an augmented reality object according to an embodiment of the present invention, FIG. 2 is a block diagram illustrating an apparatus for generating an augmented reality object according to an embodiment of the present invention, FIG. 3 is a block diagram illustrating the control unit of FIG. 2, FIG. 4 is a diagram illustrating depth scaling according to an embodiment of the present invention, and FIG. 5 is a diagram illustrating a process of generating an augmented reality object according to an embodiment of the present invention.
Referring to FIGS. 1 to 5, the augmented reality object generating apparatus 100 combines the work of creating a three-dimensional object with e-sports and generates a 3D augmented reality object 400 for presenting information from an in-game situation in augmented reality. To implement this, the augmented reality object generating apparatus 100 may be a head-mounted display (HMD), AR glasses, a smartphone, a laptop, a desktop, a tablet PC, a handheld PC, or the like. The augmented reality object generating apparatus 100 includes an input unit 10 and a control unit 30, and may further include a communication unit (not shown), an output unit 50, and a storage unit 70.
The input unit 10 receives a plurality of images 300 containing the object 250 photographed from multiple angles by the virtual cameras 200. Here, the virtual camera 200 is a virtual camera that captures the in-game situation; it may photograph the front, rear, right, left, top, and bottom surfaces of the in-game object 250, but is not limited thereto, and may photograph the object 250 from more directions or from fewer directions. The plurality of images 300 are RGBD (RGB-Depth) images including color information and depth information, and an RGBD image basically has a four-channel structure. The image 300 may have a resolution of 1k (1024×1024) or higher for realizing a high-quality three-dimensional object.
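For concreteness, the six-view capture described above could be laid out as in the minimal sketch below. The pose table, names, and the RGBD container are illustrative assumptions; the actual capture is performed inside the game engine and is not specified in this publication.

```python
import numpy as np

# Assumed poses for the six virtual cameras: (horizontal angle, vertical angle)
# in degrees from which each camera looks at the in-game object 250.
VIRTUAL_CAMERA_POSES = {
    "front":  (0.0,   0.0),
    "right":  (90.0,  0.0),
    "rear":   (180.0, 0.0),
    "left":   (270.0, 0.0),
    "top":    (0.0,   90.0),
    "bottom": (0.0,  -90.0),
}

RESOLUTION = 1024  # 1k (1024x1024) images, as suggested for a high-quality object

def make_empty_rgbd(resolution: int = RESOLUTION) -> np.ndarray:
    """Allocate the four-channel RGBD structure (R, G, B + depth) for one capture."""
    return np.zeros((resolution, resolution, 4), dtype=np.float32)
```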
Although not shown in the drawings, the augmented reality object generating apparatus 100 may include a communication unit instead of the input unit 10 and receive the plurality of images over an external communication network, or may include both the input unit 10 and the communication unit so that images can be input or received in various ways.
The control unit 30 controls the overall operation of the augmented reality object generating apparatus 100. The control unit 30 includes a generating unit 33 and a matching unit 35, and may further include a scaling unit 31.
Each image 300 input through the input unit 10 is captured at a 1k (1024×1024) scale and from various angles, so the depth information must be scaled. To this end, the scaling unit 31 scales the depth information of the image 300 (41). If the resolution value (R) of the image 300 and the distance value (Lx) between the virtual camera 200 and the object 250 are the same, the resolution value (R) and the distance value (Lx) have a 1:1 ratio, so the scaling unit 31 generates the three-dimensional point cloud (3D PointCloud) directly without performing additional scaling. However, if the resolution value (R) and the distance value (Lx) differ, the scaling unit 31 scales the depth information of the image. Specifically, the scaling unit 31 may perform the scaling by dividing the resolution value (R) by the distance value (Lx) and multiplying the result by the depth value (Dx) of each pixel, that is, Dx·(R/Lx).
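The scaling step reduces to a few lines of NumPy, as in the rough sketch below; the function name and array layout are assumptions, since the publication only states the relation Dx·(R/Lx).

```python
import numpy as np

def scale_depth(depth: np.ndarray, resolution: float, distance: float) -> np.ndarray:
    """Scale one image's depth map as described: D_x * (R / L_x).

    depth      -- per-pixel depth values of one RGBD image (H x W array)
    resolution -- resolution value R of the image (e.g. 1024 for a 1k image)
    distance   -- distance L_x between the virtual camera and the object
    """
    if resolution == distance:
        # R : L_x is 1:1, so no additional scaling is required.
        return depth
    return depth * (resolution / distance)
```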
The generating unit 33 converts the plurality of input images into point clouds to generate a three-dimensional point cloud corresponding to each image (43). When the images have been scaled by the scaling unit 31, the generating unit 33 converts the scaled images into point clouds. The generating unit 33 collects the points constituting the object 250 located in three-dimensional space and converts the object 250 contained in each image 300 into a point cloud. In this way, the generating unit 33 can generate, for each image 300, a 3D point cloud corresponding to the direction from which the virtual camera 200 photographed the object 250. For example, for an image of the front of the object 250, the generating unit 33 generates a three-dimensional point cloud of the front of the object 250; for an image of the rear, a point cloud of the rear; for an image of the right side, a point cloud of the right side; and for an image of the left side, a point cloud of the left side of the object 250.
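The publication does not spell out the projection model used for this conversion, so the sketch below assumes the simplest mapping: the pixel column and row become a point's x and y coordinates, the (scaled) depth becomes z, and the RGB channels are carried along as per-point color. All names are illustrative.

```python
import numpy as np

def image_to_pointcloud(rgbd: np.ndarray, depth_threshold: float = 0.0) -> np.ndarray:
    """Convert one four-channel RGBD image (H x W x 4) into an N x 6 point cloud.

    Each row is (x, y, z, r, g, b); pixels whose depth does not exceed
    depth_threshold are treated as background and discarded, so only the
    points that constitute the object remain.
    """
    rgb, depth = rgbd[..., :3], rgbd[..., 3]
    ys, xs = np.nonzero(depth > depth_threshold)   # pixels belonging to the object
    points = np.column_stack([
        xs.astype(np.float64),                     # x from the pixel column
        ys.astype(np.float64),                     # y from the pixel row
        depth[ys, xs],                             # z from the (scaled) depth value
        rgb[ys, xs, 0], rgb[ys, xs, 1], rgb[ys, xs, 2],
    ])
    return points
```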
The matching unit 35 registers the generated three-dimensional point clouds (45). The matching unit 35 may align the plurality of 3D point clouds and then perform the registration. That is, the matching unit 35 rotates or translates each three-dimensional point cloud using the angle An of each virtual camera 200 (n being the number of cameras) and the resolution value R of the image 300, aligning the clouds so that registration is possible. The matching unit 35 then merges the aligned 3D point clouds into a single object to generate the 3D augmented reality object 400. Here, the 3D augmented reality object 400 is an object that can be used in augmented reality content.
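The publication states only that each cloud is rotated or translated using its camera's angle An and the image resolution R before the clouds are merged; the exact transform is not given. The sketch below is therefore one assumed interpretation, in which each cloud is recentred on the image axis, rotated about the vertical axis by its camera angle, and concatenated with the others.

```python
import numpy as np

def align_and_merge(clouds, angles_deg, resolution):
    """Align per-camera point clouds and merge them into one object.

    clouds     -- list of N x 6 arrays (x, y, z, r, g, b), one per virtual camera
    angles_deg -- horizontal angle A_n of each virtual camera, in degrees
    resolution -- image resolution R, used here to recentre pixel coordinates
    """
    merged = []
    centre = resolution / 2.0
    for cloud, angle in zip(clouds, angles_deg):
        xyz = cloud[:, :3] - np.array([centre, centre, 0.0])  # recentre on the image axis
        theta = np.radians(angle)
        rot_y = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                          [ 0.0,           1.0, 0.0          ],
                          [-np.sin(theta), 0.0, np.cos(theta)]])
        aligned = xyz @ rot_y.T                                # rotate into a common frame
        merged.append(np.column_stack([aligned, cloud[:, 3:]]))
    return np.vstack(merged)
```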
The output unit 50 outputs the 3D augmented reality object generated by the control unit 30. The output unit 50 may also output the plurality of images input through the input unit 10 and the per-image three-dimensional point clouds generated by the control unit 30.
The storage unit 70 stores the algorithms or programs for driving the augmented reality object generating apparatus 100. The storage unit 70 stores the plurality of images input through the input unit 10 and the per-image three-dimensional point clouds generated by the control unit 30, and also stores the 3D augmented reality object generated by the control unit 30. The storage unit 70 may include at least one storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, a magnetic disk, and an optical disk.
FIG. 6 is a flowchart illustrating a method for generating an augmented reality object according to an embodiment of the present invention.
Referring to FIGS. 1 and 6, the augmented reality object generating method generates, from images of an in-game situation captured by a plurality of virtual cameras 200, a 3D augmented reality object for presentation in augmented reality. Through this, the method can provide a 3D augmented reality object that feels familiar to the user.
In step S110, the augmented reality object generating apparatus 100 receives a plurality of images. Here, the plurality of images are RGBD images including color information and depth information, containing the object 250 photographed from multiple angles by the virtual cameras 200.
In step S120, the augmented reality object generating apparatus 100 scales the depth information of each of the input images. The apparatus 100 may perform the scaling by dividing the resolution value (R) by the distance value (Lx) and multiplying the result by the depth value (Dx) of each pixel, that is, Dx·(R/Lx). If the resolution value (R) of the image 300 and the distance value (Lx) between the virtual camera 200 and the object 250 are the same, they have a 1:1 ratio and the apparatus 100 may skip the scaling.
In step S130, the augmented reality object generating apparatus 100 converts each scaled image into a point cloud to generate a three-dimensional point cloud. The apparatus 100 collects the points constituting the object 250 located in three-dimensional space and converts the object 250 contained in each image 300 into a point cloud. In this way, the apparatus 100 can generate, for each image 300, a three-dimensional point cloud corresponding to the direction from which the virtual camera 200 photographed the object 250.
In step S140, the augmented reality object generating apparatus 100 registers the generated three-dimensional point clouds. The apparatus 100 may align the plurality of 3D point clouds and then perform the registration. That is, the apparatus 100 rotates or translates each 3D point cloud using the angle An of each virtual camera 200 and the resolution value R of the image 300 so that registration is possible, and then merges the aligned 3D point clouds into a single object to generate the 3D augmented reality object 400.
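Chaining steps S110 to S140 together, the overall flow might look like the sketch below. It reuses the helper functions sketched earlier and is only an assumed composition of the described units, not the publication's actual implementation.

```python
import numpy as np

def generate_ar_object(rgbd_images, angles_deg, distances, resolution=1024):
    """S110-S140: RGBD images in, one merged 3D augmented-reality point cloud out."""
    clouds = []
    for rgbd, distance in zip(rgbd_images, distances):
        depth = scale_depth(rgbd[..., 3], resolution, distance)   # S120: depth scaling
        scaled = np.dstack([rgbd[..., :3], depth])
        clouds.append(image_to_pointcloud(scaled))                # S130: point cloud per image
    return align_and_merge(clouds, angles_deg, resolution)        # S140: align and register
```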
The method according to an embodiment of the present invention may be provided in the form of a computer-readable medium suitable for storing computer program instructions and data. Such a computer-readable recording medium may contain program instructions, data files, data structures, and the like, alone or in combination, and includes all types of recording devices in which data readable by a computer system is stored. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROM (Compact Disk Read Only Memory) and DVD (Digital Video Disk); magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM (Read Only Memory), RAM (Random Access Memory), and flash memory. The computer-readable recording medium may also be distributed over computer systems connected by a network, so that the computer-readable code is stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the present invention can be readily inferred by programmers in the technical field to which the present invention pertains.
Although the present invention has been described and illustrated above with reference to preferred embodiments for illustrating its technical idea, the present invention is not limited to the configurations and operations shown and described, and those skilled in the art will appreciate that many changes and modifications are possible without departing from the scope of the technical idea. Accordingly, all such appropriate changes, modifications, and equivalents should be regarded as falling within the scope of the present invention.

Claims (8)

  1. An apparatus for generating an augmented reality object using virtual cameras, the apparatus comprising:
    an input unit configured to receive a plurality of images containing an object photographed from multiple angles by virtual cameras; and
    a control unit configured to convert the plurality of input images into point clouds (PointCloud) to generate a three-dimensional point cloud for each image, and to register the plurality of generated three-dimensional point clouds to generate a three-dimensional augmented reality object.
  2. The apparatus of claim 1, wherein, when there is a difference between a resolution value of the image and a distance value between the virtual camera and the object, the control unit further performs scaling of depth information for each image and then converts each scaled image into a point cloud.
  3. The apparatus of claim 2, wherein the control unit performs the scaling of the depth information by dividing the resolution value by the distance value and then multiplying by a depth value of each pixel.
  4. The apparatus of claim 1, wherein the control unit aligns the three-dimensional point clouds using an angle of each virtual camera and a resolution value of the image, and then performs the registration.
  5. The apparatus of claim 1, wherein the image is an RGBD (RGB-Depth) image including color information and depth information.
  6. The apparatus of claim 1, wherein the virtual cameras are arranged to photograph front, rear, right, left, top, and bottom surfaces of the object.
  7. A method for generating an augmented reality object using virtual cameras, the method comprising:
    receiving, by an augmented reality object generating apparatus, a plurality of images containing an object photographed from multiple angles by virtual cameras;
    generating, by the augmented reality object generating apparatus, a three-dimensional point cloud for each image by converting the plurality of input images into point clouds; and
    generating, by the augmented reality object generating apparatus, a three-dimensional augmented reality object by registering the plurality of generated three-dimensional point clouds.
  8. The method of claim 7, further comprising, when there is a difference between a resolution value of the image and a distance value between the virtual camera and the object, performing scaling of depth information for each image between the receiving and the converting.
PCT/KR2020/018407 2020-12-04 2020-12-16 Augmented reality object generation apparatus and method using virtual cameras WO2022119039A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200168771A KR102366847B1 (en) 2020-12-04 2020-12-04 Augmented Reality object generation device and method using virtual camera
KR10-2020-0168771 2020-12-04

Publications (1)

Publication Number Publication Date
WO2022119039A1 true WO2022119039A1 (en) 2022-06-09

Family

ID=80490349

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/018407 WO2022119039A1 (en) 2020-12-04 2020-12-16 Augmented reality object generation apparatus and method using virtual cameras

Country Status (2)

Country Link
KR (1) KR102366847B1 (en)
WO (1) WO2022119039A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110133677A (en) * 2010-06-07 2011-12-14 삼성전자주식회사 Method and apparatus for processing 3d image
KR101875047B1 (en) * 2018-04-24 2018-07-06 주식회사 예간아이티 System and method for 3d modelling using photogrammetry
US20180220125A1 (en) * 2017-01-31 2018-08-02 Tetavi Ltd. System and method for rendering free viewpoint video for sport applications
KR20200049337A (en) * 2018-10-31 2020-05-08 에스케이텔레콤 주식회사 Apparatus and method for registering images
KR20200102114A (en) * 2019-02-21 2020-08-31 한국전자통신연구원 Method and appartus for learning-based generating 3d model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101940720B1 (en) * 2016-08-19 2019-04-17 한국전자통신연구원 Contents authoring tool for augmented reality based on space and thereof method
US10527711B2 (en) * 2017-07-10 2020-01-07 Aurora Flight Sciences Corporation Laser speckle system and method for an aircraft
KR102065632B1 (en) 2018-10-22 2020-02-11 전자부품연구원 Device and method for acquiring 360 VR images in a game using a plurality of virtual cameras

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110133677A (en) * 2010-06-07 2011-12-14 삼성전자주식회사 Method and apparatus for processing 3d image
US20180220125A1 (en) * 2017-01-31 2018-08-02 Tetavi Ltd. System and method for rendering free viewpoint video for sport applications
KR101875047B1 (en) * 2018-04-24 2018-07-06 주식회사 예간아이티 System and method for 3d modelling using photogrammetry
KR20200049337A (en) * 2018-10-31 2020-05-08 에스케이텔레콤 주식회사 Apparatus and method for registering images
KR20200102114A (en) * 2019-02-21 2020-08-31 한국전자통신연구원 Method and appartus for learning-based generating 3d model

Also Published As

Publication number Publication date
KR102366847B1 (en) 2022-02-25

Similar Documents

Publication Publication Date Title
US20110254835A1 (en) System and method for the creation of 3-dimensional images
CN108833877B (en) Image processing method and device, computer device and readable storage medium
CN111008985A (en) Panorama picture seam detection method and device, readable storage medium and electronic equipment
AU2013273829A1 (en) Time constrained augmented reality
KR20210147868A (en) Video processing method and device
US20140198187A1 (en) Camera with plenoptic lens
CN111402404B (en) Panorama complementing method and device, computer readable storage medium and electronic equipment
CN108694389A (en) Safe verification method based on preposition dual camera and electronic equipment
CN113178017A (en) AR data display method and device, electronic equipment and storage medium
JP2023053039A (en) Information processing apparatus, information processing method and program
EP3757945A1 (en) Device for generating an augmented reality image
WO2022119039A1 (en) Augmented reality object generation apparatus and method using virtual cameras
CN116801037A (en) Augmented reality live broadcast method for projecting image of live person to remote real environment
WO2019103192A1 (en) Device and method for acquiring 360 vr image in game using virtual camera
WO2021147749A1 (en) Method and apparatus for realizing 3d display, and 3d display system
CN114339071A (en) Image processing circuit, image processing method and electronic device
CN109727315B (en) One-to-many cluster rendering method, device, equipment and storage medium
CN108255986A (en) Information identifying method, system and computer readable storage medium
WO2024117556A1 (en) Holographic image rendering method for printing that does not require rearrangement
CN112584130A (en) Method and device for realizing 3D display and 3D display terminal
WO2019124802A1 (en) Apparatus and method for providing mapping pseudo-hologram by using individual image signal output
CN106375750B (en) A kind of image display method and display device
KR20200097543A (en) Augmented Reality-based performance video viewing system and performance image providing method using it
WO2020111471A1 (en) Mobile hologram display terminal for augmented reality
WO2020004695A1 (en) System for performing real-time parallel rendering of motion capture image by using gpu

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964387

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20964387

Country of ref document: EP

Kind code of ref document: A1