WO2021137348A1 - Method for generating space map in order to share three-dimensional space information among plurality of terminals and reading command execution point - Google Patents

Method for generating space map in order to share three-dimensional space information among plurality of terminals and reading command execution point Download PDF

Info

Publication number
WO2021137348A1
WO2021137348A1 (application PCT/KR2020/000608)
Authority
WO
WIPO (PCT)
Prior art keywords: information, terminals, spatial information, space, server
Application number
PCT/KR2020/000608
Other languages
French (fr)
Korean (ko)
Inventor
한상준
Original Assignee
엔센스코리아주식회사
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed): 2019-12-31
Filing date: 2020-01-13
Publication date: 2021-07-08
Application filed by 엔센스코리아주식회사
Publication of WO2021137348A1 publication Critical patent/WO2021137348A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration


Abstract

The present invention relates to a method by which a plurality of terminals generate a space map in order to share three-dimensional spatial information and read a command execution point. More specifically, image features acquired from continuous color images are combined with sensor information acquired from a gyro sensor to generate three-dimensional spatial information, which is exchanged with a server through socket communication; the fragmentary spatial information obtained from the plurality of terminals is synthesized there to generate and share extended spatial information, providing continuity for the augmentation of real space. Two-dimensional coordinates entered by a user through each terminal's touch screen are projected to three-dimensional spatial coordinates, converted into three-dimensional coordinates in real space, and shared among the plurality of terminals so that they can simultaneously augment the same virtual object at the same location.

Description

[Correction under Rule 26, 23.03.2020] Method for generating a space map and reading a command execution point for sharing three-dimensional spatial information among a plurality of terminals
The present invention builds on real-virtual registration technology capable of synthesizing virtual objects into real space, so that multiple people in the same space can, through their respective terminals, experience a heightened sense of presence of a virtual object and an identical user experience. It concerns a method for reading out three-dimensional spatial information and command execution points that can interoperate across a plurality of terminals. More specifically, a three-dimensional space map is generated using images acquired through a portable terminal's camera and pose information acquired through its gyro sensor; the generated space map is shared by multiple portable terminals through socket communication with a server; and users manipulate their portable terminals to determine the position or orientation of a virtual object or to control its movement, providing the same user experience to multiple parties.
In general, augmented reality (AR) technology is a field derived from virtual reality (VR) technology: it synthesizes virtual objects, superimposes them on real space, and displays the result. Compared with virtual reality, it can heighten the sense of presence by creating the illusion that the virtual object actually exists in the real space.
As a first example, there is a method that generates a 3D point cloud map from depth images acquired with a depth camera, uses the map to track the real-space object onto which augmented content is to be projected, and projects virtual objects directly onto the real space with a display device such as a projector, superimposing them and allowing interaction with the user.
As a second example, there is a method for generating an event by recognizing a user's motion in three-dimensional real space: the position of the target object in virtual space is computed from depth images acquired with a depth camera and compared against a reference position database to generate an event execution signal.
However, although the methods in the above examples could be extended so that multiple users interact simultaneously in one prepared real space, implementing augmented reality in another arbitrary space requires installing a depth camera and a projector and calibrating their positions, an inconvenience that undermines the immediacy of the augmented reality.
In addition, all of the above examples require a device capable of acquiring depth, such as a depth camera, either to construct the three-dimensional real space as spatial information or to receive input for user interaction in that space. Yet among the personal terminals most common today, such as smartphones and tablets with an output display, camera, and touch screen, only a small fraction carry an infrared ToF camera, an RGB-D depth camera, or a stereo depth camera; on personal terminals without a depth sensor, the techniques of the above examples cannot be implemented.
As yet another example, there is a method that extracts spatial information of the real space from two-dimensional images: feature points are recognized in the video input from a single camera and their positions in three-dimensional space are tracked.
However, compared with extracting three-dimensional spatial information using a depth camera, constructing a 3D point cloud map of the real space this way makes tracking the real space more difficult because of errors in the feature points extracted from two-dimensional images, and the difficulty of position tracking causes errors in superimposing virtual objects, reducing the sense of presence. Furthermore, because three-dimensional information is extracted only from a single terminal's input video, multiple users cannot be given the same sense of presence.
The present invention implements augmented reality in which multiple users each employ their own individual terminal in an arbitrary space, and provides a method by which they interact with virtual objects and share the results, so that all of them feel the same presence in that space. Implementing the technology on each individual terminal removes the constraint of installation space; sharing the 3D point cloud map among terminals compensates for the position error that makes tracking real space with a single camera difficult; and when users execute commands on their individual terminals, the same point in three-dimensional space can be read as the command execution point. The object of the invention is thus to provide a method for sharing spatial information and command execution points in multi-party augmented reality.
To achieve this, the present invention avoids spatial recognition methods based on depth information acquired through a conventional infrared ToF camera, RGB-D depth camera, or stereo depth camera, and instead implements a SLAM (Simultaneous Localization and Mapping) method that constructs spatial information using only image information, providing a way for a plurality of terminals to share it.
The present invention is characterized by multiple users simultaneously using augmented reality in an arbitrary space through their respective terminals. Each terminal comprises a camera and a color image acquisition unit for capturing images of the real space; an image processing unit for extracting depth information from the color images and constructing a 3D point cloud map; a display and an image output unit that synthesize real space with virtual objects, superimpose them, and output the result; and a touch screen through which the user provides input, together with a touch input unit that processes it. The system further comprises a server through which the plurality of terminals share the 3D point cloud map, consisting of a connection control unit for socket communication with the terminals, a space map storage unit, and an event transmission unit.
The method comprises: constructing the 3D point cloud map generated from the spatial information; connecting to a server using socket communication so that a plurality of terminals can share the 3D point cloud map with one another; being assigned the role of master node or slave node by the server over the socket connection; merging and sharing the 3D point cloud map, wherein the master node transmits its 3D point cloud map to the server, creates a space map referenced to the origin of the SLAM space, and stores it on the server, while a slave node searches the space map stored on the server for the 3D point cloud map it generated and merges its map into the space map; reading a command execution point, in which two-dimensional coordinates entered through a terminal's touch screen are resolved into three-dimensional coordinates in the space map; and issuing a command to a virtual object by sharing the command execution point's three-dimensional coordinate information and the event among the plurality of terminals.
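The claimed sequence of socket steps can be illustrated with a short terminal-side sketch in Python. The wire format (length-prefixed JSON over TCP), the server address, and the message names (REGISTER, MAP_UPLOAD, EXEC_POINT) are illustrative assumptions; the patent specifies only that socket communication is used and that the server assigns master or slave roles.

```python
import json
import socket
import struct

def send_msg(sock: socket.socket, payload: dict) -> None:
    """Send one length-prefixed JSON message."""
    data = json.dumps(payload).encode("utf-8")
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> dict:
    """Receive one length-prefixed JSON message."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length).decode("utf-8"))

# Terminal-side flow corresponding to the claimed steps.
sock = socket.create_connection(("server.example", 9000))   # hypothetical address
send_msg(sock, {"type": "REGISTER", "group": "room-1"})     # join a sharing group
role = recv_msg(sock)["role"]                               # "master" or "slave"
send_msg(sock, {"type": "MAP_UPLOAD",                       # share the 3D point cloud
                "points": [[0.12, -0.05, 1.48]]})
shared_map = recv_msg(sock)                                 # merged space map from the server
if role == "master":
    send_msg(sock, {"type": "EXEC_POINT",                   # share a command execution point
                    "xyz": [0.4, 0.0, 2.1], "event": "place"})
```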
The present invention enables multiple users in a single space to recognize the real space simultaneously with their individual terminals and generate 3D point cloud maps, from which a virtual map that the users can share is created through a server. By sharing this virtual map from the server, every user experiences the same augmented reality, heightening the sense of presence; moreover, when users manipulate virtual objects through their terminals, the same three-dimensional spatial coordinates are read out with respect to the virtual map and shared, so that event information related to manipulating virtual objects can be executed simultaneously in each user's augmented reality experience.
As a result, multiple users can experience augmented reality immediately in any arbitrary space without installing a depth camera and a projector, and the simultaneity of the experience maximizes the sense of presence. The present invention can accordingly be used for tasks such as the following application examples.
First, augmented reality can replace mock-ups in construction, architecture, interior design, 3D product design, and similar fields to improve understanding among multiple parties. Applying the present invention so that multiple users simultaneously synthesize and superimpose virtual objects in the same real space gives the same effect as holding a meeting around a physical mock-up, while letting participants experience the augmented reality easily, unconstrained by space or time.
As another embodiment, where a model would be used for urban construction planning or architectural design with a diorama, the multi-terminal augmented reality of the present invention can substitute for building a physical model, cutting the diorama's production cost and time. In a presentation based on a physical model, modifications or changes cannot be reflected immediately during the presentation; with the command execution point reading method of the present invention, by contrast, the presenter can touch the display and execute commands on the virtual objects, for example alternating between the existing building before redevelopment and the new building after redevelopment superimposed on the real space, or likewise changing or relocating the trees and sculptures of a green space, all shown in real time.
As in the embodiment above, when large equipment is to be introduced into a factory or similar site, the equipment can first be synthesized and superimposed as a virtual object on the real space, so that its size can be gauged in advance, along with worker circulation and any interference with existing equipment. And because multiple users can experience the augmented reality simultaneously in the one space through their individual terminals, each user can examine potential problems with the installation from his or her own viewpoint.
However, the application of the present invention is not limited to the above embodiments; the invention can be applied to other fields without changing its essential content.
FIG. 1 is a structural diagram of a system including a server capable of sharing spatial information with a plurality of terminals.
FIG. 2 is a flowchart showing in detail a method by which a terminal generates three-dimensional spatial information and interworks with the server.
FIG. 3 is a flowchart showing in detail a method by which a terminal, interworking with the server, reflects spatial information and outputs an augmented reality image.
FIG. 4 is a flowchart showing in detail a method by which the server shares spatial information with a plurality of terminals and reads a command execution point.
FIG. 5 is a flowchart showing in detail a method by which the server controls a virtual object and a plurality of terminals share it.
The present invention is described in detail below with reference to the accompanying drawings.
FIG. 1 shows the structure of a system in which a plurality of terminals 100, each having a camera 110, a display 160, a touch screen 161, and a gyro sensor 180, share spatial information through a server 200 having a space map DB 250 and a virtual object DB 270, so that multiple users can share spatial information through their respective terminals. A user points the terminal at the surrounding space to construct spatial information, the constructed spatial information is shared through the server, and a command execution point, obtained by converting a user's two-dimensional touch screen input on one terminal into three-dimensional information, is read out so that multiple users experience the same augmented reality even though each uses a separate terminal.
Describing this structure in detail, one or more users each use a terminal 100, and the color image acquisition unit 120 acquires continuous images through the camera 110. These are passed to the image processing unit 130, which generates and tracks a space map by the method of FIG. 2. The 3D point cloud map is transmitted to the server 200 through the data transceiver 140 and received by the connection control unit 210. The connection control unit classifies the operation command in the data sent by the terminal according to the method of FIG. 4 and accordingly controls execution of the space map storage unit 220, the space map search unit 230, and the command execution point reader 240. The space map produced by the space map storage unit is stored in the space map DB 250.
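How the connection control unit might branch incoming operation commands to the space map storage unit (220), the space map search unit (230), and the command execution point reader (240) can be sketched as a dispatch table. The command strings and handler bodies below are assumptions made for illustration; FIG. 4 defines the actual classification flow.

```python
from typing import Callable, Dict

space_map_db: list = []  # stands in for the space map DB (250)

def store_space_map(msg: dict) -> dict:
    """Space map storage unit (220): persist an uploaded point cloud."""
    space_map_db.append(msg["points"])
    return {"ok": True}

def search_space_map(msg: dict) -> dict:
    """Space map search unit (230): locate a terminal's map in the shared map."""
    return {"match": len(space_map_db) > 0}

def read_exec_point(msg: dict) -> dict:
    """Command execution point reader (240): resolve a 2D touch to 3D (FIG. 4)."""
    return {"xyz": msg.get("xyz", [0.0, 0.0, 0.0])}

HANDLERS: Dict[str, Callable[[dict], dict]] = {
    "MAP_UPLOAD": store_space_map,
    "MAP_SEARCH": search_space_map,
    "TOUCH": read_exec_point,
}

def dispatch(msg: dict) -> dict:
    """Branch one received message to the matching unit."""
    return HANDLERS[msg["type"]](msg)
```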
The image output unit 150 synthesizes and superimposes the two-dimensional image of the real space obtained by the color image acquisition unit with the 3D model of the virtual object stored in the server's virtual object DB 270, together with the virtual object's position and orientation information, by the method of FIG. 3, and the resulting real-virtual image is output to the display 160.
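The compositing step ultimately requires projecting a virtual object's 3D anchor point into the current camera image. The following pinhole-model sketch, assuming known intrinsics K and a camera pose (R, t) supplied by tracking, is standard projective geometry rather than a formula quoted from the patent.

```python
import numpy as np

def project_point(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                  X_world: np.ndarray) -> tuple[float, float]:
    """Map a 3D world point to 2D pixel coordinates with a pinhole camera."""
    X_cam = R @ X_world + t            # world frame -> camera frame
    x = K @ X_cam                      # camera frame -> image plane
    return (x[0] / x[2], x[1] / x[2])  # perspective divide

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
R, t = np.eye(3), np.zeros(3)                                # camera at the origin
u, v = project_point(K, R, t, np.array([0.1, -0.05, 2.0]))   # draw the object here
```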
Through the above process the user experiences augmented reality in which virtual objects are superimposed on real space. To execute a command such as manipulating, creating, or deleting a virtual object, the user touches the touch screen 161, and the touch input unit 170 derives a two-dimensional coordinate input value. This coordinate value is sent through the data transceiver 140, branched at the connection control unit 210 of the server 200, and delivered to the command execution point reader 240. The command execution point obtained by the method of FIG. 4 is passed to the event transmitter 260, the virtual object information is reflected in the virtual object DB 270, and the event transmitter delivers the virtual object information through the connection control unit to the terminals of all nodes connected to the server.
Each of the plurality of terminals that receives the virtual object information through its data transceiver then repeats the process above: the image output unit updates to the new virtual object information and outputs it to the display.
The method by which each terminal generates three-dimensional spatial information and interworks with the server is shown in detail in the flowchart of FIG. 2. Each terminal acquires an image of the real space from its camera and extracts feature points using an algorithm such as SIFT, SURF, or ORB. It then checks whether the space map has already been initialized; if not, the camera pose and origin are estimated from the feature points extracted from the acquired images. Two consecutive images are required for this estimation: to compare the similarity of the feature points extracted from each image, the Euclidean distance between them is computed, and the two feature points with the shortest distance are taken as a matched pair. The relative position of frame t with respect to frame t-1 can then be obtained from the geometric relationship of the matched feature point pairs. In this step, based on the matched pairs between the previous and current frames, the camera image is moved through three-dimensional space to find the camera position that best fits the space map, the initial position of the space map is estimated, and the space map is initialized.
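As a concrete illustration of this initialization, the following sketch uses OpenCV (one possible implementation, not one the patent mandates) to extract ORB features, match them by descriptor distance, and recover the relative pose of two consecutive frames from the essential matrix. For binary ORB descriptors the Hamming distance takes the place of the Euclidean distance used with SIFT or SURF.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate the pose of the current frame relative to the previous one."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    # Brute-force matching: keep the pair with the shortest descriptor distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # The geometric relation of the matched pairs: E encodes the relative motion.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t  # rotation and (up-to-scale) translation of frame t vs. frame t-1
```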
If the space map is already initialized when feature points are extracted from the current frame, then for fast and stable position tracking, frames in which a large change occurs are designated keyframes: generation of the 3D point cloud map, which carries a high computational load, is executed only for keyframes, while space map registration and tracking, which carries a lower load, is executed for every frame.
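A minimal sketch of this branch follows; the tracked-feature-ratio criterion and the thresholds are assumptions, since the patent says only that frames in which a large change occurs become keyframes.

```python
def is_keyframe(n_tracked: int, n_total: int,
                frames_since_kf: int,
                min_ratio: float = 0.5, max_gap: int = 30) -> bool:
    """Declare a keyframe when tracking degrades or too many frames have passed."""
    if n_total == 0:
        return True
    return (n_tracked / n_total) < min_ratio or frames_since_kf >= max_gap

# Per-frame loop: registration/tracking always runs; mapping only on keyframes.
# if is_keyframe(...): extend_point_cloud(frame); add_to_keyframe_group(frame)
# track_against_space_map(frame)   # cheap, runs for every frame
```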
When the branch above judges a frame to be a new keyframe, the frame is added to the keyframe group, and the extracted feature points are matched against the feature points of the space map initialized in the previous step, generating the 3D point cloud map so that the three-dimensional spatial coordinates of each feature point form an optimal solution. To share three-dimensional spatial information among the plurality of terminals, each terminal transmits its spatial information to the server; the server collects the spatial information received from all terminals and performs a bundle adjustment that merges the 3D point cloud maps so that the spatial coordinates of each feature point again form an optimal solution. Each terminal then receives the bundle-adjusted server space map, so that the plurality of terminals share the complete three-dimensional spatial information of the real space.
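The server-side merge can be illustrated with a compact bundle adjustment sketch: camera poses and 3D points are refined jointly so that total reprojection error is minimized, which is the sense of "optimal solution" above. The parameterization (rotation vectors, a single shared focal length f) and the use of scipy's generic least-squares solver are simplifying assumptions; a production system would exploit the problem's sparsity.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_2d, f):
    """Reprojection error for every observation (camera i sees point j at obs_2d)."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)      # [rotvec | t] per camera
    points = params[n_cams * 6:].reshape(n_pts, 3)
    R = Rotation.from_rotvec(poses[cam_idx, :3])
    X = R.apply(points[pt_idx]) + poses[cam_idx, 3:]    # world -> camera frame
    proj = f * X[:, :2] / X[:, 2:3]                     # pinhole projection
    return (proj - obs_2d).ravel()

def bundle_adjust(poses0, points0, cam_idx, pt_idx, obs_2d, f=800.0):
    """Refine stacked poses (n_cams, 6) and points (n_pts, 3) from observations."""
    x0 = np.hstack([poses0.ravel(), points0.ravel()])
    res = least_squares(residuals, x0, method="trf",
                        args=(len(poses0), len(points0), cam_idx, pt_idx, obs_2d, f))
    n = len(poses0) * 6
    return res.x[:n].reshape(-1, 6), res.x[n:].reshape(-1, 3)
```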
Through this series of steps, the three-dimensional spatial information generated by each terminal attains simultaneity, being shared by the plurality of terminals. In addition, to process user input on each of the terminals concurrently, the terminals maintain socket connections with the server and exchange user input information with one another, thereby sharing command execution points.
Specifically, the terminal that first connects to the server, or that creates a new sharing group, is assigned by the server as the master node; terminals that subsequently join the same sharing group are assigned as slave nodes. Each maintains its socket connection and exchanges commands.
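A minimal server-side sketch of this role assignment follows; the per-connection threads and the dictionary of sharing groups are bookkeeping assumptions the patent does not specify.

```python
import socket
import threading

groups: dict[str, list[socket.socket]] = {}   # sharing group -> member sockets
lock = threading.Lock()

def handle_terminal(conn: socket.socket) -> None:
    group = conn.recv(64).decode().strip()    # terminal announces its group
    with lock:
        members = groups.setdefault(group, [])
        role = "master" if not members else "slave"   # first in group is master
        members.append(conn)
    conn.sendall(role.encode())               # role stays fixed for the session
    # ... the socket then stays open for map sync and command exchange ...

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle_terminal, args=(conn,), daemon=True).start()
```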
A terminal assigned as the master node can add a new virtual object to be augmented in real space or change its three-dimensional spatial coordinates. To do so, the user touches an arbitrary point on the terminal's touch screen; the 2D coordinate is projected into three dimensions to obtain the ray through the camera center and the touched point, at least three feature points of the three-dimensional point cloud lying closest to this projection ray are found, and the average distance of these feature points is computed. The three-dimensional coordinate obtained in this way becomes the spatial coordinate at which the virtual object is output, and the spatial coordinate and object information are transmitted to every terminal connected to the server by socket, so that all terminals augment the same object at the same place in real space. This realizes the space map generation and command execution point reading method for sharing three-dimensional spatial information among a plurality of terminals that the present invention sets out to achieve.
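The touch-to-3D readout can be sketched as follows. Interpreting the averaging step as taking the mean along-ray depth of the k nearest map points is one reasonable reading of the description above, not a formula quoted from the patent.

```python
import numpy as np

def command_execution_point(u, v, K, R, t, map_points, k=3):
    """Convert a touch (u, v) into a 3D point in world coordinates."""
    cam_center = -R.T @ t                                  # camera origin, world frame
    ray = R.T @ np.linalg.solve(K, np.array([u, v, 1.0]))  # back-projected direction
    ray /= np.linalg.norm(ray)
    rel = map_points - cam_center                          # vectors to each map point
    along = rel @ ray                                      # depth of each point on the ray
    perp = np.linalg.norm(rel - np.outer(along, ray), axis=1)  # distance to the ray
    nearest = np.argsort(perp)[:k]                         # k closest feature points
    return cam_center + ray * along[nearest].mean()        # execution point on the ray
```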

Claims (4)

1. A method for generating three-dimensional spatial information in a portable terminal equipped with a camera and a gyro sensor, comprising the steps of: acquiring continuous color images of the three-dimensional space to be recognized and constructed as spatial information; acquiring and recording continuous gyro sensor information; extracting and matching features from the continuous color images; and correcting the matching information using the continuous gyro sensor information, wherein the three-dimensional spatial information obtained through these steps is shared by a plurality of portable terminals.
2. A method for a plurality of portable terminals to share the three-dimensional spatial information generated according to claim 1, comprising the steps of: transmitting the three-dimensional spatial information generated by two or more portable terminals to a server using socket communication; merging and registering, at the server, the spatial information received from each portable terminal; transmitting the registered spatial information from the server to each portable terminal using socket communication; and expanding the spatial information at each portable terminal by combining local spatial information with global spatial information.
3. A method for reading a command execution point in a portable terminal having a display for synthesizing and outputting real space and virtual objects and a touch screen for receiving a user's touch input, the method using the three-dimensional spatial information shared by the plurality of portable terminals according to claim 2 to obtain three-dimensional coordinates for augmenting a virtual object onto real space, characterized in that the user's touch input is projected onto the three-dimensional spatial information and tested for collision with the spatial information to obtain the three-dimensional coordinates.
4. A method for a plurality of portable terminals to simultaneously augment the same virtual object in the same real space, wherein the command execution point obtained according to claim 3 is transmitted to a server using socket communication, and the plurality of portable terminals receive the command execution point and augment the virtual object in synchronization.
PCT/KR2020/000608 2019-12-31 2020-01-13 Method for generating space map in order to share three-dimensional space information among plurality of terminals and reading command execution point WO2021137348A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0179781 2019-12-31
KR20190179781 2019-12-31

Publications (1)

Publication Number Publication Date
WO2021137348A1 true WO2021137348A1 (en) 2021-07-08

Family

ID=76686609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/000608 WO2021137348A1 (en) 2019-12-31 2020-01-13 Method for generating space map in order to share three-dimensional space information among plurality of terminals and reading command execution point

Country Status (1)

Country Link
WO (1) WO2021137348A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130136569A (en) * 2011-03-29 2013-12-12 퀄컴 인코포레이티드 System for the rendering of shared digital interfaces relative to each user's point of view
KR20160048874A (en) * 2013-08-30 2016-05-04 퀄컴 인코포레이티드 Method and apparatus for representing physical scene
US20160189432A1 (en) * 2010-11-18 2016-06-30 Microsoft Technology Licensing, Llc Automatic focus improvement for augmented reality displays
US20180045963A1 (en) * 2016-08-11 2018-02-15 Magic Leap, Inc. Automatic placement of a virtual object in a three-dimensional space
KR101989969B1 (en) * 2018-10-11 2019-06-19 대한민국 Contents experience system of architectural sites based augmented reality



Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 20909054; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number: 20909054; country of ref document: EP; kind code of ref document: A1)