WO2018174499A2 - Method for implementing augmented reality image by using virtual marker and vector - Google Patents
Method for implementing augmented reality image by using virtual marker and vector
- Publication number
- WO2018174499A2 WO2018174499A2 PCT/KR2018/003188 KR2018003188W WO2018174499A2 WO 2018174499 A2 WO2018174499 A2 WO 2018174499A2 KR 2018003188 W KR2018003188 W KR 2018003188W WO 2018174499 A2 WO2018174499 A2 WO 2018174499A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- augmented reality
- layer
- computing device
- marker
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
Definitions
- The present invention relates to a method for implementing an augmented reality image using vectors.
- Augmented reality refers to computer graphics technology that blends the real-world image a user sees with a virtual image into a single image. Augmented reality synthesizes an image of a virtual object, or related information, onto a specific object in the real-world image.
- Conventionally, physical marker images or location information are used to identify the object onto which a virtual image is synthesized.
- When a physical marker image is used, the camera of the computing device may fail to capture the marker image accurately because of the shaking of the user's hand, and the augmented reality image may therefore not be implemented precisely.
- When location information is used, the augmented reality image may fail to be implemented because the GPS location recognition of the computing device is limited or malfunctions under the influence of the surrounding environment.
- An object of the present invention is to provide a method and program for implementing an augmented reality image that prevent the augmented reality content from being displayed discontinuously, cutting in and out as the physical marker image shakes.
- A method of implementing an augmented reality image, implemented by a computing device, includes: obtaining a first layer indicating an image of the real world acquired by the computing device; identifying at least one object included in the first layer; determining a first marker image based on an image corresponding to the at least one object among pre-stored images; matching a position of the at least one object with the first marker image; generating a second layer based on the first marker image; generating an augmented reality image by combining the first layer and the second layer; and outputting the augmented reality image.
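The steps above can be illustrated with a minimal, non-limiting sketch; every name and data structure below is hypothetical, since the disclosure does not prescribe an API:

```python
# Every name below is hypothetical; the patent does not prescribe an API.

def identify_objects(first_layer):
    """Stand-in for contour-based object identification."""
    return first_layer["objects"]

def choose_first_marker(objects, stored_images):
    """Pick the stored image whose shape matches an identified object."""
    for obj in objects:
        if obj["shape"] in stored_images:
            return {"shape": obj["shape"], "pos": obj["pos"]}  # position matched
    return None

def composite(first_layer, second_layer):
    """Combine the real layer and the content layer into one AR image."""
    return {"real": first_layer["objects"], "virtual": second_layer}

first_layer = {"objects": [{"shape": "frame", "pos": (120, 40)}]}
stored_images = {"frame": "frame_outline.vec"}   # hypothetical stored image set

marker = choose_first_marker(identify_objects(first_layer), stored_images)
second_layer = {"content": "sparkle", "pos": marker["pos"]}  # AR content anchored to marker
ar_image = composite(first_layer, second_layer)
```

Because the content layer is anchored to the marker rather than to the raw camera frame, the pipeline already reflects the stabilization idea the claim describes.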
- According to the present invention, it is possible to prevent the augmented reality content from being displayed discontinuously and cut off as the marker image shakes in the augmented reality image.
- FIG. 1 is a schematic conceptual view illustrating an augmented reality image implementation method according to an embodiment of the present invention.
- FIG. 2 is a block diagram illustrating the inside of a terminal that provides augmented reality.
- FIG. 3 is a flowchart illustrating a first embodiment of a method for providing augmented reality.
- FIG. 4 is a flowchart illustrating a second embodiment of a method for providing augmented reality.
- A method of implementing an augmented reality image, implemented by a computing device, includes: obtaining a first layer indicating an image of the real world acquired by the computing device; identifying at least one object included in the first layer; determining a first marker image based on an image corresponding to the at least one object among pre-stored images; matching a position of the at least one object with the first marker image; generating a second layer based on the first marker image; generating an augmented reality image by combining the first layer and the second layer; and outputting the augmented reality image.
- The method may further include: providing a pre-stored image to the user; obtaining a user command including image information from the user; and determining a second marker image based on the image information.
- the second marker image may be further considered in generating the second layer.
- the pre-stored image may include an outline vector value.
- the user command may include contour vector information of an image to be used as the second marker image.
- the user command may include information of an inner point and an outer point of the image to be used as the second marker image.
- the first marker image may be generated transparently so that it can be recognized by the computing device but not by the user.
- the second layer may include augmented reality content corresponding to at least one of the first marker image and the second marker image, and the augmented reality content may mean a virtual image that appears in the augmented reality image.
- the object arrangement state of the first layer may be checked on a vector basis, and the form in which the augmented reality content is provided may be determined based on the object arrangement state.
- the present invention may include a computer-readable medium recording a program for executing the augmented reality image implementation method described above.
- the present invention may also include an application for a terminal device, stored in a medium, for executing the augmented reality image implementation method described above in combination with the computing device, which is hardware.
- An augmented reality image implementation method using vectors according to an embodiment of the present invention is realized by a computing device.
- the augmented reality image implementation method may be implemented as an application, stored in the computing device and performed by the computing device.
- the computing device may be provided as a mobile device such as a smart phone, a tablet, and the like, but is not limited thereto.
- a computing device may be provided with a camera and may process and store data. For example, the computing device may be provided as a wearable device such as glasses or a band equipped with a camera; any computing device not illustrated here may also be used.
- the computing device may communicate with other computing devices or servers via a network.
- the method of implementing augmented reality images may be realized by linking the computing device with another computing device or server.
- the computing device 100 captures a space 10 of a real world to obtain a real world image.
- a plurality of real objects 11, 12, 13, and 14 exist in real world space 10.
- the plurality of real objects 11, 12, 13, and 14 may include any object in two or three dimensions.
- the plurality of real objects 11, 12, 13, and 14 may have different or similar shapes.
- the computing device 100 may distinguish objects based on these morphological differences.
- the computing device 100 may identify the plurality of objects 21, 22, 23, 24 in the real world image.
- the computing device 100 may extract the outlines of the identified plurality of objects 21, 22, 23, and 24.
- the computing device 100 determines an object matching the pre-stored image among the plurality of objects 21, 22, 23, and 24 by using the vector value of the contour of the pre-stored image.
- the computing device 100 may store image samples corresponding to the plurality of objects 21, 22, 23, and 24 in advance. Data regarding the contour of the image sample corresponding to the plurality of objects 21, 22, 23, and 24 may be stored in advance.
- the computing device 100 may retrieve a previously stored image sample similar to the shape of the first object 21.
- the computing device 100 may use a pre-stored image sample as a marker image to be described later.
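One plausible reading of matching an identified object against pre-stored image samples "using the vector value of the contour" is to represent each outline as a sequence of edge vectors and compare them. The sketch below assumes that representation; it is illustrative, not the algorithm prescribed by the disclosure:

```python
import math

# Assumption: a contour is a closed polygon stored as a list of points,
# and its "vector value" is the sequence of edge vectors between points.

def edge_vectors(points):
    """Closed polygon -> list of (dx, dy) edge vectors."""
    n = len(points)
    return [(points[(i + 1) % n][0] - points[i][0],
             points[(i + 1) % n][1] - points[i][1]) for i in range(n)]

def outline_distance(a, b):
    """Sum of per-edge vector differences (0.0 for identical outlines)."""
    va, vb = edge_vectors(a), edge_vectors(b)
    if len(va) != len(vb):
        return float("inf")   # different vertex counts: treat as no match
    return sum(math.hypot(x1 - x2, y1 - y2)
               for (x1, y1), (x2, y2) in zip(va, vb))

extracted = [(0, 0), (2, 0), (2, 2), (0, 2)]    # outline found in the frame
stored = {"square": [(0, 0), (2, 0), (2, 2), (0, 2)],
          "triangle": [(0, 0), (2, 0), (1, 2)]}
best = min(stored, key=lambda k: outline_distance(extracted, stored[k]))
```

A production matcher would need normalization for scale, rotation, and starting vertex; this sketch only shows the vector-comparison core.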
- Types of marker images may include a first marker image and a second marker image.
- the first marker image may indicate a marker image obtained based on the first layer to be described later. That is, the first marker image may indicate a marker image determined based on the actual image, not determined by the user. For example, suppose the first layer reflecting the real image contains a calendar and a frame distinguished from the background; the first marker image may then be a transparent marker generated based on the outlines and shapes of the calendar and the frame in the first layer.
- the marker may play a role of generating augmented reality content later.
- the second marker image may indicate the marker image obtained based on the information input from the user.
- the user may arbitrarily set augmented reality content (stars, explosions, characters, etc.) to appear on the display screen.
- the second marker image may be used while the user sets the augmented reality content to appear.
- the second marker image may be a pre-stored transparent marker placed in the first layer based on the outline and shape of the augmented reality content (star, explosion, character, etc.).
- data relating to the contours of the plurality of objects 21, 22, 23, 24 may be provided in a three-dimensional type.
- Data relating to the image or contour of the plurality of objects 21, 22, 23, and 24 may be transmitted to and stored in the computing device 100 from another computing device or server.
- images of the plurality of objects 21, 22, 23, and 24 captured by the user may be stored in the computing device 100 in advance.
- the data about the contour of the extracted object may be stored in the form of a vector value, that is, as a vector image.
- the user may indicate a user who implements augmented reality through the computing device 100.
- an augmented reality image can be precisely implemented.
- even when the distance, direction, or position of an object relative to the computing device 100 changes, the object can be accurately identified within the real-world image by appropriately transforming the vector image of the object (that is, by responding to its various apparent forms).
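Because the stored outline is a vector image, adapting it to a new apparent size and position is inexpensive. A hedged sketch, where the scale and offset values are made-up examples:

```python
# Illustrative only: a stored vector outline adapted to a new apparent
# size and position. The scale and offset values are made-up examples.

def transform_outline(points, scale, offset):
    """Scale a polygonal outline about the origin, then translate it."""
    ox, oy = offset
    return [(x * scale + ox, y * scale + oy) for x, y in points]

outline = [(0, 0), (2, 0), (2, 2), (0, 2)]      # stored square outline
# The object now appears half as large and shifted to (10, 5):
adapted = transform_outline(outline, 0.5, (10, 5))
```

A fuller treatment would also handle rotation and perspective, but the same principle applies: the vector data is re-derived per frame rather than re-detected from pixels.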
- the computing device 100 determines an object 22, among the plurality of objects 21, 22, 23, and 24, that matches a pre-stored image, and synthesizes the virtual image 40 around the determined object 22 to generate an augmented reality image.
- the user may designate at least one area 31, 32 in the real world image.
- the computing device 100 may use the objects 22 and 24 in the areas 31 and 32 designated by the user as object candidates, and determine whether the corresponding objects 22 and 24 match pre-stored images. Alternatively, in substantially the same way, the user may designate at least one object 22, 24 in the real-world image as an object candidate.
- the computing device 100 may include an image acquisition unit 101, a sensor unit 102, an object recognition unit 103, a first layer generator 104, a user command input unit 105, a user command editing unit 106, a marker image generator 107, an image matching unit 108, a second layer generator 109, a second layer storage unit 110, an image synthesizer 111, a display controller 112, and a display 113.
- Each component may be controlled by a processor (not shown) included in the computing device 100.
- the image acquirer 101 may capture a real world image.
- the image acquirer 101 may acquire a real world image by photographing.
- the real world image may include a plurality of real objects 11, 12, 13, and 14.
- the plurality of real objects 11, 12, 13, and 14 may include any object in two or three dimensions.
- the plurality of real objects 11, 12, 13, and 14 may have different or similar shapes.
- the image acquisition unit 101 may be a camera or the like.
- the sensor unit 102 may be equipped with devices that support GPS.
- the sensor unit 102 may recognize a location of the captured image, a direction in which the computing device 100 photographs, a moving speed of the computing device 100, and the like.
- the object recognizer 103 may recognize the plurality of real objects 11, 12, 13, and 14 based on the outlines of the plurality of real objects 11, 12, 13, and 14 included in the real-world image.
- the object recognizer 103 recognizes the plurality of real objects 11, 12, 13, and 14 based on their contours, and a plurality of objects 21, 22, 23, and 24 corresponding to the real objects 11, 12, 13, and 14 may be created in the computing device 100.
- the first layer generator 104 may generate a first layer that indicates a real image corresponding to the real world image.
- the augmented reality image may be implemented through the synthesis of the real image and the virtual image.
- the first layer generator 104 may generate a real image based on the real world image photographed by the image acquirer 101.
- the user command input unit 105 may receive, from a user of the computing device 100, a command to output another object distinguished from the plurality of objects 21, 22, 23, and 24. For example, the user may see the plurality of objects 21, 22, 23, and 24 recognized by the computing device 100. If the user wants to replace the first object 21 with another object, the user may input a command to the computing device 100 requesting that the first object 21 be replaced with another pre-stored object. Alternatively, the user may input a command to the computing device 100 requesting that the first object 21 be replaced with an object that the user directly inputs (or draws) on the computing device 100.
- the user command may include information about an inner point and an outer point of the image to be used as the marker image.
- the user command editing unit 106 may edit at least one of the plurality of objects 21, 22, 23, and 24 based on the user command obtained from the user command input unit 105.
- for example, the user command editing unit 106 may edit the first object 21 so that it is changed to another pre-stored object.
- the marker image generator 107 may generate a marker image based on the plurality of objects 21, 22, 23, and 24.
- the marker image may be an image for generating augmented reality content.
- for example, suppose the computing device 100 provides an augmented reality image in which a stone included in the real image is turned to gold. If the second object 22 is the stone, the marker image generator 107 may generate a marker image capable of generating the gold content based on the vector value of the second object 22.
- the marker image may be recognized by the computing device 100.
- the marker image may be generated transparently so as not to be recognized by the user.
- the image matching unit 108 may match the positions of the generated marker images with the positions of the corresponding objects 21, 22, 23, and 24.
- when the positions of the plurality of objects 21, 22, 23, and 24 change in real time, the image matching unit 108 may move the marker images so that they continue to correspond to the objects.
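The real-time re-anchoring performed by the image matching unit can be sketched as follows; the marker structure and the simulated positions are hypothetical:

```python
# Hypothetical marker structure; only the re-anchoring logic is shown.

def update_marker(marker, object_pos):
    """Return a copy of the marker re-anchored to the object's position."""
    updated = dict(marker)
    updated["pos"] = object_pos
    return updated

marker = {"id": "calendar", "pos": (100, 40)}    # transparent first marker
# Simulated per-frame object positions under slight hand shake:
for frame_pos in [(102, 41), (99, 39), (101, 40)]:
    marker = update_marker(marker, frame_pos)
```

The AR content attached to the marker then inherits this corrected position each frame, which is how the content stays continuous despite shake.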
- the second layer generator 109 may recognize marker images of the generated plurality of objects 21, 22, 23, and 24.
- the second layer generator 109 may generate a second layer in which augmented reality content corresponding to each position of the marker images of the plurality of objects 21, 22, 23, and 24 generated is combined. Augmented reality content may be identified by the user.
- the second layer storage unit 110 may store the second layer generated from the second layer generator 109.
- the second layer generated based on the marker image may provide the user with a continuous screen that is not broken even when the positions of the plurality of objects 21, 22, 23, and 24 change in real time.
- the image synthesizer 111 may generate an augmented reality image by combining the first layer and the second layer. That is, the augmented reality image may be an image including augmented reality content in a real world image. For example, if there are stones in the real world image acquired through the computing device 100, the image synthesizing unit 111 may generate an image in which only the stones are displayed in gold.
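Layer synthesis can be illustrated with a per-pixel "over" composite, shown here for a single pixel; the grey-stone and gold colour values are illustrative:

```python
# Single-pixel "over" composite; a stand-in for synthesizing the first
# (real) and second (content) layers. Colour values are illustrative.

def composite_pixel(real_px, content_px, alpha):
    """Blend a content pixel over a real pixel with the given opacity."""
    return tuple(round(alpha * c + (1 - alpha) * r)
                 for r, c in zip(real_px, content_px))

stone = (90, 90, 90)     # grey stone pixel in the first layer
gold = (255, 215, 0)     # gold content pixel in the second layer
covered = composite_pixel(stone, gold, 1.0)   # content fully opaque
bare = composite_pixel(stone, gold, 0.0)      # content fully transparent
```

Pixels outside the content region keep alpha 0, which is consistent with the second layer only altering the stone in the gold-stone example.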
- the display controller 112 may control the display 113 to output an augmented reality image.
- the display unit 113 may output the augmented reality visual screen.
- the computing device 100 may generate a first layer based on the real world image (S310).
- the computing device 100 may identify at least one object in the first layer (S320).
- the computing device 100 may extract the color, resolution, and vector value of the contour of the first layer.
- the computing device 100 may identify at least one object in the first layer based on the color, resolution, vector value of the contour, etc. of the first layer.
- the detailed object identification process may be as follows.
- the computing device 100 may segment the image based on the resolution of the first layer.
- the computing device 100 may classify the divided images by region. When the number of divided images is greater than a preset number, the computing device 100 may merge neighboring regions by adjusting the resolution. For example, the computing device 100 may lower the resolution of the first layer so that the first layer is divided into a smaller number of images.
- the computing device 100 may extract an object that can be independently recognized among the divided images.
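The resolution-driven split-and-merge described in the steps above can be illustrated in one dimension: segment a row of pixels into runs of equal quantized intensity, where coarser quantization plays the role of "lowering the resolution" so neighbouring regions merge. This is one interpretation, not the procedure specified by the disclosure:

```python
# Illustrative interpretation; quantization stands in for "resolution".

def segment(pixels, levels):
    """Split a pixel row into runs of equal quantized intensity."""
    q = [p * levels // 256 for p in pixels]
    runs, start = [], 0
    for i in range(1, len(q) + 1):
        if i == len(q) or q[i] != q[start]:
            runs.append((start, i))
            start = i
    return runs

def segment_until(pixels, max_regions):
    """Lower the 'resolution' until the region count is small enough."""
    levels = 256
    runs = segment(pixels, levels)
    while len(runs) > max_regions and levels > 2:
        levels //= 2
        runs = segment(pixels, levels)
    return runs

row = [10, 12, 11, 200, 205, 198, 90, 92]
fine = segment(row, 256)        # every distinct intensity is its own region
coarse = segment_until(row, 4)  # merged down to at most 4 regions
```

In two dimensions the same loop would run over a region-adjacency structure, with independently recognizable regions extracted as object candidates.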
- the computing device 100 may determine the first marker image based on the image corresponding to the identified object among the pre-stored images (S330).
- the computing device 100 may match the position of the object included in the first marker image and the first layer (S340).
- the computing device 100 may generate a second layer including augmented reality content based on the first marker image (S350).
- the computing device 100 may generate an augmented reality image through the synthesis of the first layer and the second layer. Since the augmented reality content is generated based on the first marker image formed from the first layer, rather than on the first layer itself, an augmented reality image can be generated whose content does not break up even when the first layer shakes due to hand shake. If the position or viewing angle of the first layer changes, the computing device 100 may compensate the vector values of the stored first marker image for the change in position or viewing angle, using the vector value of an object in the first layer, the position vector value of the object in the real world, the normal vector value of the object, and the like.
- for example, the computing device 100 may correct the vector value of the first marker image by using the vector value of the first marker image corresponding to the frame, the position vector value of the real-world frame, and the normal vector value of the frame.
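One way the normal vector of a planar object such as the frame could enter this compensation is through foreshortening: the apparent width of the object shrinks with the angle between the viewing direction and the object's normal. The sketch below is purely illustrative of that geometric idea:

```python
import math

# Purely illustrative: approximate the apparent width of a planar object
# from the angle between the viewing direction and the object's normal.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def foreshorten_factor(view_dir, normal):
    """|cos(theta)| between the viewing direction and the object normal."""
    return abs(dot(view_dir, normal)) / (norm(view_dir) * norm(normal))

def compensate_width(true_width, view_dir, normal):
    """Apparent width of a planar object under the given viewing angle."""
    return true_width * foreshorten_factor(view_dir, normal)

normal = (0, 0, 1)                                   # frame facing the camera
head_on = compensate_width(100, (0, 0, -1), normal)  # viewed straight on
oblique = compensate_width(100, (1, 0, -1), normal)  # viewed at 45 degrees
```

Applying the resulting factor to the stored marker's vector values would keep the transparent marker registered to the frame as the viewing angle changes.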
- the computing device 100 may visually output the augmented reality image through the display 113 (S360).
- the computing device 100 may generate a first layer based on the real world image (S310).
- the computing device 100 may provide the user with at least one object or at least one image stored in advance (S311).
- the computing device 100 may provide the user with at least one object (or image) stored in advance at the request of the user.
- the computing device 100 may automatically provide the user with at least one object (image) stored in advance even without a user's request.
- a user who has identified at least one pre-stored object (or image) through the computing device 100 may input a command to the computing device 100 requesting that at least one object obtained from the real-world image be replaced with another pre-stored object.
- a user may directly enter (or draw) an object into computing device 100.
- the computing device 100 may obtain a command from the user requesting to replace at least one object obtained from the real world image with another object stored in advance (S312). Alternatively, the computing device 100 may obtain a command for requesting the user to replace at least one object obtained from the real world image with an object input (or drawn) directly to the computing device 100.
- the computing device 100 may determine a second marker image among pre-stored images based on the command (S313).
- the computing device 100 may identify at least one object in the first layer (S320).
- the computing device 100 may extract the color, resolution, and vector value of the contour of the first layer.
- the computing device 100 may identify at least one object in the first layer based on the color, resolution, vector value of the contour, etc. of the first layer.
- the detailed object identification process may be as follows.
- the computing device 100 may segment the image based on the resolution of the first layer.
- the computing device 100 may classify the divided images by region. When the number of divided images is greater than a preset number, the computing device 100 may merge neighboring regions by adjusting the resolution. For example, the computing device 100 may lower the resolution of the first layer so that the first layer is divided into a smaller number of images.
- the computing device 100 may extract an object that can be independently recognized among the divided images.
- the computing device 100 may determine the first marker image based on the image corresponding to the identified object among the pre-stored images (S330).
- the computing device 100 may match the position of the object included in the first marker image and the first layer (S340).
- the computing device 100 may generate a second layer including augmented reality content based on at least one of the first marker image and the second marker image (S351).
- the computing device 100 may generate an augmented reality image through the synthesis of the first layer and the second layer. Since the augmented reality content is generated based on at least one of the first marker image formed from the first layer and the second marker image formed from a user command, rather than on the first layer itself, an augmented reality image can be generated whose content does not break up even when the first layer shakes due to hand shake. If the position or viewing angle of the first layer changes, the computing device 100 may compensate the vector values of the stored first marker image or second marker image for the change in position or viewing angle, using the vector value of an object in the first layer, the position vector value of the object in the real world, the normal vector value of the object, and the like.
- for example, the computing device 100 may correct the vector value of the first marker image by using the vector value of the first marker image corresponding to the frame, the position vector value of the real-world frame, and the normal vector value of the frame.
- the computing device 100 may generate an augmented reality image through the synthesis of the first layer and the second layer.
- the computing device 100 may visually output the augmented reality image through the display 113 (S360).
- the augmented reality image implementation method according to an embodiment of the present invention described above may be implemented as a program (or an application) and stored in a medium to be executed in combination with a computer which is hardware.
- for the computer to read the program and execute the methods implemented as the program, the above-described program may include code written in a computer language, such as C, C++, JAVA, or machine language, that the computer's processor (CPU) can read through the computer's device interface. Such code may include functional code defining the functions necessary to execute the methods, and may include control code related to the execution procedures by which the computer's processor executes those functions according to a predetermined procedure.
- the code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the processor to execute the functions should be referenced.
- the code may further include communication-related code indicating how the processor should use the computer's communication module to communicate with a remote computer or server, and what information or media should be transmitted and received during communication.
- the storage medium is not a medium that stores data for a short time, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device.
- examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored in various recording media on various servers accessible to the computer, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.
Claims (10)
- An augmented reality image implementation method realized by a computing device, the method comprising: obtaining a first layer indicating an image of the real world acquired by the computing device; identifying at least one object included in the first layer; determining a first marker image based on an image corresponding to the at least one object among pre-stored images; matching a position of the at least one object with the first marker image; generating a second layer based on the first marker image; generating an augmented reality image through the synthesis of the first layer and the second layer; and outputting the augmented reality image.
- The method of claim 1, further comprising: providing a pre-stored image to a user; obtaining a user command including image information from the user; and determining a second marker image based on the image information, wherein the second marker image is further considered in generating the second layer.
- The method of claim 1, wherein the pre-stored image includes a contour vector value.
- The method of claim 2, wherein the user command includes contour vector information of an image to be used as the second marker image.
- The method of claim 2, wherein the user command includes information of an inner point and an outer point of the image to be used as the second marker image.
- The method of claim 1, wherein the first marker image is generated transparently so as to be recognized by the computing device but not by the user.
- The method of claim 2, wherein the second layer includes augmented reality content corresponding to at least one of the first marker image and the second marker image, and the augmented reality content means a virtual image appearing in the augmented reality image.
- The method of claim 7, wherein an object arrangement state of the first layer is checked on a vector basis, and a form of providing the augmented reality content is determined based on the object arrangement state.
- A computer-readable medium having recorded thereon a program for executing the augmented reality image implementation method according to any one of claims 1 to 8.
- An application for a terminal device, stored in a medium and combined with the computing device, which is hardware, to execute the augmented reality image implementation method according to any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020501108A JP2020514937A (en) | 2017-03-20 | 2018-03-19 | Realization method of augmented reality image using vector |
US16/551,039 US20190378339A1 (en) | 2017-03-20 | 2019-08-26 | Method for implementing augmented reality image using vector |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20170034397 | 2017-03-20 | ||
KR10-2017-0034397 | 2017-03-20 | ||
KR10-2017-0102891 | 2017-08-14 | ||
KR20170102891 | 2017-08-14 | ||
KR10-2017-0115841 | 2017-09-11 | ||
KR1020170115841A KR102000960B1 (en) | 2017-03-20 | 2017-09-11 | Method for implementing augmented reality image using vector |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/551,039 Continuation US20190378339A1 (en) | 2017-03-20 | 2019-08-26 | Method for implementing augmented reality image using vector |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2018174499A2 true WO2018174499A2 (en) | 2018-09-27 |
WO2018174499A3 WO2018174499A3 (en) | 2018-11-08 |
Family
ID=63585569
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/003188 WO2018174499A2 (en) | 2017-03-20 | 2018-03-19 | Method for implementing augmented reality image by using virtual marker and vector |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018174499A2 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060021001A (en) * | 2004-09-02 | 2006-03-07 | (주)제니텀 엔터테인먼트 컴퓨팅 | Implementation of marker-less augmented reality and mixed reality system using object detecting method |
KR101227237B1 (en) * | 2010-03-17 | 2013-01-28 | 에스케이플래닛 주식회사 | Augmented reality system and method for realizing interaction between virtual object using the plural marker |
KR101216222B1 (en) * | 2011-01-14 | 2012-12-28 | 에스케이플래닛 주식회사 | System and method for providing augmented reality service |
KR102167273B1 (en) * | 2013-06-25 | 2020-10-20 | 한양대학교 산학협력단 | Method, apparatus and computer program product for augmenting real object |
KR102161510B1 (en) * | 2013-09-02 | 2020-10-05 | 엘지전자 주식회사 | Portable device and controlling method thereof |
- 2018-03-19 WO PCT/KR2018/003188 patent/WO2018174499A2/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2018174499A3 (en) | 2018-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019216491A1 (en) | A method of analyzing objects in images recorded by a camera of a head mounted device | |
CN109584295B (en) | Method, device and system for automatically labeling target object in image | |
JP4878083B2 (en) | Image composition apparatus and method, and program | |
WO2017026839A1 (en) | 3d face model obtaining method and device using portable camera | |
WO2019050360A1 (en) | Electronic device and method for automatic human segmentation in image | |
WO2012091326A2 (en) | Three-dimensional real-time street view system using distinct identification information | |
KR102000960B1 (en) | Method for implementing augmented reality image using vector | |
WO2021045552A1 (en) | Electronic device for image synthesis and operating method thereof | |
US20220358662A1 (en) | Image generation method and device | |
WO2021101097A1 (en) | Multi-task fusion neural network architecture | |
WO2013025011A1 (en) | Method and system for body tracking for recognizing gestures in a space | |
WO2019156543A2 (en) | Method for determining representative image of video, and electronic device for processing method | |
WO2021215800A1 (en) | Surgical skill training system and machine learning-based surgical guide system using three-dimensional imaging | |
CN112749613A (en) | Video data processing method and device, computer equipment and storage medium | |
JP2019201397A (en) | Imaging apparatus and program | |
WO2018174499A2 (en) | Method for implementing augmented reality image by using virtual marker and vector | |
WO2019004754A1 (en) | Augmented reality advertisements on objects | |
WO2024111728A1 (en) | User emotion interaction method and system for extended reality based on non-verbal elements | |
WO2017026834A1 (en) | Responsive video generation method and generation program | |
WO2020045909A1 (en) | Apparatus and method for user interface framework for multi-selection and operation of non-consecutive segmented information | |
EP4333420A1 (en) | Information processing device, information processing method, and program | |
WO2023149603A1 (en) | Thermal-image-monitoring system using plurality of cameras | |
CN207301480U (en) | A kind of dual-purpose telescopic system with bi-locating function | |
KR102339825B1 (en) | Device for situation awareness and method for stitching image thereof | |
CN111242107B (en) | Method and electronic device for setting virtual object in space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18771846; Country of ref document: EP; Kind code of ref document: A2 |
| ENP | Entry into the national phase | Ref document number: 2020501108; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18771846; Country of ref document: EP; Kind code of ref document: A2 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.12.2019) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18771846; Country of ref document: EP; Kind code of ref document: A2 |