WO2018174499A2 - Method for implementing an augmented reality image using a virtual marker and a vector - Google Patents
Method for implementing an augmented reality image using a virtual marker and a vector
- Publication number
- WO2018174499A2 (PCT/KR2018/003188)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- augmented reality
- layer
- computing device
- marker
- Prior art date
Links
Images
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/10—Segmentation; Edge detection
          - G06T7/11—Region-based segmentation
          - G06T7/13—Edge detection
        - G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
      - G06T19/00—Manipulating 3D models or images for computer graphics
        - G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V10/00—Arrangements for image or video recognition or understanding
        - G06V10/40—Extraction of image or video features
          - G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
Definitions
- The present invention relates to a method for implementing an augmented reality image using vectors.
- Augmented reality refers to computer graphics technology that blends the real-world image a user sees and a virtual image into a single image. Augmented reality synthesizes an image of a virtual object, or of related information, onto a specific object in the real-world image.
- Conventionally, physical marker images or location information are used to identify the object onto which a virtual image is synthesized.
- With physical markers, the camera of the computing device may fail to capture the marker image accurately because of the shaking of the user's hand, so the augmented reality image may not be implemented precisely.
- With location information, the augmented reality image may fail to be implemented when the GPS location recognition of the computing device is limited or malfunctions under the influence of the surrounding environment.
- An object of the present invention is to provide a method and program for implementing an augmented reality image that prevent the augmented reality content from being cut off or displayed discontinuously when the physical marker image shakes.
- A method of implementing an augmented reality image, performed by a computing device, includes: obtaining a first layer representing an image of the real world acquired by the computing device; identifying at least one object included in the first layer; determining a first marker image based on an image corresponding to the at least one object among pre-stored images; matching a position of the at least one object with the first marker image; generating a second layer based on the first marker image; generating an augmented reality image by combining the first layer and the second layer; and outputting the augmented reality image.
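- As a concrete illustration of these steps, the following is a minimal end-to-end sketch in Python with OpenCV; the function names and the contour-matching and compositing choices are assumptions made for illustration, not the patent's actual implementation.

```python
import cv2
import numpy as np

def identify_objects(first_layer):
    # Identify candidate objects in the first layer via edge detection + contours.
    gray = cv2.cvtColor(first_layer, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

def determine_first_marker(contours, stored_contours):
    # Pick the (object, stored image) pair whose shapes match best (Hu moments).
    _, obj, marker = min(
        ((cv2.matchShapes(c, s, cv2.CONTOURS_MATCH_I1, 0.0), c, s)
         for c in contours for s in stored_contours),
        key=lambda t: t[0])
    return obj, marker

def implement_ar_image(frame, stored_contours):
    first_layer = frame                                   # S310: first layer = real-world image
    contours = identify_objects(first_layer)              # S320: identify objects
    obj, _marker = determine_first_marker(contours, stored_contours)  # S330/S340
    second_layer = np.zeros_like(first_layer)             # S350: second layer with AR content
    cv2.drawContours(second_layer, [obj], -1, (0, 215, 255), thickness=-1)
    mask = second_layer.any(axis=2)
    ar_image = first_layer.copy()                         # combine the two layers
    ar_image[mask] = second_layer[mask]
    return ar_image                                       # S360: output
```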
- According to the present invention, it is possible to solve the problem that the augmented reality content is not displayed continuously and is cut off as the marker image shakes in the augmented reality image.
- FIG. 1 is a schematic conceptual view illustrating an augmented reality image implementation method according to an embodiment of the present invention.
- FIG. 2 is a block diagram illustrating the internal configuration of a terminal that provides augmented reality.
- FIG. 3 is a flowchart illustrating a first embodiment of a method for providing augmented reality.
- FIG. 4 is a flowchart illustrating a second embodiment of a method for providing augmented reality.
- A method of implementing an augmented reality image, performed by a computing device, includes: obtaining a first layer representing an image of the real world acquired by the computing device; identifying at least one object included in the first layer; determining a first marker image based on an image corresponding to the at least one object among pre-stored images; matching a position of the at least one object with the first marker image; generating a second layer based on the first marker image; generating an augmented reality image by combining the first layer and the second layer; and outputting the augmented reality image.
- The method may further include: providing a pre-stored image to the user; obtaining from the user a user command including image information; and determining a second marker image based on the image information.
- The second marker image may additionally be considered in generating the second layer.
- The pre-stored image may include an outline vector value.
- The user command may include contour vector information of the image to be used as the second marker image.
- The user command may include information on an inner point and an outer point of the image to be used as the second marker image.
- The first marker image may be generated transparently so that it is recognized by the computing device but not by the user.
- The second layer may include augmented reality content corresponding to at least one of the first marker image and the second marker image; the augmented reality content means a virtual image that appears in the augmented reality image.
- The object arrangement state of the first layer may be checked based on vectors, and the form in which the augmented reality content is provided may be determined based on the object arrangement state.
- The present invention may include a computer-readable medium recording a program for executing the augmented reality image implementation method described above.
- The present invention may also include an application for a terminal device, stored in a medium, for executing the augmented reality image implementation method in combination with the computing device, which is hardware.
- An augmented reality image implementation method using vectors according to an embodiment of the present invention is realized by a computing device.
- The augmented reality image implementation method may be implemented as an application, stored in the computing device, and performed by the computing device.
- The computing device may be a mobile device such as a smartphone or a tablet, but is not limited thereto.
- The computing device may be provided with a camera and may process and store data. For example, the computing device may be a wearable device such as glasses or a band equipped with a camera. Any computing device not illustrated here may also be used.
- The computing device may communicate with other computing devices or servers via a network.
- The method of implementing augmented reality images may also be realized by the computing device in cooperation with another computing device or server.
- The computing device 100 captures a real-world space 10 to obtain a real-world image.
- A plurality of real objects 11, 12, 13, and 14 exist in the real-world space 10.
- The plurality of real objects 11, 12, 13, and 14 may include any two- or three-dimensional object.
- The plurality of real objects 11, 12, 13, and 14 may have different or similar shapes.
- The computing device 100 may distinguish the objects based on these morphological differences.
- The computing device 100 may identify a plurality of objects 21, 22, 23, and 24 in the real-world image.
- The computing device 100 may extract the outlines of the identified objects 21, 22, 23, and 24.
- The computing device 100 determines which of the plurality of objects 21, 22, 23, and 24 matches a pre-stored image by using the vector values of the contour of the pre-stored image.
- The computing device 100 may store image samples corresponding to the plurality of objects 21, 22, 23, and 24 in advance. Data regarding the contours of the image samples corresponding to the objects 21, 22, 23, and 24 may also be stored in advance.
- The computing device 100 may retrieve a previously stored image sample similar in shape to the first object 21.
- The computing device 100 may use a pre-stored image sample as a marker image, described below.
- Types of marker images may include a first marker image and a second marker image.
- The first marker image may indicate a marker image obtained based on the first layer, described below. That is, the first marker image is determined based on the actual image rather than by the user. For example, suppose the first layer reflecting the real image contains a calendar and a picture frame distinguished from the background; the first marker image may then be a transparent marker generated based on the outlines and shapes of the calendar and the frame in the first layer.
- The marker may later play a role in generating augmented reality content.
- The second marker image may indicate a marker image obtained based on information input by the user.
- The user may arbitrarily set augmented reality content (stars, explosions, characters, etc.) to appear on the display screen.
- The second marker image may be used when the user sets which augmented reality content should appear.
- The second marker image may be a pre-stored transparent marker applied to the first layer based on the outline and shape of the augmented reality content (star, explosion, character, etc.).
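- The sketch below illustrates one way such a transparent marker could be represented, assuming a BGRA canvas whose color channels carry the marker shape for the device while the alpha channel stays zero so the user never sees it; this representation is an assumption, not something the patent specifies.

```python
import cv2
import numpy as np

def make_transparent_marker(first_layer_bgr, object_contour):
    # BGRA canvas the size of the first layer; alpha stays 0 everywhere,
    # so compositing it onto the display changes nothing visible.
    h, w = first_layer_bgr.shape[:2]
    marker = np.zeros((h, w, 4), dtype=np.uint8)
    # The color channels trace the object's outline so the device can
    # still locate and track the marker internally.
    cv2.drawContours(marker, [object_contour], -1, (255, 255, 255, 0), thickness=2)
    return marker
```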
- Data relating to the contours of the plurality of objects 21, 22, 23, and 24 may be provided in three-dimensional form.
- Data relating to the images or contours of the plurality of objects 21, 22, 23, and 24 may be transmitted to and stored in the computing device 100 from another computing device or server.
- Images of the plurality of objects 21, 22, 23, and 24 captured by the user may be stored in the computing device 100 in advance.
- The data about the contour of an extracted object may be stored in the form of vector values, that is, as a vector image.
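- A minimal sketch of turning a pixel contour into such vector values, using polygon simplification; the epsilon ratio and function name are illustrative assumptions.

```python
import cv2
import numpy as np

def contour_to_vector(contour, epsilon_ratio=0.01):
    # Simplify the pixel contour to a compact polygon: the object's
    # "vector image", stored as an (N, 2) array of vertex coordinates.
    epsilon = epsilon_ratio * cv2.arcLength(contour, closed=True)
    polygon = cv2.approxPolyDP(contour, epsilon, closed=True)
    return polygon.reshape(-1, 2).astype(np.float32)
```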
- The user may refer to a person who implements augmented reality through the computing device 100.
- In this way, an augmented reality image can be implemented precisely.
- Even when the distance, direction, or position of an object relative to the computing device 100 changes, objects can be accurately identified within the real-world image by appropriately transforming the vector image of the object (that is, by responding to its various forms).
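- For instance, such a transformation could be a similarity transform of the stored vector image, as in this sketch (the decomposition into scale, rotation, and offset is an illustrative assumption):

```python
import numpy as np

def transform_vector_image(vertices, scale=1.0, angle_rad=0.0, offset=(0.0, 0.0)):
    # Rotate, scale, and translate the (N, 2) vertex array so the stored
    # vector image can be re-matched after the object moves relative
    # to the camera.
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]], dtype=np.float32)
    return (vertices @ rot.T) * scale + np.asarray(offset, dtype=np.float32)
```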
- The computing device 100 determines an object 22 that matches a pre-stored image among the plurality of objects 21, 22, 23, and 24, and synthesizes a virtual image 40 around the determined object 22 to produce the augmented reality image.
- The user may designate at least one area 31, 32 in the real-world image.
- The computing device 100 may treat the objects 22 and 24 in the areas 31 and 32 designated by the user as object candidates and determine whether those objects match pre-stored images. In substantially the same way, the user may instead directly designate at least one object 22, 24 in the real-world image as an object candidate.
- The computing device 100 may include an image acquisition unit 101, a sensor unit 102, an object recognition unit 103, a first layer generator 104, a user command input unit 105, a user command editing unit 106, a marker image generator 107, an image matching unit 108, a second layer generator 109, a second layer storage unit 110, an image synthesizer 111, a display controller 112, and a display 113.
- Each component may be controlled by a processor (not shown) included in the computing device 100.
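- The skeleton below groups these units and makes the per-frame data flow explicit; the class name, method split, and call order are one illustrative reading of the block diagram, not code from the patent.

```python
class ARTerminal:
    # Each method stands in for one unit of the block diagram;
    # a processor would drive them in roughly this order per frame.
    def acquire_image(self): ...                  # image acquisition unit 101 (camera)
    def read_sensors(self): ...                   # sensor unit 102 (GPS, heading, speed)
    def recognize_objects(self, frame): ...       # object recognition unit 103
    def build_first_layer(self, frame): ...       # first layer generator 104
    def apply_user_command(self, objs): ...       # user command input/editing units 105-106
    def generate_markers(self, objs): ...         # marker image generator 107
    def match_markers(self, markers, objs): ...   # image matching unit 108
    def build_second_layer(self, markers): ...    # second layer generator/storage 109-110
    def synthesize(self, first, second): ...      # image synthesizer 111
    def show(self, ar_image): ...                 # display controller 112 and display 113
```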
- The image acquisition unit 101 may capture a real-world image; that is, it may acquire the real-world image by photographing.
- The real-world image may include a plurality of real objects 11, 12, 13, and 14.
- The plurality of real objects 11, 12, 13, and 14 may include any two- or three-dimensional object.
- The plurality of real objects 11, 12, 13, and 14 may have different or similar shapes.
- The image acquisition unit 101 may be a camera or the like.
- The sensor unit 102 may include devices supporting GPS.
- The sensor unit 102 may recognize the location at which an image is captured, the direction in which the computing device 100 is photographing, the moving speed of the computing device 100, and the like.
- The object recognition unit 103 may recognize the plurality of real objects 11, 12, 13, and 14 based on their outlines in the real-world image.
- Recognizing the plurality of real objects 11, 12, 13, and 14 based on their contours, the object recognition unit 103 may create in the computing device 100 a plurality of objects 21, 22, 23, and 24 corresponding to the real objects 11, 12, 13, and 14.
- The first layer generator 104 may generate a first layer representing a real image corresponding to the real-world image.
- The augmented reality image may be implemented by synthesizing the real image and a virtual image.
- The first layer generator 104 may generate the real image based on the real-world image captured by the image acquisition unit 101.
- The user command input unit 105 may receive, from a user of the computing device 100, a command to output another object distinguished from the plurality of objects 21, 22, 23, and 24. For example, the user may view the plurality of objects 21, 22, 23, and 24 recognized by the computing device 100. If the user wants to replace the first object 21 with another object, the user may input a command requesting that the first object 21 be replaced with another previously stored object. Alternatively, the user may input a command requesting that the first object 21 be replaced with an object that the user directly inputs (or draws) on the computing device 100.
- The user command may include information about an inner point and an outer point of the image to be used as the marker image.
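- One plausible reading, shown below, is that the inner and outer points delimit the marker region: a candidate region is accepted only if the inner point falls inside its contour and the outer point outside. This interpretation and the helper name are assumptions for illustration.

```python
import cv2
import numpy as np

def points_consistent_with_marker(marker_polygon, inner_pt, outer_pt):
    # pointPolygonTest returns > 0 inside, < 0 outside, 0 on the edge;
    # points are passed as (float, float) tuples.
    poly = marker_polygon.reshape(-1, 1, 2).astype(np.float32)
    inside = cv2.pointPolygonTest(poly, inner_pt, measureDist=False) > 0
    outside = cv2.pointPolygonTest(poly, outer_pt, measureDist=False) < 0
    return inside and outside
```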
- The user command editing unit 106 may edit at least one of the plurality of objects 21, 22, 23, and 24 based on the user command obtained from the user command input unit 105.
- For example, the user command editing unit 106 may edit the first object 21 so that it is changed to another previously stored object.
- The marker image generator 107 may generate marker images based on the plurality of objects 21, 22, 23, and 24.
- A marker image may be an image used to generate augmented reality content.
- For example, suppose the computing device 100 provides an augmented reality image in which a stone included in the real image is turned golden. If the second object 22 is the stone, the marker image generator 107 may generate, based on the vector values of the second object 22, a marker image capable of generating the gold effect.
- The marker image may be recognized by the computing device 100.
- The marker image may be generated transparently so as not to be recognized by the user.
- The image matching unit 108 may match the positions of the generated marker images with the positions of the corresponding objects 21, 22, 23, and 24.
- When the positions of the plurality of objects 21, 22, 23, and 24 change in real time, the image matching unit 108 may move the marker images so that they continue to correspond to the objects.
- The second layer generator 109 may recognize the generated marker images of the plurality of objects 21, 22, 23, and 24.
- The second layer generator 109 may generate a second layer in which augmented reality content is combined at the position of each generated marker image of the plurality of objects 21, 22, 23, and 24. The augmented reality content can be seen by the user.
- The second layer storage unit 110 may store the second layer generated by the second layer generator 109.
- The second layer generated based on the marker images may provide the user with a continuous, unbroken screen even when the positions of the plurality of objects 21, 22, 23, and 24 change in real time.
- The image synthesizer 111 may generate an augmented reality image by combining the first layer and the second layer. That is, the augmented reality image may be an image in which augmented reality content is included in the real-world image. For example, if there is a stone in the real-world image acquired through the computing device 100, the image synthesizer 111 may generate an image in which only the stone is displayed in gold.
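- Combining the layers amounts to alpha-blending the second layer, whose background is transparent, over the first; a minimal sketch follows (the BGRA layout and helper name are assumptions):

```python
import numpy as np

def composite_layers(first_layer_bgr, second_layer_bgra):
    # Blend AR content onto the real-world image using the second
    # layer's alpha channel; alpha 0 leaves the first layer untouched.
    alpha = second_layer_bgra[..., 3:4].astype(np.float32) / 255.0
    content = second_layer_bgra[..., :3].astype(np.float32)
    base = first_layer_bgr.astype(np.float32)
    return (content * alpha + base * (1.0 - alpha)).astype(np.uint8)
```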
- The display controller 112 may control the display 113 to output the augmented reality image.
- The display 113 may output the augmented reality image visually.
- The computing device 100 may generate a first layer based on the real-world image (S310).
- The computing device 100 may identify at least one object in the first layer (S320).
- The computing device 100 may extract the color, the resolution, and the vector values of contours from the first layer.
- The computing device 100 may identify at least one object in the first layer based on the color, resolution, contour vector values, and the like of the first layer.
- The detailed object identification process may be as follows.
- The computing device 100 may segment the image based on the resolution of the first layer.
- The computing device 100 may classify the segmented images by region. When the number of segmented images exceeds a preset number, the computing device 100 may merge neighboring regions by adjusting the resolution. For example, the computing device 100 may lower the resolution of the first layer so that the first layer is divided into a smaller number of segmented images.
- The computing device 100 may extract, from the segmented images, an object that can be recognized independently.
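- A sketch of this segment-then-merge loop: segment the layer, and if too many regions result, halve the resolution and retry so that neighboring regions merge. The Otsu thresholding and connected-components choices are stand-ins assumed for illustration.

```python
import cv2

def segment_first_layer(layer_bgr, max_regions=32):
    work = layer_bgr
    while True:
        gray = cv2.cvtColor(work, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n_labels, labels = cv2.connectedComponents(binary)
        # Stop when the region count is acceptable or the image gets tiny.
        if n_labels <= max_regions or min(work.shape[:2]) < 64:
            return n_labels, labels
        work = cv2.pyrDown(work)  # halve resolution so regions merge
```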
- The computing device 100 may determine the first marker image based on the image corresponding to the identified object among the pre-stored images (S330).
- The computing device 100 may match the first marker image with the position of the object included in the first layer (S340).
- The computing device 100 may generate a second layer including augmented reality content based on the first marker image (S350).
- The computing device 100 may generate the augmented reality image by synthesizing the first layer and the second layer. Because the augmented reality content is generated based on the first marker image formed from the first layer, rather than on the first layer itself, an augmented reality image can be generated whose content does not break up even when the first layer shakes due to hand shake. If the position or viewing angle of the first layer changes, the computing device 100 may compensate the vector values of the stored first marker image for the change in position or viewing angle by using the contour vector values of an object in the first layer, the position vector value of the object in the real world, the normal vector value of the object, and the like.
- For example, the computing device 100 may correct the vector values of the first marker image corresponding to a picture frame by using the vector values of that marker image, the position vector value of the real-world frame, and the normal vector value of the frame.
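- The patent describes this compensation in terms of the object's position and normal vectors; a standard stand-in, sketched below, is to estimate a homography from point correspondences tracked between frames and re-project the stored marker vertices through it.

```python
import cv2
import numpy as np

def compensate_marker(marker_vertices, prev_pts, curr_pts):
    # Estimate the plane-to-plane mapping induced by the camera's change
    # of position/viewing angle, then re-project the marker's vector values.
    H, _ = cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, 3.0)
    pts = marker_vertices.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```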
- The computing device 100 may visually output the augmented reality image through the display 113 (S360).
- The computing device 100 may generate a first layer based on the real-world image (S310).
- The computing device 100 may provide the user with at least one pre-stored object or image (S311).
- The computing device 100 may provide the user with at least one pre-stored object (or image) at the user's request.
- Alternatively, the computing device 100 may automatically provide the user with at least one pre-stored object (or image) even without a user request.
- A user who has reviewed the at least one pre-stored object (or image) through the computing device 100 may input a command to the computing device 100 requesting that at least one object obtained from the real-world image be replaced with another pre-stored object.
- Alternatively, the user may directly input (or draw) an object on the computing device 100.
- The computing device 100 may obtain from the user a command requesting that at least one object obtained from the real-world image be replaced with another pre-stored object (S312). Alternatively, the computing device 100 may obtain a command requesting that at least one object obtained from the real-world image be replaced with an object directly input (or drawn) on the computing device 100.
- The computing device 100 may determine a second marker image among the pre-stored images based on the command (S313).
- The computing device 100 may identify at least one object in the first layer (S320).
- The computing device 100 may extract the color, the resolution, and the vector values of contours from the first layer.
- The computing device 100 may identify at least one object in the first layer based on the color, resolution, contour vector values, and the like of the first layer.
- The detailed object identification process may be as follows.
- The computing device 100 may segment the image based on the resolution of the first layer.
- The computing device 100 may classify the segmented images by region. When the number of segmented images exceeds a preset number, the computing device 100 may merge neighboring regions by adjusting the resolution. For example, the computing device 100 may lower the resolution of the first layer so that the first layer is divided into a smaller number of segmented images.
- The computing device 100 may extract, from the segmented images, an object that can be recognized independently.
- The computing device 100 may determine the first marker image based on the image corresponding to the identified object among the pre-stored images (S330).
- The computing device 100 may match the first marker image with the position of the object included in the first layer (S340).
- The computing device 100 may generate a second layer including augmented reality content based on at least one of the first marker image and the second marker image (S351).
- The computing device 100 may generate the augmented reality image by synthesizing the first layer and the second layer. Because the augmented reality content is generated based on at least one of the first marker image formed from the first layer and the second marker image formed from a user command, rather than on the first layer itself, an augmented reality image can be generated whose content does not break up even when the first layer shakes due to hand shake. If the position or viewing angle of the first layer changes, the computing device 100 may compensate the vector values of the stored first marker image or second marker image for the change in position or viewing angle by using the contour vector values of an object in the first layer, the position vector value of the object in the real world, the normal vector value of the object, and the like.
- For example, the computing device 100 may correct the vector values of the first marker image corresponding to a picture frame by using the vector values of that marker image, the position vector value of the real-world frame, and the normal vector value of the frame.
- The computing device 100 may visually output the augmented reality image through the display 113 (S360).
- The augmented reality image implementation method according to an embodiment of the present invention described above may be implemented as a program (or application) and stored in a medium so as to be executed in combination with a computer, which is hardware.
- For the computer to read the program and execute the methods implemented as the program, the program may include code written in a computer language such as C, C++, Java, or machine language that the computer's processor (CPU) can read through the computer's device interface. Such code may include functional code defining the functions necessary to execute the methods, and may include control code related to the execution procedures necessary for the computer's processor to execute those functions in a predetermined sequence.
- The code may further include memory reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the processor to execute the functions should be referenced.
- When the computer's processor needs to communicate with a remote computer or server to execute the functions, the code may further include communication-related code specifying, using the computer's communication module, how to communicate with the remote computer or server and what information or media should be transmitted and received during communication.
- The storage medium is not a medium that stores data for a short time, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device.
- Examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored in various recording media on various servers accessible by the computer, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Architecture (AREA)
- Processing Or Creating Images (AREA)
Abstract
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020501108A JP2020514937A (ja) | 2017-03-20 | 2018-03-19 | Method for implementing an augmented reality image using vectors
US16/551,039 US20190378339A1 (en) | 2017-03-20 | 2019-08-26 | Method for implementing augmented reality image using vector |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20170034397 | 2017-03-20 | ||
KR10-2017-0034397 | 2017-03-20 | ||
KR10-2017-0102891 | 2017-08-14 | ||
KR20170102891 | 2017-08-14 | ||
KR10-2017-0115841 | 2017-09-11 | ||
KR1020170115841A KR102000960B1 (ko) | 2017-03-20 | 2017-09-11 | Method for implementing an augmented reality image using vectors
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/551,039 Continuation US20190378339A1 (en) | 2017-03-20 | 2019-08-26 | Method for implementing augmented reality image using vector |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2018174499A2 (fr) | 2018-09-27 |
WO2018174499A3 WO2018174499A3 (fr) | 2018-11-08 |
Family
ID=63585569
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/003188 WO2018174499A2 (fr) | 2018-03-19 | Method for implementing an augmented reality image using a virtual marker and a vector |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018174499A2 (fr) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060021001A (ko) * | 2004-09-02 | 2006-03-07 | (주)제니텀 엔터테인먼트 컴퓨팅 | Marker-less augmented reality and mixed reality application system using object recognition, and method therefor |
KR101227237B1 (ko) * | 2010-03-17 | 2013-01-28 | 에스케이플래닛 주식회사 | Augmented reality system and method for implementing interaction between virtual objects using a plurality of markers |
KR101216222B1 (ko) * | 2011-01-14 | 2012-12-28 | 에스케이플래닛 주식회사 | System and method for providing an augmented reality service |
KR102167273B1 (ko) * | 2013-06-25 | 2020-10-20 | 한양대학교 산학협력단 | Method, apparatus, and computer program product for augmenting a real object |
KR102161510B1 (ko) * | 2013-09-02 | 2020-10-05 | 엘지전자 주식회사 | Portable device and control method therefor |
-
2018
- 2018-03-19 WO PCT/KR2018/003188 patent/WO2018174499A2/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2018174499A3 (fr) | 2018-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019216491A1 (fr) | Procédé d'analyse d'objets dans des images enregistrées par une caméra d'un dispositif monté sur la tête | |
CN109584295B (zh) | 对图像内目标物体进行自动标注的方法、装置及系统 | |
JP4878083B2 (ja) | 画像合成装置及び方法、プログラム | |
WO2017026839A1 (fr) | Procédé et dispositif permettant d'obtenir un modèle 3d de visage au moyen d'une caméra portative | |
WO2019050360A1 (fr) | Dispositif électronique et procédé de segmentation automatique d'être humain dans une image | |
WO2012091326A2 (fr) | Système de vision de rue en temps réel tridimensionnel utilisant des informations d'identification distinctes | |
KR102000960B1 (ko) | 벡터를 이용한 증강 현실 영상 구현 방법 | |
WO2021045552A1 (fr) | Dispositif électronique de synthèse d'image et son procédé de fonctionnement | |
US20220358662A1 (en) | Image generation method and device | |
WO2021101097A1 (fr) | Architecture de réseau neuronal de fusion multi-tâches | |
WO2013025011A1 (fr) | Procédé et système de suivi d'un corps permettant de reconnaître des gestes dans un espace | |
WO2019156543A2 (fr) | Procédé de détermination d'une image représentative d'une vidéo, et dispositif électronique pour la mise en œuvre du procédé | |
WO2021215800A1 (fr) | Système de formation de compétences chirurgicales et système de guidage chirurgical fondé sur l'apprentissage machine et utilisant l'imagerie tridimensionnelle | |
CN112749613A (zh) | 视频数据处理方法、装置、计算机设备及存储介质 | |
JP2019201397A (ja) | 撮影装置及びプログラム | |
WO2018174499A2 (fr) | Procédé de réalisation d'image de réalité augmentée au moyen d'un marqueur virtuel et d'un vecteur | |
WO2019004754A1 (fr) | Publicités à réalité augmentée sur des objets | |
WO2024111728A1 (fr) | Procédé et système d'interaction d'émotion d'utilisateur pour une réalité étendue basée sur des éléments non verbaux | |
WO2017026834A1 (fr) | Procédé de génération et programme de génération de vidéo réactive | |
WO2020045909A1 (fr) | Appareil et procédé pour logiciel intégré d'interface utilisateur pour sélection multiple et fonctionnement d'informations segmentées non consécutives | |
EP4333420A1 (fr) | Dispositif de traitement d'informations, procédé de traitement d'informations et programme | |
WO2023149603A1 (fr) | Système de surveillance par images thermiques utilisant une pluralité de caméras | |
CN207301480U (zh) | 一种带双定位功能的两用望远镜系统 | |
KR102339825B1 (ko) | 상황인식 장치 및 이의 영상 스티칭 방법 | |
CN111242107B (zh) | 用于设置空间中的虚拟对象的方法和电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18771846 Country of ref document: EP Kind code of ref document: A2 |
|
ENP | Entry into the national phase |
Ref document number: 2020501108 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18771846 Country of ref document: EP Kind code of ref document: A2 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.12.2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18771846 Country of ref document: EP Kind code of ref document: A2 |