WO2023068795A1 - Device and method for creating a metaverse using image analysis - Google Patents

Device and method for creating a metaverse using image analysis

Info

Publication number
WO2023068795A1
Authority
WO
WIPO (PCT)
Prior art keywords
metaverse
information
target data
extracted
image analysis
Prior art date
Application number
PCT/KR2022/015931
Other languages
English (en)
Korean (ko)
Inventor
오우진
Original Assignee
주식회사 제이어스
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 제이어스
Publication of WO2023068795A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • The present invention relates to an apparatus and method for generating a metaverse and, more particularly, to an apparatus and method for generating a metaverse using image analysis.
  • Online map services have been continuously developed through image-processing technology such as 2D aerial photographs and street views (landmark and satellite layers).
  • Non-face-to-face online services using the metaverse, which has recently been in the limelight, are attracting greater attention as contact-free lifestyles persist owing to the resurgence of COVID-19 and the desire to extend reality into the virtual world.
  • The term 'metaverse' combines 'meta', meaning transcendence, and 'universe', meaning world.
  • A commonly imagined metaverse is a game using avatars or a non-face-to-face meeting held in a virtual space.
  • An object of the present invention is to provide a metaverse generation device and method using image analysis that collect target data related to a space, analyze and extract the meta information contained in the target data, and generate a metaverse embodying a virtual space based on that meta information, while using an inference engine to learn the connectivity of the metaverse space and the accuracy of the information in the extracted meta information, so that the metaverse space can be corrected and supplemented for accurate and sophisticated creation.
  • An apparatus for generating a metaverse using image analysis includes: a collection unit that collects target data, including photo data, related to a space for which a metaverse implementing a virtual space is to be generated; a first inference engine that analyzes each item of collected target data and extracts its meta information, the meta information including location information together with object information and shape information obtained by detecting at least one object in the image of the target data using a neural-network-learning-based object recognition algorithm; a library that classifies and stores the meta information (location, object, and shape information) extracted by the first inference engine in association with the corresponding target data; and a metaverse generating unit that collects the meta information classified and stored in the library for each place and generates a metaverse, which is a virtual space, for that place.
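The extraction step of the first inference engine can be illustrated with a minimal sketch. All names here (MetaInfo, extract_meta, stub_detector) are hypothetical; the stub detector merely stands in for the learned object-recognition model described in the patent:

```python
from dataclasses import dataclass, field

@dataclass
class MetaInfo:
    location: tuple              # (latitude, longitude) of the target data
    timestamp: str               # capture time, used later for time-based storage
    objects: list = field(default_factory=list)   # detected object labels
    shapes: list = field(default_factory=list)    # shape descriptor per object

def extract_meta(photo: dict, detector) -> MetaInfo:
    """Analyse one target-data item and extract its meta information.

    `detector` stands in for the neural-network object recogniser; here it is
    any callable returning (label, shape) pairs for the image pixels.
    """
    detections = detector(photo["pixels"])
    return MetaInfo(
        location=photo.get("gps", (0.0, 0.0)),
        timestamp=photo.get("taken_at", ""),
        objects=[label for label, _ in detections],
        shapes=[shape for _, shape in detections],
    )

# Stub standing in for the trained recognition model.
def stub_detector(pixels):
    return [("building", "cuboid"), ("tree", "cone")]

meta = extract_meta(
    {"pixels": [], "gps": (37.5, 127.0), "taken_at": "2021-10-22"},
    stub_detector,
)
```

The returned record bundles location, object, and shape information so the library can file it against the original target data.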
  • The meta information is characterized in that it further includes text information and time information.
  • The library classifies the meta information analyzed and extracted by the first inference engine by element and stores it separately in association with the target data; the records can be partitioned by place using the location information or by time using the time information.
  • A tag or ID is assigned for identification, in order to associate the meta information with its target data and to classify it by place or time.
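The tagging and place/time classification described above can be sketched as a small in-memory library. This is an illustrative sketch only; the class and field names (MetaLibrary, by_place, by_time) are assumptions, not the patent's implementation:

```python
import uuid
from collections import defaultdict

class MetaLibrary:
    """Stores extracted meta information in association with its target data,
    using a generated ID as the tag and indexing records by place and time."""

    def __init__(self):
        self.records = {}                    # id -> (target_data, meta)
        self.by_place = defaultdict(list)    # place -> [ids]
        self.by_time = defaultdict(list)     # time bucket -> [ids]

    def store(self, target_data, meta):
        rec_id = uuid.uuid4().hex            # tag/ID linking meta to target data
        self.records[rec_id] = (target_data, meta)
        self.by_place[meta["place"]].append(rec_id)
        self.by_time[meta["time"]].append(rec_id)
        return rec_id

lib = MetaLibrary()
rid = lib.store("photo_001.jpg", {"place": "Seoul", "time": "2021-10"})
```

Looking up `lib.by_place["Seoul"]` then yields every record ID for that place, which is what the metaverse generating unit consumes per place.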
  • The metaverse generating device using image analysis further includes a second inference engine that performs neural-network learning and 3D conversion calculations to generate a 3D metaverse space from the 2D target data and the meta information extracted from it.
  • The second inference engine performs learning and calculations that correct and supplement the generated metaverse space; by learning the connectivity of the metaverse space and the accuracy of the information in the extracted meta information, it corrects and supplements the space so that the metaverse becomes more accurate and sophisticated as more target data is collected and learning is repeated.
  • The device further includes a database in which the metaverse generated by the metaverse generating unit is classified and stored by place using the location information or by time using the time information.
  • A method for generating a metaverse using image analysis includes: the collection unit collecting target data for generating a metaverse, which is a 3D virtual space;
  • the first inference engine analyzing each item of collected target data and extracting its meta information, including location information together with object information and shape information obtained by detecting at least one object in the image of the target data using a neural-network-learning-based object recognition algorithm;
  • the library classifying and storing the meta information (location, object, and shape information) extracted by the first inference engine in association with the corresponding target data; and the metaverse generating unit collecting the meta information classified and stored in the library for each place and generating the metaverse, which is a virtual space, for that place.
  • The method may further include the second inference engine performing neural-network learning and 3D conversion calculations to generate a 3D metaverse space from the 2D target data and the meta information extracted from it.
  • It may further include the second inference engine performing learning and calculations that correct and supplement the generated metaverse space; by learning the connectivity of the metaverse space and the accuracy of the extracted meta information, the space is refined so that it becomes more accurate and sophisticated as more target data is collected and learning is repeated.
  • The meta information further includes text information and time information, and the method may further include the database dividing and storing the metaverse generated by the metaverse generating unit by place and by time using the meta information.
  • The apparatus and method for generating a metaverse using image analysis of the present invention collect target data related to a space, analyze and extract the meta information contained in that data, and generate a metaverse embodying a virtual space based on the meta information, while using an inference engine to learn the connectivity of the metaverse space and the accuracy of the information in the extracted meta information, so that the metaverse space can be corrected and supplemented for accurate and sophisticated creation.
  • Metaverse data classified and stored by place and time is compatible with various applications on request from various demand devices, such as VR devices, mobile devices running map apps (smartphones, tablets, laptops), and PCs. In particular, since a place can be reproduced by time, functions such as time travel can be provided in a realistic virtual space built from the metaverse data.
  • FIG. 1 is a diagram showing the configuration concept of a metaverse generating device using image analysis according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing the configuration of a metaverse generating device using image analysis according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating the library creation process of the metaverse generation method using image analysis according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a metaverse generation process of a metaverse generation method using image analysis according to an embodiment of the present invention.
  • FIG. 5 is a flowchart showing the entire process of a metaverse generation method using image analysis according to an embodiment of the present invention.
  • A sophisticated and accurate metaverse can be created when the metaverse is generated by a learning-based inference engine.
  • The data analysis process of the first inference engine 200 extracts the meta information corresponding to each element contained in the data; by extracting this meta information, the information needed to create the metaverse space becomes available, which helps generate sophisticated and accurate metaverse data and also supports generating metaverse data by time.
  • An element may be auxiliary information such as the photographing time and location (coordinate) information, and may also include object information, i.e., information about the objects appearing in the photo image.
  • Elements are classified, clearly organized, and then stored in the library 300 so that they can be used when creating the metaverse (S104).
  • 3D conversion is performed and metaverse data for a specific place or building is generated through learning by the second inference engine 400, using the meta information corresponding to the elements stored in the library 300 (S106).
  • The generated metaverse data can be made to form a single connected virtual space through neural-network learning by the second inference engine 400, based on compatibility, connectivity, and sophistication of the place, and the data can be divided by coordinates or sections and stored (S108).
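Dividing the generated data by coordinates or sections can be sketched as a simple grid index. The section size and all names here are illustrative assumptions, not part of the disclosure:

```python
import math
from collections import defaultdict

SECTION_SIZE = 0.01   # assumed grid granularity, in degrees per section

def section_key(lat, lon):
    """Map a coordinate to the grid section that stores its metaverse data."""
    return (math.floor(lat / SECTION_SIZE), math.floor(lon / SECTION_SIZE))

store = defaultdict(list)   # section key -> list of metaverse data chunks

def save(lat, lon, data):
    store[section_key(lat, lon)].append(data)

save(37.5012, 127.0395, "mesh_a")
save(37.5013, 127.0396, "mesh_b")   # falls into the same section as mesh_a
save(37.5200, 127.0395, "mesh_c")   # different latitude band, different section
```

Retrieving one section's list then gives exactly the data needed to render that coordinate range of the virtual space.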
  • A separate blockchain server (not shown) may be provided for integrity verification and managed through a blockchain network.
  • The authentication unit builds a blockchain network by connecting a number of blockchain servers, generates public and private keys through the established internal blockchain network, converts them into hash values, and distributes and stores them; user authentication can then be performed based on the distributed public keys and the user's personal information.
  • A plurality of customer terminals, which are demand devices capable of communicating with the metaverse generating device using image analysis of the present invention, receive individual user information along with a public key and can generate user certificates that include a hash value of the user information; the certificates can be stored using a Merkle tree structure.
  • Each user certificate (transaction) is stored with its hash value in a lowest-level leaf node, and hash values are stored in the intermediate nodes along the path from the leaf up to the Merkle root (the top parent node of the tree), so that the hashes can be shared.
  • To verify integrity, the user certificate copied to an individual customer terminal is compared with the user certificate in the database 600; only the hash values along the Merkle-tree path need to be compared.
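The Merkle-tree storage and path comparison described above can be sketched as follows. This is a minimal illustration using SHA-256; all function names are hypothetical and the patent does not specify a hash algorithm:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf certificates."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes along the path from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof, idx = [], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1                      # sibling position at this level
        proof.append((level[sib], idx % 2))  # (sibling hash, am-I-the-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    """Re-hash along the stored path and compare against the shared root."""
    node = h(leaf)
    for sib, is_right in proof:
        node = h(sib + node) if is_right else h(node + sib)
    return node == root

certs = [b"alice", b"bob", b"carol", b"dave"]   # toy user certificates
root = merkle_root(certs)
proof = merkle_proof(certs, 2)                  # membership proof for b"carol"
```

Only the hashes on the leaf-to-root path are exchanged, which is exactly the comparison the text describes between terminal and database copies.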
  • An additional neural-network learning algorithm may be used in the inference engine for image restoration when an image required for object recognition is unclear due to limitations in camera performance or camera errors.
  • Since a new image can be created or regenerated, such a network can be used to restore a damaged image.
  • Generative adversarial networks are built from multiple deep neural networks and require dozens of times more computation than existing deep-neural-network models to generate high-resolution images, but they can provide excellent performance for image restoration.
  • A generative adversarial network is an unsupervised-learning-based generative model that adversarially trains two networks, a generator and a discriminator.
  • Input data is fed to the generator to create fake images similar to real images.
  • A noise value may be used as the input data.
  • The noise values can follow any probability distribution; for example, they may be sampled from a zero-mean Gaussian.
  • The discriminator is trained to discriminate between real images and the fake images produced by the generator: it learns to output a high probability when a real image is input and a low probability when a fake image is input, gradually learning to tell the two apart.
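The adversarial objective just described can be written as a pair of binary-cross-entropy losses. The following minimal numeric sketch (function names illustrative, no network weights involved) shows only the loss structure, not a full training loop:

```python
import math
import random

def sample_noise(n):
    """Noise input z for the generator, drawn from a zero-mean Gaussian."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def bce(prob, label):
    """Binary cross-entropy for a single discriminator output."""
    eps = 1e-12
    return -(label * math.log(prob + eps) + (1 - label) * math.log(1 - prob + eps))

def discriminator_loss(d_real, d_fake):
    """D is rewarded for scoring real images high and fake images low."""
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    """G is rewarded when D mistakes its fakes for real images."""
    return bce(d_fake, 1.0)
```

A confident, correct discriminator (e.g. `d_real=0.99`, `d_fake=0.01`) yields a near-zero discriminator loss, while the generator's loss shrinks as its fakes fool the discriminator.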
  • GAN: generative adversarial network
  • DC-GAN: deep convolutional generative adversarial network
  • A deepfake may take a form resembling a particular person's face.
  • Deepfakes can be produced within permissible limits.
  • To construct a Cycle-GAN, two generators and two discriminators are needed, built from two DC-GANs; during training, two different sets of numerous images are provided as input, one set per DC-GAN. The two image domains are denoted X and Y.
  • The input image x is a member of domain X, and the generated output should resemble the images of domain Y; the generator of the GAN that maps out of domain X is denoted G, and the image generated by G is denoted G(x).
  • Domain Y is thus the target domain of this GAN. Likewise, for the other GAN the input image y belongs to domain Y, and its generator, denoted F, produces an image F(y) that is difficult to distinguish from the images of domain X, so X is the target domain of that GAN.
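In a Cycle-GAN the two generators G and F are typically tied together by a cycle-consistency constraint: mapping X to Y and back should reconstruct the input, F(G(x)) ≈ x. The toy numeric sketch below (scalar "images", illustrative names, not the patent's implementation) shows that loss:

```python
def cycle_consistency_loss(x_batch, G, F):
    """Mean L1 cycle loss: F(G(x)) should reconstruct each x in the batch.
    (A full Cycle-GAN adds the symmetric G(F(y)) term for domain Y.)"""
    total = 0.0
    for x in x_batch:
        x_rec = F(G(x))            # X -> Y -> back to X
        total += abs(x - x_rec)
    return total / len(x_batch)

# Toy domains: G shifts X into Y by +1, and F is its perfect inverse.
G = lambda x: x + 1.0
F = lambda y: y - 1.0
loss = cycle_consistency_loss([0.0, 2.0, 5.0], G, F)
```

With a perfect inverse pair the loss is zero; any generator pair that fails to round-trip the input is penalised in proportion to the reconstruction error.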
  • The discriminator of the GAN whose generator is G can be denoted DY, and the discriminator of the other GAN can be denoted DX.
  • A 'terminal' may be a wireless communication device with guaranteed portability and mobility, for example any type of handheld wireless communication device such as a smartphone, tablet PC, or laptop computer.
  • The 'terminal' may also be a wired communication device, such as a PC, capable of accessing other terminals or servers through a network.
  • A network refers to a connection structure capable of exchanging information between nodes such as terminals and servers, and includes a local area network (LAN), a wide area network (WAN), the Internet (WWW: World Wide Web), wired and wireless data communication networks, telephone networks, and wired and wireless television communication networks.
  • Examples of wireless data communication networks include 3G, 4G, 5G, 3rd Generation Partnership Project (3GPP), Long Term Evolution (LTE), World Interoperability for Microwave Access (WiMAX), Wi-Fi, Bluetooth communication, infrared communication, ultrasonic communication, visible light communication (VLC), LiFi, and the like, but are not limited thereto.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Architecture (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

A device for creating a metaverse using image analysis according to an embodiment of the present invention comprises: a collection unit for collecting target data, including photo data, associated with a space for which a metaverse implementing a virtual space is to be created; a first inference engine that analyzes each item of collected target data, extracts the meta information it contains, and includes, as meta information, location information together with object information and shape information obtained by extracting one or more objects from an image of the target data on the basis of a neural-network-learning-based object recognition algorithm; a library that classifies the meta information extracted by the first inference engine in association with the corresponding target data and stores it together with that data; and a metaverse creation unit that collects, by place, the meta information data extracted, classified, and stored in the library, and creates a metaverse, which is a virtual space, for the corresponding place.
PCT/KR2022/015931 2021-10-22 2022-10-19 Device and method for creating a metaverse using image analysis WO2023068795A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210141722A KR102402170B1 (ko) 2021-10-22 2021-10-22 이미지 분석을 이용한 메타버스 생성 장치 및 방법 (Device and method for generating a metaverse using image analysis)
KR10-2021-0141722 2021-10-22

Publications (1)

Publication Number Publication Date
WO2023068795A1 (fr) 2023-04-27

Family

ID=81809445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/015931 WO2023068795A1 (fr) 2021-10-22 2022-10-19 Device and method for creating a metaverse using image analysis

Country Status (2)

Country Link
KR (1) KR102402170B1 (fr)
WO (1) WO2023068795A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102402170B1 (ko) * 2021-10-22 2022-05-26 주식회사 제이어스 이미지 분석을 이용한 메타버스 생성 장치 및 방법 (Device and method for generating a metaverse using image analysis)
KR102619706B1 (ko) * 2022-07-18 2024-01-02 주식회사 페어립에듀 메타버스 가상 공간 구현 시스템 및 방법 (System and method for implementing a metaverse virtual space)
KR20240023297A (ko) * 2022-08-11 2024-02-21 붐앤드림베케이션 주식회사 메타 시공간 제품좌표 생성장치 기반의 메타 시공간 제품 매매장치 및 방법, 메타 시공간 제품 검색 및 접속장치 (Meta space-time product trading device and method based on a meta space-time product coordinate generating device, and meta space-time product search and access device)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130011037A (ko) * 2011-07-20 2013-01-30 국민대학교산학협력단 지식기반 증강현실 시스템 (Knowledge-based augmented reality system)
KR20130061538A (ko) * 2011-12-01 2013-06-11 한국전자통신연구원 가상현실 기반 콘텐츠 제공장치 및 그 방법 (Apparatus and method for providing virtual-reality-based content)
KR20180092778A (ko) * 2017-02-10 2018-08-20 한국전자통신연구원 실감정보 제공 장치, 영상분석 서버 및 실감정보 제공 방법 (Sensory-information providing apparatus, image analysis server, and sensory-information providing method)
WO2020229841A1 (fr) * 2019-05-15 2020-11-19 Roborace Limited Metaverse data fusion system
JP2021140767A (ja) * 2020-03-06 2021-09-16 エヌビディア コーポレーション 合成データ生成のためのシーン構造の教師なし学習 (Unsupervised learning of scene structure for synthetic data generation)
KR102402170B1 (ko) * 2021-10-22 2022-05-26 주식회사 제이어스 이미지 분석을 이용한 메타버스 생성 장치 및 방법 (Device and method for generating a metaverse using image analysis)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101759188B1 (ko) 2015-06-09 2017-07-31 오인환 2d 얼굴 이미지로부터 3d 모델을 자동 생성하는 방법 (Method for automatically generating a 3D model from a 2D face image)


Also Published As

Publication number Publication date
KR102402170B1 (ko) 2022-05-26

Similar Documents

Publication Publication Date Title
WO2023068795A1 Device and method for creating a metaverse using image analysis
WO2018174623A1 Apparatus and method for image analysis using a virtual three-dimensional deep neural network
EP3876140B1 Method and apparatus for recognizing the postures of multiple persons, electronic device, and storage medium
CN109543690A Method and apparatus for extracting information
CN111368943B Method and apparatus for recognizing objects in images, storage medium, and electronic device
CN111461089A Face detection method, and method and apparatus for training a face detection model
CN111768336A Face image processing method and apparatus, computer device, and storage medium
CN113704531A Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110543811B Deep-learning-based non-cooperative examinee management method and system
CN104484814B Video-map-based advertising method and system
CN110046297A Method, apparatus, and storage medium for recognizing violations in operation and maintenance
CN110348463A Method and apparatus for recognizing vehicles
CN112989767A Medical term labeling method, medical term mapping method, apparatus, and device
CN105631404A Method and apparatus for clustering photos
CN113762034A Video classification method and apparatus, storage medium, and electronic device
WO2024101466A1 Attribute-based missing-person tracking apparatus and method
CN110516094A Deduplication method and apparatus for category point-of-interest data, electronic device, and storage medium
CN115168609A Text matching method and apparatus, computer device, and storage medium
CN112308093B Image-recognition-based air quality sensing method, model training method, and system
CN115130456A Method, apparatus, device, and storage medium for training sentence parsing and matching models
CN114638973A Target image detection method and image detection model training method
CN115082873A Image recognition method and apparatus based on pathway fusion, and storage medium
CN114663929A Artificial-intelligence-based face recognition method, apparatus, device, and storage medium
CN114639132A Feature extraction model processing method, apparatus, and device for face recognition scenarios
WO2020175729A1 Apparatus and method for detecting facial feature points using a Gaussian feature point map and a regression scheme

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22884017

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE