WO2022040921A1 - Positioning terminal, positioning apparatus and positioning system for distributed augmented reality - Google Patents

Positioning terminal, positioning apparatus and positioning system for distributed augmented reality

Info

Publication number
WO2022040921A1
WO2022040921A1, PCT/CN2020/111099, CN2020111099W
Authority
WO
WIPO (PCT)
Prior art keywords
module
data
positioning
head-mounted display
Prior art date
Application number
PCT/CN2020/111099
Other languages
English (en)
Chinese (zh)
Inventor
兰卫旗
Original Assignee
南京翱翔信息物理融合创新研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京翱翔信息物理融合创新研究院有限公司
Priority to PCT/CN2020/111099
Publication of WO2022040921A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the invention belongs to the technical field of augmented reality positioning, and in particular relates to a distributed augmented reality positioning terminal, a positioning server and a positioning system.
  • the mainstream portable augmented reality positioning solutions on the market include head-mounted all-in-one devices represented by HoloLens and split devices represented by Magic Leap.
  • HoloLens embeds the data acquisition, data processing and integrated display modules in the head-mounted device. While ensuring positioning accuracy and keeping the device weight low, it also suffers from serious heat generation and short standby time.
  • Magic Leap alleviates these problems by placing the data processing and fusion rendering modules on a split processor unit, achieving good positioning performance.
  • both devices have clear advantages in audio-visual entertainment applications, but in industrial applications they are difficult to deploy in practice because of limitations in processor performance, heat dissipation and operating system compatibility.
  • the present invention provides a distributed augmented reality positioning terminal.
  • the specific technical solution is as follows: the terminal includes a head-mounted display and a portable processor, the head-mounted display being connected to the portable processor through a wireless signal or electrically, wherein:
  • the head-mounted display is used for acquiring scene images through a camera, collecting IMU data, and receiving and displaying fused images.
  • the portable processor receives the data signal from the head-mounted display and outputs the position and attitude of the camera computed from the acquired scene images and IMU data.
  • the head-mounted display includes an image acquisition module and a display module; the display module is fixedly installed inside the head-mounted display, and the image acquisition module is configured as at least one group of cameras fixedly installed on the outer side of the display module, wherein:
  • the image acquisition module is used for acquiring scene images and collecting IMU data.
  • the display module is used for receiving and displaying the fusion image.
  • the portable processor includes a pre-integration module, a feature extraction module, an optical flow tracking module, a matching module and an optimization module; the pre-integration module processes the IMU data collected by the head-mounted display and outputs the camera motion parameters R1 and T1 to the optimization module.
  • the feature extraction module performs feature point extraction on the scene images obtained by the head-mounted display camera and transmits the processed data to the optical flow tracking module.
  • the optical flow tracking module performs optical flow tracking on the data and then transmits it to the matching module.
  • the matching module is used to match two consecutive frames according to the optical flow tracking result, obtain R2 and T2 between the two frames of images, and transmit them to the optimization module.
  • the optimization module is used to store R1, T1, R2 and T2 and obtain the position and attitude of the camera; the visual steps of this pipeline are illustrated by the sketch below.
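As an illustration of how R2 and T2 between two consecutive frames could be obtained from the feature extraction, optical flow tracking and matching steps described above, the following Python sketch uses standard OpenCV calls. It is an assumed, simplified example with illustrative function names and parameter values, not the implementation disclosed in this application.

```python
# Illustrative sketch (not the patent's implementation): recover the relative
# rotation R2 and translation T2 between two consecutive frames using feature
# extraction, Lucas-Kanade optical flow tracking and epipolar matching.
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate (R2, T2) from two consecutive grayscale frames.

    K is the 3x3 camera intrinsic matrix; T2 is recovered only up to scale,
    which is one reason the IMU-derived R1/T1 are also fed to the optimizer.
    """
    # Feature extraction on the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=10)
    # Optical flow tracking of those features into the current frame.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    # Matching step: the tracked correspondences constrain the essential matrix.
    E, inliers = cv2.findEssentialMat(good_prev, good_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R2, T2, _ = cv2.recoverPose(E, good_prev, good_curr, K, mask=inliers)
    return R2, T2
```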
  • a denoising module is also included; it is arranged upstream of the pre-integration module and is used for denoising the IMU data collected by the head-mounted display.
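A minimal sketch of what the denoising and pre-integration stages might look like, assuming the IMU delivers gyroscope and accelerometer samples at a fixed rate. The moving-average filter, the naive Euler integration, and the omission of gravity compensation and bias estimation are simplifying assumptions for illustration only, not the scheme used by the invention.

```python
# Illustrative sketch (assumed simple scheme, not the patent's algorithm):
# moving-average denoising of raw IMU samples, followed by naive integration
# of gyroscope and accelerometer data into a relative rotation R1 and
# translation T1 between two image frames.
import numpy as np
from scipy.spatial.transform import Rotation

def moving_average(samples, window=5):
    """Denoise an (N, 3) array of IMU samples with a moving-average filter."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(samples[:, i], kernel, mode="same")
                            for i in range(3)])

def preintegrate(gyro, accel, dt):
    """Integrate denoised gyro (rad/s) and accel (m/s^2) samples, dt seconds
    apart, returning the relative rotation R1 (3x3) and translation T1 (3,).

    NOTE: gravity compensation and bias estimation are omitted for brevity.
    """
    R1 = np.eye(3)
    v = np.zeros(3)
    T1 = np.zeros(3)
    for w, a in zip(gyro, accel):
        R1 = R1 @ Rotation.from_rotvec(w * dt).as_matrix()  # rotation update
        v += (R1 @ a) * dt                                   # velocity update
        T1 += v * dt                                         # position update
    return R1, T1
```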
  • preprocessing modules are also included; each group of preprocessing modules corresponds to one group of scene images, and the multiple groups of preprocessing modules preprocess the scene images and transmit the data to the feature extraction module.
  • a distributed augmented reality positioning device including a positioning server, a communication module and a positioning terminal, wherein the positioning server and the positioning terminal perform data information exchange through the communication module.
  • the positioning terminal is set as the above-mentioned distributed augmented reality positioning terminal.
  • the positioning server includes a storage module and a fusion module, wherein.
  • a storage module for storing prefabricated models and their scene poses.
  • the fusion module is used to generate a fusion image according to the position and attitude of the camera and the prefabricated model and its scene pose from the storage module.
  • a distributed augmented reality positioning system including a positioning device, a built-in database unit, and a data processing unit; the positioning device is the above-mentioned distributed augmented reality positioning device.
  • the positioning server of the positioning device receives the data from the portable processor and outputs it to the data processing unit; the built-in database unit is used to output the built-in database information to the data processing unit; the data processing unit compares the data received from the positioning server with the information in the built-in database and, after processing, transmits the result to the display module; the display module is used to display the data result.
  • the processing method of the data processing unit is as follows: the built-in database information is compared with the data received from the positioning server to fetch the 3D model information to be loaded, which is then fused and rendered with the video information of the scene to obtain a fused image.
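To make the fusion step concrete, the sketch below projects the anchor points of a retrieved prefabricated 3D model into the camera image using the pose reported by the positioning terminal. The pinhole projection itself is standard, but the data layout and function names are assumptions for illustration, not taken from this application.

```python
# Illustrative sketch: place a prefabricated model into the scene image by
# projecting its 3D anchor points with the camera pose (R, t) and intrinsics K.
import numpy as np

def project_model(model_points, R, t, K):
    """Project (N, 3) model points given camera rotation R (3x3),
    translation t (3,), and intrinsic matrix K (3x3); returns (N, 2) pixels."""
    cam_points = (R @ model_points.T).T + t          # world -> camera frame
    pixels_h = (K @ cam_points.T).T                  # camera -> homogeneous pixels
    return pixels_h[:, :2] / pixels_h[:, 2:3]        # perspective division

# Usage idea: select the model whose stored scene pose matches the tracked
# scene, project it, and render the overlay on the video frame before display.
```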
  • the positioning terminal, the positioning device and the positioning system provided by the present invention have the following advantages.
  • the present invention adopts a scene perception method based on the fusion of images and an inertial measurement unit. Compared with base station positioning, it can measure not only the pose relative to the scene but also depth distance information relative to objects, and the amount of scene perception information is larger.
  • the pose calculation module of the present invention is located on the portable processor, the scene loading and rendering module on the positioning server, and the display module on the head-mounted display; the modules are independent of one another, avoiding the risk of bringing the whole system down.
  • the present invention is not restricted to a fixed motion scene; the user can move at will, which gives higher flexibility.
  • FIG. 1 is a schematic diagram of base station positioning in the prior art.
  • FIG. 2 is a block diagram of the composition of the positioning system in the present invention.
  • FIG. 3 is a block diagram of the composition of the head-mounted display of the present invention.
  • FIG. 4 is a block diagram of the composition of the portable processor of the present invention.
  • FIG. 5 is a block diagram of the composition of the positioning server in the positioning system of the present invention.
  • a distributed augmented reality positioning system includes a positioning terminal 100 and a positioning server 200 capable of data interaction.
  • the positioning terminal 100 includes a head mounted display 1 and a portable processor 2 .
  • the head-mounted display 1 is used for acquiring scene images, providing IMU data and receiving and displaying fused images; the portable processor 2 is used for calculating the position and attitude of the camera according to the scene images and IMU data.
  • the head mounted display 1 includes two image acquisition modules 11 and a display module 12 .
  • the image acquisition module 11 is used to acquire scene images and provide IMU data; the display module 12 is used to receive and display the fused images.
  • the image acquisition module 11 can optionally be an inertial navigation camera, or optionally two cameras arranged side by side, left and right, on the outside of the head-mounted display 1.
  • the display module 12 is a display screen mounted inside the head-mounted display 1.
  • the portable processor 2 includes a denoising module 21 , an integration module 22 , a preprocessing module 23 , a feature extraction module 24 , an optical flow tracking module 25 , a matching module 26 and an optimization module 27 .
  • the denoising module 21 is used to denoise the IMU data and use the processing result as the input of the integration module 22 .
  • the integration module 22 is used to perform pre-integration processing on the IMU data to obtain the R1 and T1 of the camera during the movement process.
  • the preprocessing module 23 is used to preprocess the scene image and use the processing result as the input of the feature extraction module 24 .
  • the feature extraction module 24 is used to extract feature points in the scene image.
  • the optical flow tracking module 25 is used to perform optical flow tracking on the scene image according to the feature points.
  • the matching module 26 is configured to match two consecutive frames according to the optical flow tracking result to obtain R2 and T2 between the two frames of images.
  • the optimization module 27 is used to calculate the position and attitude of the camera according to R1, T1, R2 and T2.
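The application does not detail how the optimization module 27 combines R1, T1, R2 and T2, so the sketch below simply blends the IMU-derived and vision-derived estimates with a fixed weight as an illustration; a production visual-inertial system (such as VINS-Fusion, mentioned later) would instead solve a nonlinear sliding-window optimization.

```python
# Illustrative sketch only: blend the IMU-derived estimate (R1, T1) and the
# vision-derived estimate (R2, T2) with a fixed weight. This is NOT the
# patent's optimizer, just a minimal stand-in to show how the two pose
# hypotheses can be combined into one camera position and attitude.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_pose(R1, T1, R2, T2, w_vision=0.7):
    """Blend two relative-pose estimates into a single rotation and translation."""
    rots = Rotation.from_matrix(np.stack([R1, R2]))
    slerp = Slerp([0.0, 1.0], rots)          # interpolate between the two rotations
    R = slerp(w_vision).as_matrix()          # weighted rotation blend
    T = (1.0 - w_vision) * np.asarray(T1) + w_vision * np.asarray(T2)
    return R, T
```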
  • the positioning server 200 includes a storage module 3 and a fusion module 4 .
  • the storage module 3 is used to store the prefabricated model and its scene pose.
  • the fusion module 4 is used to perform scene fusion and rendering according to the position and attitude of the camera and the prefabricated model and its scene pose from the storage module 3, and generate a fusion image.
  • the binocular inertial navigation camera on the head-mounted display 1 collects video information of the scene, and then transmits the video information and the IMU data of the camera to the portable processor 2 .
  • the scene images and the IMU data are processed in real time by the visual-inertial navigation algorithm, the position and attitude of the camera are calculated, and the result is sent to the positioning server 200 at the same time.
  • this process is based on the front-end processing part of the open-source VINS-Fusion algorithm: after the left and right cameras collect the scene images, preprocessing is performed first, then feature points are extracted from the images and optical flow tracking is performed.
  • after receiving the data sent by the portable processor 2, the positioning server 200 retrieves the 3D model information that needs to be loaded by comparing it against the built-in database, obtains the fused image by fusing and rendering it with the video information of the scene, and then sends the fused image to the display module 12.
  • the display module 12 displays the fused image to realize the augmented reality effect.
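Since the pose calculation, fusion rendering and display run on separate devices (the portable processor, the positioning server and the head-mounted display), some form of data exchange is needed between them. The length-prefixed JSON-over-TCP framing below is purely an assumed illustration of that distributed split and is not the protocol used by the invention.

```python
# Illustrative sketch of the distributed split (assumed JSON-over-TCP framing,
# not the patent's protocol): the portable processor sends the computed camera
# pose to the positioning server, which answers with a fused image to display.
import json
import socket
import struct

def send_msg(sock, payload: bytes) -> None:
    sock.sendall(struct.pack(">I", len(payload)) + payload)  # length-prefixed frame

def recv_msg(sock) -> bytes:
    size = struct.unpack(">I", sock.recv(4))[0]
    buf = b""
    while len(buf) < size:
        buf += sock.recv(size - len(buf))
    return buf

def request_fused_image(server_addr, R, T):
    """Send the camera pose (R as a 3x3 nested list, T as a length-3 list)
    and receive the fused image bytes rendered by the positioning server."""
    with socket.create_connection(server_addr) as sock:
        send_msg(sock, json.dumps({"R": R, "T": T}).encode())
        return recv_msg(sock)  # e.g. JPEG bytes for the head-mounted display
```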

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a positioning terminal for distributed augmented reality, comprising a head-mounted display and a portable processor. The head-mounted display is connected to the portable processor by a wireless signal, or electrically; the head-mounted display is used to obtain a scene image by means of a camera, to collect inertial measurement unit (IMU) data, and to receive and display a fused image; the portable processor receives a data signal from the head-mounted display and outputs the position and attitude of the camera by means of the obtained scene image and the IMU data. A positioning apparatus and a positioning system are also provided. Compared with conventional base station positioning, the attitude relative to the scene can be measured, depth distance information relative to an object can also be measured, and the amount of scene perception information is larger; moreover, the motion scene is not restricted, free movement is possible, and high flexibility is obtained; in addition, a display module is located on the head-mounted display, and the modules are independent of one another, which avoids the risk of system failure.
PCT/CN2020/111099 2020-08-25 2020-08-25 Positioning terminal, positioning apparatus and positioning system for distributed augmented reality WO2022040921A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/111099 WO2022040921A1 (fr) 2020-08-25 2020-08-25 Positioning terminal, positioning apparatus and positioning system for distributed augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/111099 WO2022040921A1 (fr) 2020-08-25 2020-08-25 Positioning terminal, positioning apparatus and positioning system for distributed augmented reality

Publications (1)

Publication Number Publication Date
WO2022040921A1 (fr) 2022-03-03

Family

ID=80352407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111099 WO2022040921A1 (fr) 2020-08-25 2020-08-25 Positioning terminal, positioning apparatus and positioning system for distributed augmented reality

Country Status (1)

Country Link
WO (1) WO2022040921A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047104A (zh) * 2017-12-26 2019-07-23 精工爱普生株式会社 Object detection and tracking method, head-mounted display device and storage medium
US20190361518A1 (en) * 2018-05-22 2019-11-28 Oculus Vr, Llc Apparatus, system, and method for accelerating positional tracking of head-mounted displays
CN110125928A (zh) * 2019-03-27 2019-08-16 浙江工业大学 Binocular inertial navigation SLAM system based on feature matching between preceding and following frames
CN110658916A (zh) * 2019-09-18 2020-01-07 中国人民解放军海军航空大学 Target tracking method and system
CN111307146A (zh) * 2020-03-02 2020-06-19 北京航空航天大学青岛研究院 Virtual reality head-mounted display device positioning system based on a binocular camera and an IMU

Similar Documents

Publication Publication Date Title
CN108765498B (zh) Monocular vision tracking method, device and storage medium
JP4689639B2 (ja) Image processing system
CN104699247B (zh) Virtual reality interaction system and method based on machine vision
CN106843507B (zh) Virtual reality multi-user interaction method and system
US20160295194A1 (en) Stereoscopic vision system generating stereoscopic images with a monoscopic endoscope and an external adapter lens and method using the same to generate stereoscopic images
CN110327048B (zh) Human upper limb posture reconstruction system based on wearable inertial sensors
CN110617814A (zh) Long-distance ranging system and method fusing monocular vision and inertial sensors
JP4743818B2 (ja) Image processing apparatus, image processing method, and computer program
CN110617813B (zh) Scale estimation system and method fusing monocular visual information and IMU information
CN103948361B (zh) Marker-free endoscope positioning and tracking method and system
CN111307146B (zh) Virtual reality head-mounted display device positioning system based on a binocular camera and an IMU
CN112270702B (zh) Volume measurement method and apparatus, computer-readable medium and electronic device
JP2017534940A (ja) System and method for reproducing objects in a 3D scene
CN109767470B (zh) Tracking system initialization method and terminal device
US10992879B2 (en) Imaging system with multiple wide-angle optical elements arranged on a straight line and movable along the straight line
CN112819860B (zh) Visual-inertial system initialization method and apparatus, medium and electronic device
WO2022047828A1 (fr) Industrial augmented reality combined positioning system
CN107015655A (zh) Museum virtual scene AR experience glasses device and implementation method therefor
CN112070820A (zh) Distributed augmented reality positioning terminal, positioning server and positioning system
CN105262949A (zh) Multifunctional real-time panoramic video stitching method
WO2024094227A1 (fr) Gesture pose estimation method based on Kalman filtering and deep learning
CN111035458A (zh) Intelligent surgical comprehensive-view assistance system and image processing method
CN103412401B (zh) Three-dimensional image reconstruction method for endoscope and pipeline inner wall
CN109445598A (zh) Vision-based augmented reality system device
CN112288876A (zh) Long-distance AR recognition server and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950596

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20950596

Country of ref document: EP

Kind code of ref document: A1