WO2022047828A1 - Industrial augmented reality combined positioning system - Google Patents

Industrial augmented reality combined positioning system

Info

Publication number
WO2022047828A1
Authority
WO
WIPO (PCT)
Prior art keywords
helmet
pose
camera
positioning system
base station
Prior art date
Application number
PCT/CN2020/115089
Other languages
English (en)
French (fr)
Inventor
兰卫旗
Original Assignee
南京翱翔信息物理融合创新研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京翱翔信息物理融合创新研究院有限公司 filed Critical 南京翱翔信息物理融合创新研究院有限公司
Publication of WO2022047828A1 publication Critical patent/WO2022047828A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • The invention relates to the technical field of augmented reality positioning, and in particular to an industrial augmented reality combined positioning system.
  • Augmented reality (AR) is a technology that computes the position and orientation of camera images in real time and overlays corresponding virtual imagery on them.
  • The goal of this technology is to overlay the virtual world on the real world on a screen and allow the two to interact.
  • At present, the base stations of the HTC VIVE form the basis of the Lighthouse tracking system, which consists of several positioning base stations, a head-mounted display and interactive handles. The base stations sweep the space with alternating horizontal and vertical laser planes to detect the HTC Vive headset, while small sensors inside the headset detect the passing lasers. The system then combines all of this data to recognize the rotation of the device and its position in 3D space.
  • However, this positioning method can only obtain the pose of the user relative to the base stations, which makes it difficult to support AR scenarios that require interaction and recognition.
  • The industrial augmented reality combined positioning system of the present invention includes an intelligent perception module, a base station positioning module and a service decision module. The base station positioning module is built on the existing HTC VIVE positioning system and includes a base station, a helmet and a server.
  • The base station emits laser light, and the sensors on the helmet receive the laser signals.
  • The data is transmitted to the server for analysis, and the pose of the helmet relative to the base station is calculated.
  • The intelligent perception module includes a portable microprocessor and a camera integrated on the helmet. The camera transmits image information to the microprocessor, which uses built-in positioning and recognition algorithms to recognize objects and calculate their poses in the scene. The service decision module receives the pose from the intelligent perception module and the helmet information detected by the base station, and obtains the final pose through a coordinate transformation relationship.
  • According to the application scenario, the 3D model is fused into the video stream in real time based on this pose to achieve the augmented reality effect.
  • Step 1: Construct the perception space.
  • Step 2: Obtain the pose of the camera.
  • Step 3: Obtain the helmet pose.
  • Step 4: Calculate the pose in the HTC VIVE positioning system.
  • Step 5: Build the fused AR scene.
  • Step 6: Render and display.
  • Step 1: The camera images are processed by the front end of the VINS-Fusion algorithm.
  • OpenCV is used to extract SIFT feature points from the images, and the motion between two consecutive frames is calculated from the feature-matching relationship between them.
  • Step 2: The camera coordinate system takes the left eye as its reference. The pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
  • Step 3: In the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene's coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
  • Step 1 includes turning on the server, the helmet, the camera and the portable microprocessor and unifying their coordinate systems; connecting the HTC VIVE and the portable microprocessor; fixing the camera directly in front of the helmet; measuring the camera coordinate system and the helmet coordinate system at the same time; and calculating the rotation R and translation T between the two.
  • Step 2 includes the camera collecting images of the scene, preprocessing the images, extracting optical flow features, tracking the optical flow across consecutive frames, and matching the stably tracked optical flow points to obtain the Rotation and Position between two frames.
  • The position and attitude of the helmet are located through two base stations, each of which emits a laser.
  • When the sensors on the helmet detect the laser signal,
  • the position and attitude of the helmet relative to the base station, i.e. the Position and the Rotation, can be calculated and sent to the server.
  • Besides its scene perception function, the intelligent perception module carried by this system is highly expandable.
  • For example, an object recognition module or a pedestrian detection module can be added, enabling multi-user linkage and multi-user entertainment and interaction in real scenes.
  • FIG. 1 is a system architecture diagram of the present invention.
  • FIG. 2 is a diagram of the coordinate transformation relationships in the present invention.
  • FIG. 3 is a diagram of a camera matching model in the present invention.
  • FIG. 4 is a schematic diagram of a base station coordinate system, a geodetic coordinate system and a helmet coordinate system in the present invention.
  • FIG. 5 is a schematic diagram of the camera coordinate system and the helmet coordinate system in the present invention.
  • The industrial augmented reality combined positioning system of the present invention includes an intelligent perception module, a base station positioning module and a service decision module.
  • The base station positioning module is built on the existing HTC VIVE positioning system and includes a base station 1, a helmet 2 and a server 3.
  • The base station emits a laser,
  • the sensors on the helmet receive the laser signal,
  • and the data is transmitted to the server for analysis, where the pose of the helmet relative to the base station is calculated.
  • The intelligent perception module includes a portable microprocessor 4 and a camera 5 integrated on the helmet 2.
  • The camera transmits image information to the microprocessor, which uses built-in positioning and recognition algorithms to recognize objects and calculate their poses in the scene.
  • The service decision module receives the pose from the intelligent perception module and the helmet information detected by the base station, obtains the final pose through the coordinate transformation relationship and, according to the application scenario, fuses the 3D model into the video stream in real time based on this pose to achieve the augmented reality effect.
  • Step 1: The camera images are processed by the front end of the VINS-Fusion algorithm.
  • OpenCV is used to extract SIFT feature points from the images, and the motion between two consecutive frames is calculated from the feature-matching relationship between them.
  • Step 2: The camera coordinate system takes the left eye as its reference. The pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
  • Step 3: In the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene's coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
  • The above formula is called the epipolar constraint.
  • The epipolar constraint contains both the translation and the rotation.
  • The two matrices in the middle are the fundamental matrix F and the essential matrix E.
  • The calculation of R3 and T3 of the helmet relative to the object in FIG. 2 (where R denotes a rotation matrix and T a translation matrix) is as follows. As shown in FIG. 4, three related coordinate systems are defined: the geodetic coordinate system (world coordinate system), the Lighthouse optical coordinate system and the helmet coordinate system. After the HTC VIVE system has been calibrated, the HTC VIVE helmet can be used normally.
  • The calibration process mainly involves the following two formulas.
  • From formulas (8) and (9), the calibration parameters can be calculated, namely the laser scanning center of the Lighthouse coordinate system and the associated transformation parameters.
  • With the known rigid-body coordinate system, the coordinates and attitude in the geodetic coordinate system are obtained.
  • The camera coordinate system and the helmet coordinate system are defined; the motion of the camera can be estimated from the feature-point matching relationship between consecutive motion frames of the camera, while the pose of the helmet is calculated in real time by the HTC VIVE base station positioning system.
  • Using the conversion relationship between the optical coordinate system and the local coordinate system in the base station positioning principle, it is only necessary to calculate the pose relationship between the camera and the helmet in order to determine the pose relationship between the camera and the base station, thereby realizing the conversion from the image coordinate system to the camera coordinate system and then to the HTC VIVE coordinate system.
  • In this way the pixels of the video frame are transformed into the HTC VIVE coordinate system, realizing the transformation from the camera coordinate system to the base station coordinate system.
  • The workflow of the industrial augmented reality combined positioning system is as follows.
  • Step 1: Construct the perception space.
  • Step 2: Obtain the pose of the camera.
  • The camera collects images of the scene, which are then preprocessed (distortion correction, adaptive local histogram equalization); optical flow features are extracted and tracked across consecutive frames, and the stably tracked optical flow points are matched to obtain the Rotation (x, y, z, w) and Position (x, y, z) between two frames.
  • Step 3: Obtain the helmet pose.
  • The position and attitude of the helmet are located through two base stations, each of which emits a laser.
  • When the sensors on the helmet detect the laser signal, the position and attitude of the helmet relative to the base station, i.e. Position (x, y, z) and Rotation (x, y, z, w), can be calculated and sent to the server.
  • Step 4: Calculate the pose in the HTC VIVE positioning system.
  • Step 5: Build the fused AR scene.
  • Step 6: Render and display.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An industrial augmented reality combined positioning system includes an intelligent perception module, a base station positioning module and a service decision module. The base station positioning module is built on the existing HTC VIVE positioning system and includes a base station, a helmet and a server: the base station emits laser light, the sensors on the helmet receive the laser signals, the data is transmitted to the server for analysis, and the pose of the helmet relative to the base station is calculated. The intelligent perception module includes a portable microprocessor and a camera integrated on the helmet; the camera transmits image information to the microprocessor, which uses built-in positioning and recognition algorithms to recognize objects and calculate their poses in the scene. The service decision module receives the pose from the intelligent perception module and the helmet information detected by the base station, obtains the final pose through a coordinate transformation relationship and, according to the application scenario, fuses the 3D model into the video stream in real time based on this pose to achieve the augmented reality effect.

Description

Industrial augmented reality combined positioning system

Technical Field
The present invention relates to the technical field of augmented reality positioning, and in particular to an industrial augmented reality combined positioning system.
Background Art
Augmented reality (AR) is a technology that computes the position and orientation of camera images in real time and overlays corresponding virtual imagery on them; the goal of this technology is to overlay the virtual world on the real world on a screen and allow the two to interact.
At present, the base stations of the HTC VIVE form the basis of the Lighthouse tracking system, which consists of several positioning base stations, a head-mounted display and interactive handles. The base stations sweep the space with alternating horizontal and vertical laser planes to detect the HTC Vive headset, while small sensors inside the headset detect the passing lasers. The system then combines all of this data to recognize the rotation of the device and its position in 3D space. However, this positioning method can only obtain the pose of the user relative to the base stations, which makes it difficult to support AR scenarios that require interaction and recognition.
Technical Problem
It can thus be seen that providing an industrial augmented reality combined positioning system is a problem that urgently needs to be solved in this field.
Technical Solution
In view of the above problems, the industrial augmented reality combined positioning system of the present invention includes an intelligent perception module, a base station positioning module and a service decision module. The base station positioning module is built on the existing HTC VIVE positioning system and includes a base station, a helmet and a server: the base station emits laser light, the sensors on the helmet receive the laser signals, the data is transmitted to the server for analysis, and the pose of the helmet relative to the base station is calculated. The intelligent perception module includes a portable microprocessor and a camera integrated on the helmet; the camera transmits image information to the microprocessor, which uses built-in positioning and recognition algorithms to recognize objects and calculate their poses in the scene. The service decision module receives the pose from the intelligent perception module and the helmet information detected by the base station, obtains the final pose through a coordinate transformation relationship and, according to the application scenario, fuses the 3D model into the video stream in real time based on this pose to achieve the augmented reality effect.
Further, the workflow of the positioning system is as follows.
Step 1: Construct the perception space.
Step 2: Obtain the pose of the camera.
Step 3: Obtain the helmet pose.
Step 4: Calculate the pose in the HTC VIVE positioning system.
Step 5: Build the fused AR scene.
Step 6: Render and display.
Further, the process and method of the coordinate transformation are as follows.
Step 1: The camera images are processed by the front end of the VINS-Fusion algorithm. OpenCV is first used to extract SIFT feature points from the images, and the motion between two consecutive frames is calculated from the feature-matching relationship between them.
Step 2: The camera coordinate system takes the left eye as its reference. The pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
Step 3: In the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene's coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
Further, the above step 1 includes turning on the server, the helmet, the camera and the portable microprocessor and unifying their coordinate systems, connecting the HTC VIVE and the portable microprocessor, fixing the camera directly in front of the helmet, measuring the camera coordinate system and the helmet coordinate system at the same time, and calculating the rotation R and translation T between the two.
Further, the above step 2 includes the camera collecting images of the scene, preprocessing the images, extracting optical flow features, tracking the optical flow across consecutive frames, and matching the stably tracked optical flow points to obtain the Rotation and Position between two frames.
Further, in step 3 the pose of the helmet is located through two base stations, each of which emits a laser; once the sensors on the helmet detect the laser signal, the pose of the helmet relative to the base station, i.e. the Position and Rotation, can be calculated and sent to the server.
Beneficial Effects
1. The augmented reality effect is realized on the basis of the HTC VIVE positioning system, and the original capabilities of the HTC VIVE are extended, enabling a larger range of activity.
2. Besides its scene perception function, the intelligent perception module carried by this system is highly expandable; for example, an object recognition module or a pedestrian detection module can be added, enabling multi-user linkage and multi-user entertainment and interaction in real scenes.
Brief Description of the Drawings
FIG. 1 is a system architecture diagram of the present invention.
FIG. 2 is a diagram of the coordinate transformation relationships in the present invention.
FIG. 3 is a diagram of the camera matching model in the present invention.
FIG. 4 is a schematic diagram of the base station coordinate system, the geodetic coordinate system and the helmet coordinate system in the present invention.
FIG. 5 is a schematic diagram of the camera coordinate system and the helmet coordinate system in the present invention.
Best Mode for Carrying Out the Invention
The directional terms mentioned in the present invention, such as "upper", "lower", "front", "rear", "left", "right", "inner", "outer" and "side", refer only to the directions in the accompanying drawings; they are used only to explain and illustrate the present invention and are not intended to limit its scope of protection.
Referring to FIG. 1, the industrial augmented reality combined positioning system of the present invention includes an intelligent perception module, a base station positioning module and a service decision module.
The base station positioning module is built on the existing HTC VIVE positioning system and includes a base station 1, a helmet 2 and a server 3: the base station emits laser light, the sensors on the helmet receive the laser signals, the data is transmitted to the server for analysis, and the pose of the helmet relative to the base station is calculated.
The intelligent perception module includes a portable microprocessor 4 and a camera 5 integrated on the helmet 2. The camera transmits image information to the microprocessor, which uses built-in positioning and recognition algorithms to recognize objects and calculate their poses in the scene.
The service decision module receives the pose from the intelligent perception module and the helmet information detected by the base station, obtains the final pose through the coordinate transformation relationship and, according to the application scenario, fuses the 3D model into the video stream in real time based on this pose to achieve the augmented reality effect.
Referring to FIG. 2, the process and method of the above coordinate transformation are as follows.
Step 1: The camera images are processed by the front end of the VINS-Fusion algorithm. OpenCV is first used to extract SIFT feature points from the images, and the motion between two consecutive frames is calculated from the feature-matching relationship between them.
Step 2: The camera coordinate system takes the left eye as its reference. The pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
Step 3: In the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene's coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
The calculation of R1 and T1 of the camera relative to the object in FIG. 2 (where R denotes a rotation matrix and T a translation matrix) is as follows. As shown in FIG. 3, during the continuous motion of camera 5, take two frames $I_1$ and $I_2$ captured during the motion, and let the camera motion between the two frames be $R, t$. The optical centers of the two camera poses are $O_1$ and $O_2$. Now consider a feature point $p_1$ in $I_1$ that corresponds to a feature point $p_2$ in $I_2$; the two feature points are obtained by feature matching.

In the coordinate system of the first frame, let the spatial position of the point $P$ be

$$P = [X, Y, Z]^T \qquad (1)$$

According to the pinhole camera model, the pixel positions of the two pixels $p_1$ and $p_2$ are

$$s_1 p_1 = K P, \qquad s_2 p_2 = K (R P + t) \qquad (2)$$

where $s_1, s_2$ are the depths of the point in the two views and $K$ is the camera intrinsic matrix. Take

$$x_1 = K^{-1} p_1, \qquad x_2 = K^{-1} p_2 \qquad (3)$$

where $x_1$ and $x_2$ are the coordinates of the two pixels on the normalized plane. Substituting into the above equation gives, up to a scale factor,

$$x_2 = R x_1 + t \qquad (4)$$

Left-multiplying both sides by $t^{\wedge}$ and then by $x_2^T$, and noting that the left-hand side vanishes, gives

$$x_2^T t^{\wedge} R x_1 = 0 \qquad (5)$$

Substituting $p_1$ and $p_2$ back in, we have

$$p_2^T K^{-T} t^{\wedge} R K^{-1} p_1 = 0 \qquad (6)$$

The above formula is called the epipolar constraint; it contains both the translation and the rotation. The two matrices in the middle are the fundamental matrix $F$ and the essential matrix $E$, where

$$E = t^{\wedge} R, \qquad F = K^{-T} E K^{-1}, \qquad x_2^T E x_1 = p_2^T F p_1 = 0 \qquad (7)$$

From the essential matrix $E$, the camera motion $R, t$ is recovered; $R, t$ can be obtained by the singular value decomposition (SVD) method. The $R, t$ calculated in this process correspond to $R_1, T_1$ in FIG. 2.
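By way of illustration only (not part of the original disclosure), the relative motion described by equations (1) to (7) could be recovered with OpenCV roughly as sketched below, assuming a calibrated pinhole camera with intrinsic matrix K and two grayscale frames img1 and img2; all variable names here are illustrative:

```python
import cv2
import numpy as np

def estimate_relative_motion(img1, img2, K):
    """Estimate (R, t) between two frames from SIFT matches and the essential matrix."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix E = t^ R, estimated with RANSAC (epipolar constraint, eq. (6)-(7)).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Decompose E and pick the physically valid (R, t).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

cv2.recoverPose decomposes the essential matrix internally by SVD and selects the physically valid rotation and translation, which matches the role of R1, T1 described above; note that the translation is recovered only up to scale.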
The calculation of R3 and T3 of the helmet relative to the object in FIG. 2 (where R denotes a rotation matrix and T a translation matrix) is as follows. As shown in FIG. 4, three related coordinate systems are defined: the geodetic coordinate system (world coordinate system), the Lighthouse optical coordinate system and the helmet coordinate system. After the HTC VIVE system has been calibrated, the HTC VIVE helmet can be used normally. The calibration process mainly involves two formulas: formula (8) gives the transformation relationship from the geodetic coordinate system to the body coordinate system, and formula (9) gives the transformation relationship from the Lighthouse coordinate system to the body coordinate system.
By selecting 5 points on the helmet rigid body, which yields 15 unknowns and 15 equations, the calibration parameters, namely the laser scanning center of the Lighthouse coordinate system and the associated transformation parameters, can be calculated from formulas (8) and (9).
Once all the calibration parameters have been obtained, the coordinates and attitude of the known rigid body in the geodetic coordinate system are computed from its body coordinate system.
The forward calculation is given by formula (10); its 9 unknowns are determined from 9 equations, and together with formula (11) the remaining transformation parameters are obtained, giving R3 and T3 at the same time.
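The exact calibration formulas (8) to (11) are given in the original drawings and are not reproduced here. Purely as an illustrative sketch of the kind of computation involved, a rigid transform between a body (helmet) frame and the geodetic frame can be fitted from a few matched rigid-body points, such as the 5 helmet points mentioned above, with an SVD-based least-squares (Kabsch) fit; the function and variable names below are assumptions:

```python
import numpy as np

def fit_rigid_transform(pts_body, pts_world):
    """Least-squares rigid transform (R, T) such that pts_world is close to R @ p + T.

    pts_body, pts_world: (N, 3) arrays of matched points (N >= 3, e.g. N = 5).
    """
    cb = pts_body.mean(axis=0)
    cw = pts_world.mean(axis=0)
    H = (pts_body - cb).T @ (pts_world - cw)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    T = cw - R @ cb
    return R, T
```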
Referring to FIG. 5, the camera coordinate system and the helmet coordinate system are defined as shown in the figure. The motion of the camera can be estimated from the feature-point matching relationship between consecutive motion frames of the camera, while the pose of the helmet is calculated in real time by the HTC VIVE base station positioning system. Using the conversion relationship between the optical coordinate system and the local coordinate system in the base station positioning principle, it is only necessary to calculate the pose relationship between the camera and the helmet in order to determine the pose relationship between the camera and the base station, thereby realizing the conversion from the image coordinate system to the camera coordinate system and then to the HTC VIVE coordinate system.

Taking a point $P_c$ in the camera's normalized coordinate system as an example, its mapping to the corresponding point $P_h$ in the helmet coordinate system satisfies formula (12),

$$P_h = R_{ch} P_c + T_{ch} \qquad (12)$$

whose expanded form is formula (13). In formula (13), $R_{ch}$ represents the rotation between the two coordinate systems; by observation there is no rotation between them. $T_{ch}$ represents the translation between the two coordinate systems; its value is given by formula (14), with the specific values taken from actual measurement.

Through the above formulas, the pixels of the video frame are transformed into the HTC VIVE coordinate system, realizing the transformation from the camera coordinate system to the base station coordinate system.
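As an illustrative sketch (not taken from the original disclosure), the chain of conversions described above, from the camera frame to the helmet frame and then to the HTC VIVE base station frame, can be written as a composition of homogeneous transforms. The camera-to-helmet offset and the helmet pose values below are placeholders standing in for the measured value of formula (14) and the pose reported by the base station tracking:

```python
import numpy as np

def to_homogeneous(R, T):
    """Pack a rotation matrix R (3x3) and translation T (3,) into a 4x4 transform."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

# Camera -> helmet: no rotation (formula (13)), only a measured offset (formula (14)).
T_cam_to_helmet = to_homogeneous(np.eye(3), np.array([0.0, -0.05, 0.10]))  # placeholder measurement

# Helmet -> base station: pose reported in real time by the HTC VIVE tracking.
R_hb, T_hb = np.eye(3), np.array([1.2, 0.8, 2.0])   # placeholder values
T_helmet_to_base = to_homogeneous(R_hb, T_hb)

# A point expressed in the camera's normalized coordinate system (homogeneous form).
P_c = np.array([0.1, 0.2, 1.0, 1.0])

# Compose the chain: camera frame -> helmet frame -> base station (HTC VIVE) frame.
P_base = T_helmet_to_base @ T_cam_to_helmet @ P_c
print(P_base[:3])
```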
The workflow of the industrial augmented reality combined positioning system is as follows.
Step 1: Construct the perception space.
Turn on the server, the helmet, the camera and the portable microprocessor and unify their coordinate systems; connect the HTC VIVE and the portable microprocessor; fix the camera directly in front of the helmet; measure the camera coordinate system and the helmet coordinate system at the same time and calculate the rotation R and translation T between the two.
Step 2: Obtain the pose of the camera.
The camera collects images of the scene, which are then preprocessed (distortion correction, adaptive local histogram equalization); optical flow features are then extracted and tracked across consecutive frames, and the stably tracked optical flow points are matched to obtain the Rotation (x, y, z, w) and Position (x, y, z) between two frames.
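A minimal sketch of this preprocessing and tracking step is given below; it is illustrative only and assumes OpenCV, a known intrinsic matrix K and distortion coefficients dist (none of the names are taken from the original disclosure):

```python
import cv2
import numpy as np

def preprocess(frame, K, dist):
    """Distortion correction followed by adaptive local histogram equalization (CLAHE)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    undistorted = cv2.undistort(gray, K, dist)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(undistorted)

def track_between_frames(prev_img, cur_img, K):
    """Track points with pyramidal Lucas-Kanade optical flow and recover (R, t)."""
    p0 = cv2.goodFeaturesToTrack(prev_img, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, cur_img, p0, None)
    good0 = p0[status.ravel() == 1].reshape(-1, 2)
    good1 = p1[status.ravel() == 1].reshape(-1, 2)

    # Relative motion between the two frames from the stably tracked points.
    E, mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=mask)
    return R, t
```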
Step 3: Obtain the helmet pose.
The pose of the helmet is located through two base stations, each of which emits a laser; once the sensors on the helmet detect the laser signal, the pose of the helmet relative to the base station, i.e. Position (x, y, z) and Rotation (x, y, z, w), can be calculated and sent to the server.
Step 4: Calculate the pose in the HTC VIVE positioning system.
Denote the pair R1, T1 by the transform T1, the pair R2, T2 by the transform T2, and the pair R3, T3 by the transform T3, for example in homogeneous form

$$T_i = \begin{bmatrix} R_i & T_i \\ 0 & 1 \end{bmatrix}, \quad i = 1, 2, 3.$$

The final R and T are then obtained by composing T1, T2 and T3, and R and T are finally recovered by SVD decomposition.
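The exact composition used in step 4 follows the formulas shown in the original drawings. Purely as an illustrative sketch of the kind of operation involved, the snippet below composes three homogeneous transforms and projects the resulting rotation back onto SO(3) with an SVD; the composition order here is an assumption:

```python
import numpy as np

def compose_and_orthonormalize(T1, T2, T3):
    """Compose three 4x4 homogeneous transforms and clean up the rotation via SVD."""
    T = T3 @ T2 @ T1                      # assumed composition order, for illustration only
    R, t = T[:3, :3], T[:3, 3]

    # Numerical noise can make R slightly non-orthogonal; project it back onto SO(3).
    U, _, Vt = np.linalg.svd(R)
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    return R, t
```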
Step 5: Build the fused AR scene.
Design scenes that associate poses with models; when the pose of the system meets the conditions set in the program, the corresponding virtual model is loaded and rendered into the real world.
Step 6: Render and display.
The rendered AR scene is output to the display unit of the helmet for display.
In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is adopted only for clarity; those skilled in the art should take the specification as a whole, and the technical solutions in the various embodiments may also be appropriately combined to form other embodiments that can be understood by those skilled in the art.

Claims (8)

  1. An industrial augmented reality combined positioning system, characterized in that the industrial augmented reality combined positioning system comprises an intelligent perception module, a base station positioning module and a service decision module;
    the base station positioning module is built on the existing HTC VIVE positioning system and comprises a base station (1), a helmet (2) and a server (3); the base station emits laser light, the sensors on the helmet receive the laser signals, the data is transmitted to the server for analysis, and the pose of the helmet relative to the base station is calculated;
    the intelligent perception module comprises a portable microprocessor (4) and a camera (5) integrated on the helmet (2); the camera transmits image information to the microprocessor, which uses built-in positioning and recognition algorithms to recognize objects and calculate the poses of objects in the scene;
    the service decision module receives the pose from the intelligent perception module and the helmet information detected by the base station, obtains the final pose through a coordinate transformation relationship and, according to the application scenario, fuses the 3D model into the video stream in real time based on this pose to achieve the augmented reality effect.
  2. The industrial augmented reality combined positioning system according to claim 1, characterized in that the workflow of the positioning system is as follows:
    Step 1: construct the perception space;
    Step 2: obtain the pose of the camera;
    Step 3: obtain the helmet pose;
    Step 4: calculate the pose in the HTC VIVE positioning system;
    Step 5: build the fused AR scene;
    Step 6: render and display.
  3. The industrial augmented reality combined positioning system according to claim 1, characterized in that the process and method of the coordinate transformation are as follows:
    Step 1: the camera images are processed by the front end of the VINS-Fusion algorithm; OpenCV is first used to extract SIFT feature points from the images, and the motion between two consecutive frames is calculated from the feature-matching relationship between them.
  4. Step 2: the camera coordinate system takes the left eye as its reference; the pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
  5. Step 3: in the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene's coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
  6. The industrial augmented reality combined positioning system according to claim 2, characterized in that step 1 comprises turning on the server, the helmet, the camera and the portable microprocessor and unifying their coordinate systems, connecting the HTC VIVE and the portable microprocessor, fixing the camera directly in front of the helmet, measuring the camera coordinate system and the helmet coordinate system at the same time, and calculating the rotation R and translation T between the two.
  7. The industrial augmented reality combined positioning system according to claim 2, characterized in that step 2 comprises the camera collecting images of the scene, preprocessing the images, extracting optical flow features, tracking the optical flow across consecutive frames, and matching the stably tracked optical flow points to obtain the Rotation and Position between two frames.
  8. The industrial augmented reality combined positioning system according to claim 2, characterized in that in step 3 the pose of the helmet is located through two base stations, each of which emits a laser; once the sensors on the helmet detect the laser signal, the pose of the helmet relative to the base station, i.e. the Position and Rotation, can be calculated and sent to the server.
PCT/CN2020/115089 2020-09-07 2020-09-14 Industrial augmented reality combined positioning system WO2022047828A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010926641.5A CN112116631A (zh) 2020-09-07 2020-09-07 Industrial augmented reality combined positioning system
CN202010926641.5 2020-09-07

Publications (1)

Publication Number Publication Date
WO2022047828A1 true WO2022047828A1 (zh) 2022-03-10

Family

ID=73802291

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115089 WO2022047828A1 (zh) 2020-09-07 2020-09-14 Industrial augmented reality combined positioning system

Country Status (2)

Country Link
CN (1) CN112116631A (zh)
WO (1) WO2022047828A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529808A (zh) * 2022-04-21 2022-05-24 南京北控工程检测咨询有限公司 一种管道检测全景拍摄处理方法
CN116664681A (zh) * 2023-07-26 2023-08-29 长春工程学院 基于语义感知的电力作业智能协同增强现实系统及方法
CN118471050A (zh) * 2024-07-10 2024-08-09 成都成电光信科技股份有限公司 一种适用模拟飞行训练的混合现实头盔系统

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967341B (zh) * 2021-02-23 2023-04-25 湖北枫丹白露智慧标识科技有限公司 基于实景图像的室内视觉定位方法、系统、设备及存储介质
US20230031480A1 (en) * 2021-07-28 2023-02-02 Htc Corporation System for tracking camera and control method thereof
CN115514885B (zh) * 2022-08-26 2024-03-01 燕山大学 基于单双目融合的远程增强现实随动感知系统及方法
CN116372954A (zh) * 2023-05-26 2023-07-04 苏州融萃特种机器人有限公司 Ar沉浸式遥操作排爆机器人系统、控制方法和存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105739704A (zh) * 2016-02-02 2016-07-06 上海尚镜信息科技有限公司 基于增强现实的远程引导方法和系统
CN108416846A (zh) * 2018-03-16 2018-08-17 北京邮电大学 一种无标识三维注册算法
CN109032329A (zh) * 2018-05-31 2018-12-18 中国人民解放军军事科学院国防科技创新研究院 面向多人增强现实交互的空间一致性保持方法
US10551623B1 (en) * 2018-07-20 2020-02-04 Facense Ltd. Safe head-mounted display for vehicles
CN110858414A (zh) * 2018-08-13 2020-03-03 北京嘀嘀无限科技发展有限公司 图像处理方法、装置、可读存储介质与增强现实系统

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529808A (zh) * 2022-04-21 2022-05-24 南京北控工程检测咨询有限公司 一种管道检测全景拍摄处理方法
CN114529808B (zh) * 2022-04-21 2022-07-19 南京北控工程检测咨询有限公司 一种管道检测全景拍摄处理系统及方法
CN116664681A (zh) * 2023-07-26 2023-08-29 长春工程学院 基于语义感知的电力作业智能协同增强现实系统及方法
CN116664681B (zh) * 2023-07-26 2023-10-10 长春工程学院 基于语义感知的电力作业智能协同增强现实系统及方法
CN118471050A (zh) * 2024-07-10 2024-08-09 成都成电光信科技股份有限公司 一种适用模拟飞行训练的混合现实头盔系统

Also Published As

Publication number Publication date
CN112116631A (zh) 2020-12-22

Similar Documents

Publication Publication Date Title
WO2022047828A1 (zh) Industrial augmented reality combined positioning system
US11693242B2 (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
TWI574223B (zh) 運用擴增實境技術之導航系統
US11778403B2 (en) Personalized HRTFs via optical capture
JP2019536170A (ja) 仮想的に拡張された視覚的同時位置特定及びマッピングのシステム及び方法
JP2018511098A (ja) 複合現実システム
WO2017173735A1 (zh) 一种基于视频透视的智能眼镜系统及其透视方法
KR20140122126A (ko) 투명 디스플레이를 이용한 증강현실 구현 장치 및 그 방법
US20230152084A1 (en) Height Measurement Method and Apparatus, and Terminal
US20210390882A1 (en) Blind assist eyewear with geometric hazard detection
JP2023502552A (ja) ウェアラブルデバイス、インテリジェントガイド方法及び装置、ガイドシステム、記憶媒体
WO2024131200A1 (zh) 基于单目视觉的车辆3d定位方法、装置及汽车
GB2588441A (en) Method and system for estimating the geometry of a scene
CN109883433A (zh) 基于360度全景视图的结构化环境中车辆定位方法
WO2018088035A1 (ja) 画像認識処理方法、画像認識処理プログラム、データ提供方法、データ提供システム、データ提供プログラム、記録媒体、プロセッサ、及び電子機器
CN111246116B (zh) 一种用于屏幕上智能取景显示的方法及移动终端
CN112085777A (zh) 六自由度vr眼镜
US11234090B2 (en) Using audio visual correspondence for sound source identification
WO2021049281A1 (ja) 画像処理装置、ヘッドマウントディスプレイ、および空間情報取得方法
KR101668649B1 (ko) 주변 환경 모델링 방법 및 이를 수행하는 장치
US20210165999A1 (en) Method and system for head pose estimation
CN214122904U (zh) 舞蹈姿势反馈装置
CN117392518B (zh) 一种低功耗的视觉定位和建图芯片及其方法
JPH0475639A (ja) 顔画像モデル生成装置
US20230319476A1 (en) Eyewear with audio source separation using pose trackers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20952070

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20952070

Country of ref document: EP

Kind code of ref document: A1