WO2017088187A1 - System and method for implementing position tracking of a virtual reality device - Google Patents

System and method for implementing position tracking of a virtual reality device

Info

Publication number
WO2017088187A1
WO2017088187A1 (PCT/CN2015/095848)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual reality
imaging
camera
reality device
processor
Prior art date
Application number
PCT/CN2015/095848
Other languages
English (en)
Chinese (zh)
Inventor
周琨
李乐
Original Assignee
深圳市欢创科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市欢创科技有限公司
Priority to PCT/CN2015/095848
Publication of WO2017088187A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The present application relates to the field of virtual reality, and in particular to a system and method for implementing position tracking of a virtual reality device.
  • Virtual reality (VR) devices such as helmets and handles have built-in inertial sensors, for example three-axis accelerometers and six-axis gyroscopes.
  • Because these sensors use relative positioning, angle and displacement values are calculated by integrating and double-integrating the sensor readings. There is no absolute world reference point, so absolute position detection and tracking of the helmet and handle cannot be performed, and other sensors are necessary for position detection and tracking. (The resulting drift is illustrated in the first sketch following this list.)
  • The prior art includes a system named "Constellation" for position tracking of a virtual reality device, which uses an external monocular infrared camera as the sensor while dozens of infrared LED active light sources are placed on the helmet and handle as marker points.
  • The infrared camera captures the LED point light sources.
  • The absolute position of each LED light source can be roughly estimated from the coordinates at which it is imaged on the surface of the CMOS sensor, and the spatial attitude of the VR device can then be calculated from the dozens of LED light sources.
  • However, the above position tracking system has a number of deficiencies.
  • The "Constellation" system needs dozens of LEDs on the VR device, soldered onto a flexible PCB, so placing each LED point source in the correct position during production is very difficult.
  • This makes manufacturing difficult and raises labor costs.
  • Using so many LEDs also increases material cost.
  • The system detects targets by having the infrared camera find bright spots against the background, namely the light-emitting LEDs, so detection is easily affected by ambient light and the stability of use suffers.
  • Because the system uses monocular infrared camera positioning, it cannot determine the precise 3D coordinates of an LED light source: it determines only 2D coordinates and estimates approximate 3D coordinates from the size of the captured light-source area. Many LED point light sources are needed to keep the final result usable, yet the result is still inaccurate, so 3D positioning accuracy is not substantially improved despite the added cost.
  • One technical solution of the present invention is a system for implementing position tracking of a virtual reality device, the system comprising: an image imaging area; a virtual reality device located within the image imaging area and provided with a plurality of marker points capable of reflecting light, the reflection coefficient of the marker points being higher than that of other portions in the image imaging area; a processor built into the camera or an electronic terminal; and a camera that scans the virtual reality device sub-area by sub-area within a predetermined time interval, images each sub-area twice, and transmits the resulting images to the processor. The processor processes the imaged images to obtain position data of the marker points, and calculates the position of the virtual reality device from that position data. (Illustrative code sketches of each step follow this list.)
  • Another technical solution of the present invention is a method for implementing position tracking of a virtual reality device, the method comprising the steps of: providing the system described in the above technical solution; using the camera to scan the virtual reality device sub-area by sub-area within a predetermined time interval, imaging each sub-area twice and transmitting the resulting images to the processor; and using the processor to process the imaged images to obtain position data of the marker points, and calculating the position of the virtual reality device from that position data.
  • FIG. 1 is a schematic structural diagram of a position tracking system of the prior art.
  • FIG. 2 is a schematic structural diagram of a virtual reality device according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of dividing an imaged image region into a plurality of sub-regions according to another embodiment of the present invention.
  • FIG. 4 shows the image produced by a camera's first shot according to an embodiment of the present invention.
  • FIG. 5 shows the image produced by the camera's second shot according to an embodiment of the present invention.
  • FIG. 6 shows the difference-processed and binarized image provided by an embodiment of the present invention.
  • The present invention provides a system capable of achieving more accurate position tracking for a virtual reality device.
  • The system includes a VR device for which position tracking is implemented.
  • The system may further include at least one electronic terminal, and the VR device may be connected to the electronic terminal by wire through a connection line. The electronic terminal may be, for example, a computer (desktop, all-in-one, or portable computer), game console, mobile phone, or television.
  • The VR device has a mask portion 2 and a wearing portion 3 for wearing the VR device on the face. A display screen is provided inside the mask portion 2, through which the user views the content supplied by the electronic terminal.
  • The VR device can also include a handle (not shown) for use therewith.
  • The system further includes a processor and a camera. The processor is built into the camera or the electronic terminal, and the camera can be mounted on an external electronic terminal with a display function, such as a computer (desktop, all-in-one, or portable computer), game console, mobile phone, TV, or tablet.
  • The electronic terminal and the camera can be connected by wire or wirelessly.
  • The VR device can also be wirelessly connected to the electronic terminal, specifically through a wireless connection technology such as Bluetooth, ZigBee, or Wi-Fi.
  • A housing portion 4 may be disposed in the mask portion 2 of the VR device, and the electronic terminal can be placed in the housing portion 4 to achieve an immersive viewing experience.
  • The camera may be an infrared camera or a visible-light camera.
  • The processor can be placed inside the camera or on a terminal connected to the VR device, such as a computer (desktop, all-in-one, or notebook computer), game console, mobile phone, TV, or other device with a display function.
  • The camera can be connected to the processor through a connection line; more specifically, the camera and processor can be connected via USB, Bluetooth, ZigBee, or Wi-Fi.
  • The camera is used in conjunction with a plurality of marker points 1 attached to the VR device to achieve position tracking of the VR device.
  • The marker points 1 can be attached to specific locations on the VR device, such as the mask portion, a handle portion (not shown), or the wearing portion. Depending on the specific application of the particular VR device, the number of marker points may be increased or decreased, and their shape may vary.
  • By capturing the marker points, the camera, together with the processor's analysis, determines their positions.
  • The system has an image imaging area, which is the imaging range of the camera; the image imaging area therefore also covers the VR device.
  • The marker points 1 on the VR device are made of a specific light-reflecting material; this material is retroreflective, returning light toward its source.
  • The material of the marker points 1 may be a reflective film.
  • For example, the material may be 3M reflective film.
  • The material may also be a reflective coating, marking tape, a reflective lattice, or the like.
  • The image imaging area is first divided into several sub-areas from top to bottom, and each sub-area is captured in turn. A sub-area may contain one or more marker points, or none at all.
  • Within a very short time, the infrared camera shoots the same sub-area twice in succession. During the first shot, the illumination device emits infrared light; as shown in FIG. 4, region 11 is the highlighted target region, distinguished from the background object region. During the second shot, the illumination device is controlled not to emit light, and the result is imaged as shown in FIG. 5.
  • Region 111 in that figure is the target region as imaged without illumination, again distinguished from the background object region.
  • The processor applies background subtraction to the two results: the two consecutive images of the same region are differenced. Since the interval between the two shots is extremely short, the ambient light hardly changes; although the illumination changes, a background object with a low reflection coefficient shows almost no difference, so its gray values in the two images are basically the same. For a target object with a high infrared reflection coefficient, however, the illumination source is bright in one shot and dark in the other, so the target's imaged appearance differs between the two images, and its gray value in the first shot is significantly higher than in the second.
  • Differencing therefore effectively eliminates the background object and isolates the target object. (A minimal code sketch of this two-shot differencing is given after this list.)
  • In FIG. 6, region 1111 is the binarized image obtained after taking the difference.
  • The infrared camera may preferably be a binocular infrared camera. The target points are captured to form a binocular image pair, after which the processor processes the imaged images and solves for the 3D coordinates of each target point. Once the 3D coordinates of all target points have been acquired, the complete spatial pose of the VR device can be constructed. (Triangulation and pose-fitting sketches follow this list.)
  • The camera may also be a visible-light camera, since the marker points 1 likewise have a reflection coefficient for visible light significantly higher than that of ordinary objects. As before, the image imaging area is first divided into several sub-areas from top to bottom. Then, within a very short time, the visible-light camera shoots the same sub-area twice in succession: during the first shot the illumination device on the camera emits visible light and the sub-area is imaged; during the second shot the illumination device is controlled not to emit light and the sub-area is imaged again. The two images are then differenced. Because the interval between the shots is extremely short, the ambient light hardly changes, and although the illumination changes, background objects with low reflection coefficients show almost no difference.
  • The gray values of a background object in the two images are therefore basically the same; but for a target object with a high visible-light reflection coefficient, the differing brightness of the illumination source between the two shots makes the target's imaged appearance differ between the two images.
  • The gray value of the target object in the first shot is significantly higher than in the second.
  • The processor then processes the two imaged images and removes the background object by background subtraction to obtain the target object.
  • The visible-light camera can be a binocular visible-light camera.
  • The imaging principle is the same as that of the binocular infrared camera.
  • The 3D position of the target object is obtained by imaging it and computing through the processor, thereby yielding the 3D position of the VR device.
  • The present invention may also employ the various embodiments listed below.
  • The solutions of any two or more of the system embodiments among Embodiments 1-14 may be combined to form a new system embodiment, and the solutions of any two or more of the method embodiments among Embodiments 1-14 may likewise be combined to form a new method embodiment.
  • Embodiment 1 A system for implementing position tracking of a virtual reality device, the system comprising: an image imaging area; a virtual reality device located within the image imaging area and provided with a plurality of marker points capable of reflecting light, the reflection coefficient of the marker points being higher than that of other portions in the image imaging area; a processor built into the camera or an electronic terminal; and a camera connected to the processor by wire or wirelessly; wherein the camera scans the virtual reality device sub-area by sub-area within a predetermined time interval, photographs each sub-area twice, and transmits the resulting images to the processor; and the processor processes the imaged images to obtain position data of the marker points and calculates the position of the virtual reality device from that position data.
  • Embodiment 2 The image imaging area is divided into a plurality of sub-areas, and the camera scans the sub-areas of the image imaging area within the predetermined time interval, imaging each sub-area twice: in the first of the two shots, the illumination device of the camera emits infrared light or visible light; in the second, the illumination device does not emit light. The two images are then transmitted to the processor, which difference-processes them to eliminate the background object, obtains the positions of the marker points in the selected sub-area, and calculates the position of the virtual reality device from those marker-point positions.
  • Embodiment 4 The marker points are made of reflective film.
  • Embodiment 5 The marker points are made of a reflective coating, marking tape, or a reflective lattice.
  • Embodiment 6 The system further includes an electronic terminal, the electronic terminal being connected to the virtual reality device by wire or wirelessly and configured to provide images for viewing on the virtual reality device.
  • Embodiment 7 The virtual reality device is a head-mounted virtual reality device, virtual reality glasses, or a virtual reality handle.
  • Embodiment 8 The camera has a connection clip, and the camera is fixed to the electronic terminal by the connection clip.
  • Embodiment 9 A method for implementing position tracking of a virtual reality device, the method comprising the steps of: providing the system according to Embodiment 1; using the camera to scan the virtual reality device sub-area by sub-area within a predetermined time interval, photographing each sub-area twice and transmitting the resulting images to the processor; and using the processor to process the imaged images to obtain position data of the marker points and to calculate the position of the virtual reality device from that position data.
  • Embodiment 10 The following steps are further included:
  • Embodiment 11 In the first of the at least two imaging shots, the illumination device of the camera is controlled to emit infrared light or visible light; in the second of the at least two imaging shots, the illumination device of the camera is controlled not to emit light.
  • Embodiment 12 The system further includes an electronic terminal, the electronic terminal being connected to the virtual reality device by wire or wirelessly and configured to provide images for viewing on the virtual reality device.
  • Embodiment 13 The virtual reality device is a head-mounted virtual reality device, virtual reality glasses, or a virtual reality handle.
  • Embodiment 14 The camera has a connection clip, and the camera is fixed to the electronic terminal by the connection clip.
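
The drift problem described in the background (angle and displacement obtained by integrating inertial readings, with no absolute reference) can be made concrete with a small numeric sketch. This is an illustration only; the bias value is an assumed, arbitrary number, not a figure from this publication.

```python
# Position error from double-integrating a constant accelerometer bias.
# err(t) = 0.5 * bias * t^2 grows quadratically, which is why inertial
# sensors alone cannot provide absolute position tracking.
bias = 0.01  # m/s^2, an assumed illustrative bias
for t in (1, 10, 60):  # seconds
    print(f"after {t:>2} s: {0.5 * bias * t**2:.3f} m of drift")
# -> 0.005 m, 0.500 m, 18.000 m: within a minute the estimate is useless.
```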
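
The two-shot differencing and binarization described above (illuminator on for the first shot, off for the second, then subtract and threshold) can be sketched in a few lines of Python with OpenCV. This is a minimal illustration of the technique, not the patented implementation; the threshold value is an assumption, and frame capture and illuminator control are left abstract.

```python
import cv2
import numpy as np

def detect_marker_centroids(frame_lit: np.ndarray,
                            frame_unlit: np.ndarray,
                            thresh: int = 50) -> np.ndarray:
    """Return (N, 2) centroids of retroreflective marker points.

    frame_lit   -- grayscale shot of a sub-area with the illuminator on
    frame_unlit -- grayscale shot of the same sub-area, illuminator off
    """
    # Background objects have low reflection coefficients, so their gray
    # values barely change between the two shots; markers change strongly.
    diff = cv2.absdiff(frame_lit, frame_unlit)

    # Binarize the difference image (cf. region 1111 in FIG. 6).
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)

    # Treat each connected bright blob as one marker point.
    n_labels, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    return centroids[1:n_labels]  # label 0 is the background
```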
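
For the preferred binocular camera, each marker's 3D coordinates follow from triangulating its matched pixel coordinates in the left and right views. A minimal sketch, assuming the stereo pair has already been calibrated to give 3x4 projection matrices (calibration and left/right marker matching are not detailed in this publication):

```python
import cv2
import numpy as np

def triangulate_markers(P_left: np.ndarray, P_right: np.ndarray,
                        pts_left: np.ndarray,
                        pts_right: np.ndarray) -> np.ndarray:
    """Triangulate matched marker centroids into 3D coordinates.

    P_left, P_right  -- 3x4 projection matrices from stereo calibration
    pts_left/right   -- (N, 2) marker centroids, matched by index
    Returns an (N, 3) array of 3D marker positions.
    """
    # OpenCV expects 2xN arrays of image points.
    pts4 = cv2.triangulatePoints(P_left, P_right,
                                 pts_left.T.astype(np.float64),
                                 pts_right.T.astype(np.float64))
    return (pts4[:3] / pts4[3]).T  # homogeneous -> Euclidean
```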
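
Once the markers' 3D coordinates are known, the "complete spatial pose" of the VR device can be reconstructed by fitting a rigid transform between the markers' known positions on the device and their triangulated positions in space. The publication does not name a fitting method; the sketch below uses the standard Kabsch/SVD algorithm as one common choice.

```python
import numpy as np

def fit_pose(model_pts: np.ndarray, world_pts: np.ndarray):
    """Rigid transform (R, t) with world_pts ~= R @ model_pts + t.

    model_pts -- (N, 3) marker positions in the device's own frame
    world_pts -- (N, 3) triangulated positions, same marker order
    """
    cm, cw = model_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (model_pts - cm).T @ (world_pts - cw)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cw - R @ cm
    return R, t  # the device's pose in the camera/world frame
```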
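
Putting the pieces together, the per-interval loop (scan each sub-area, shoot lit then unlit, difference, triangulate, fit pose) might look like the following. The `stereo_rig` object and its methods are hypothetical, as is the assumption that markers are already matched across views and against `model_pts`; hardware control is not specified in this publication.

```python
import numpy as np

# A hypothetical tracking loop combining the sketches above.
def track_once(stereo_rig, P_left, P_right, model_pts):
    left_pts, right_pts = [], []
    for sub_area in stereo_rig.sub_areas():           # top-to-bottom strips
        stereo_rig.illuminator_on()
        lit_l, lit_r = stereo_rig.capture(sub_area)   # first shot, lit
        stereo_rig.illuminator_off()
        dark_l, dark_r = stereo_rig.capture(sub_area) # second shot, dark
        left_pts.extend(detect_marker_centroids(lit_l, dark_l))
        right_pts.extend(detect_marker_centroids(lit_r, dark_r))
    pts3d = triangulate_markers(P_left, P_right,
                                np.asarray(left_pts), np.asarray(right_pts))
    return fit_pose(model_pts, pts3d)                 # (R, t) of the device
```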

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to a method and system for implementing position tracking of a virtual reality device. The system comprises: an imaging region; a virtual reality device provided with a plurality of marker points (1) capable of reflecting light within the imaging region, the reflection coefficient of the marker points (1) being higher than that of other parts of the imaging region; a processor built into a camera or an electronic terminal; and a camera for scanning sub-regions of the virtual reality device one by one within a predetermined time interval, each sub-region being imaged by being photographed twice, with the resulting images transmitted to the processor. The processor obtains position data of the marker points (1) by processing the formed images, and performs a calculation according to the position data of the marker points (1) to obtain the position and attitude of the virtual reality device. The system and method can implement more accurate position tracking of a virtual reality device.
PCT/CN2015/095848 2015-11-27 2015-11-27 System and method for implementing position tracking of a virtual reality device WO2017088187A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/095848 WO2017088187A1 (fr) 2015-11-27 2015-11-27 System and method for implementing position tracking of a virtual reality device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/095848 WO2017088187A1 (fr) 2015-11-27 2015-11-27 System and method for implementing position tracking of a virtual reality device

Publications (1)

Publication Number Publication Date
WO2017088187A1 (fr) 2017-06-01

Family

ID=58762888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/095848 WO2017088187A1 (fr) 2015-11-27 2015-11-27 System and method for implementing position tracking of a virtual reality device

Country Status (1)

Country Link
WO (1) WO2017088187A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937563A (zh) * 2009-07-03 2011-01-05 深圳泰山在线科技有限公司 Target detection method and device, and image acquisition apparatus used thereby
CN202159302U (zh) * 2011-07-28 2012-03-07 李钢 Augmented reality system with user interaction and input functions
WO2014016531A1 (fr) * 2012-07-26 2014-01-30 Institut Francais Des Sciences Et Technologies Des Transports, De L'amenagement Et Des Reseaux Single-camera method for determining the orientation of a solid
CN104238738A (zh) * 2013-06-07 2014-12-24 索尼电脑娱乐美国公司 System and method for generating augmented virtual reality scenes in a head-mounted system
CN104699247A (zh) * 2015-03-18 2015-06-10 北京七鑫易维信息技术有限公司 Machine-vision-based virtual reality interaction system and method
CN105045398A (zh) * 2015-09-07 2015-11-11 哈尔滨市一舍科技有限公司 Virtual reality interaction device based on gesture recognition

Similar Documents

Publication Publication Date Title
TWI722280B Controller tracking for multiple degrees of freedom
CN106681510B Pose recognition device, virtual reality display device, and virtual reality system
WO2014071254A4 Wireless watch-type computing and control device and method for 3D imaging, mapping, social networking, and interfacing
TWI317084B (en) Pointer positioning device and method
JP7248490B2 Information processing device and method for estimating position and attitude of a device
KR20150093831A Direct interaction system for mixed reality environments
EP3252714A1 Camera selection for position tracking
CN104246664B Transparent display virtual touch device that does not display a pointer
US11712619B2 Handle controller
WO2019155840A1 Information processing device, information processing method, and program
EP3954443A1 Device comprising a plurality of markers
Tsun et al. A human orientation tracking system using Template Matching and active Infrared marker
US10762658B2 Method and image pick-up apparatus for calculating coordinates of object being captured using fisheye images
US11944897B2 Device including plurality of markers
TWI468997B Pointing system and image system having a large operable range
JP7198149B2 Information processing device and device information derivation method
WO2017080533A2 Apparatus for interacting with a virtual reality environment
JP2006305332A Image processing device and endoscope using the same
WO2017163648A1 Head-mounted device
WO2017088187A1 System and method for implementing position tracking of a virtual reality device
JP7288792B2 Information processing device and device information derivation method
JP2019101476A Operation guidance system
CN110021044B Method for calculating coordinates of a photographed object using dual fisheye images, and image acquisition device
JP2016062336A Motion instruction system, motion instruction method, wearable terminal, and motion instruction management server
TWI836498B Method, system, and recording medium for accessory pairing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15909098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 9.10.18)

122 Ep: pct application non-entry in european phase

Ref document number: 15909098

Country of ref document: EP

Kind code of ref document: A1