WO2018107923A1 - Positioning feature point identification method for use in virtual reality space - Google Patents

Positioning feature point identification method for use in virtual reality space

Info

Publication number
WO2018107923A1
WO2018107923A1 · PCT/CN2017/109795 · CN2017109795W
Authority
WO
WIPO (PCT)
Prior art keywords
virtual reality
infrared
infrared point
point light
light
Prior art date
Application number
PCT/CN2017/109795
Other languages
French (fr)
Chinese (zh)
Inventor
李宗乘
党少军
Original Assignee
深圳市虚拟现实技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市虚拟现实技术有限公司
Publication of WO2018107923A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • G06F3/0325Detection arrangements using opto-electronic means using a plurality of light emitters or reflectors or a plurality of detectors forming a reference frame from which to derive the orientation of the object, e.g. by triangulation or on the basis of reference deformation in the picked up image

Definitions

  • the present invention relates to the field of virtual reality, and more particularly to a virtual reality spatial positioning feature point identification method.
  • Spatial positioning generally uses optical or ultrasonic measurement, and a model is built to derive the spatial position of the object to be measured.
  • A typical virtual reality spatial positioning system uses infrared points and a light-sensing camera to determine the spatial position of the object.
  • The infrared points are located at the front end of the near-eye display device.
  • During positioning, the light-sensing camera captures the positions of the infrared points and derives the user's physical coordinates. If the correspondence between at least three light sources and their projections is known, the PnP algorithm can be called to obtain the spatial position of the helmet.
  • The key to realizing this process is determining the light source ID (Identity, serial number) corresponding to each projection.
  • Current virtual reality spatial positioning systems, when determining the light source ID corresponding to each projection, often suffer from inaccurate correspondence and long correspondence times, which affects the accuracy and efficiency of positioning.
  • To address this defect, the present invention provides a virtual reality spatial positioning feature point identification method that determines projection IDs accurately and efficiently.
  • A virtual reality spatial positioning feature point identification method, which includes the following steps:
  • S1: confirming that all of the infrared point light sources on the virtual reality helmet are off, and if they are not all off, turning off any infrared point light source that is still lit;
  • S2: lighting one of the infrared point light sources on the virtual reality helmet, the processing unit recording the ID of the infrared point light source corresponding to the light spot on the image captured by the infrared camera;
  • S3: the virtual reality helmet keeping the infrared point light source lit in the previous frame in the lit state and lighting one new infrared point light source, the processing unit determining the ID of the infrared point light source corresponding to the newly added light spot on the image captured by the infrared camera;
  • S4: repeating S3 until all of the infrared point light sources are lit and the processing unit has determined the IDs of the infrared point light sources corresponding to all light spots on the image captured by the infrared camera.
  • When the virtual reality helmet is stationary, the ID of the infrared point light source corresponding to the newly added light spot is determined by comparing the difference between the current frame and the previous frame.
  • When the virtual reality helmet is moving, the processing unit uses the known history of the previous frame to apply a slight translation to the light spots of the previous frame image so that they can be matched to the light spots of the current frame image, and determines the ID of each matched light spot on the current frame image from this correspondence and the history of the previous frame.
  • The light spot on the current frame image that has no correspondence with the previous frame image corresponds to the ID of the newly lit infrared point light source.
  • Compared with the prior art, by lighting the infrared point light sources one by one and matching each light spot on the image captured by the infrared camera to its infrared point light source ID, the present invention provides an accurate and efficient method for determining light spot IDs.
  • When the virtual reality helmet is stationary, the ID of the newly added spot can be determined by comparing the two consecutive frames.
  • When the virtual reality helmet is moving, the newly added spot and its ID are determined by allowing for the displacement between frames, which provides a method for identifying spot IDs in multiple motion states of the virtual reality helmet. By monitoring whether the number of lit infrared point light sources matches the number of spots on the image, and whether the number of spots meets the number of points required by the PnP algorithm, the accuracy of the positioning is ensured and deviations are prevented.
  • FIG. 1 is a schematic diagram of a virtual reality helmet of a virtual reality spatial positioning feature point recognition method according to the present invention
  • FIG. 2 is a schematic diagram showing the principle of a virtual reality spatial positioning feature point recognition method according to the present invention
  • FIG. 3 is an infrared point image taken by an infrared camera.
  • The present invention provides a virtual reality spatial positioning feature point identification method that determines projection IDs accurately and efficiently.
  • The virtual reality spatial positioning feature point identification method of the present invention involves a virtual reality helmet 10, an infrared camera 20 and a processing unit 30; the infrared camera 20 is electrically connected to the processing unit 30.
  • The virtual reality helmet 10 includes a front panel 11, and a plurality of infrared point light sources 13 are distributed over the front panel 11 and the upper, lower, left and right side panels of the virtual reality helmet 10; each of the infrared point light sources 13 can be turned on or off as needed through the firmware interface of the virtual reality helmet 10.
  • FIG. 3 shows an infrared point image taken by an infrared camera.
  • When the front panel 11 of the virtual reality helmet 10 faces the infrared camera (not shown), due to the band-pass characteristic of the infrared camera, only the infrared point light sources 13 form spot projections on the image, while everything else forms a uniform background.
  • the infrared point source 13 on the virtual reality helmet 10 can form a spot of light on the image.
  • When ID identification starts, the virtual reality helmet 10 is in an initial state; one infrared point light source 13 on the virtual reality helmet 10 is lit, and the processing unit 30 records the correspondence between the lit infrared point light source 13 and the light spot on the image, i.e. the ID of the infrared point light source 13 corresponding to the light spot on the image captured by the infrared camera.
  • After this step is completed, the virtual reality helmet 10 keeps the infrared point light source 13 lit in the previous frame in the lit state and lights one new infrared point light source 13, so that two light spots can now be found on the image captured by the infrared camera 20, and the processing unit 30 determines the ID of the infrared point light source 13 corresponding to the newly added light spot.
  • This process is repeated: the virtual reality helmet 10 keeps the infrared point light sources 13 lit in the previous frame in the lit state, lights one new infrared point light source 13, and determines the ID of the newly added light spot on the image in the same way. One additional infrared point light source 13 is lit in each frame according to the above method until all of the infrared point light sources 13 are lit and every light spot has been successfully matched to the ID of a lit infrared point light source 13, at which point the ID identification process ends.
  • The method by which the processing unit 30 determines the ID of the infrared point light source 13 corresponding to a newly added light spot is as follows. In the initial state of the virtual reality helmet 10 there is no correspondence from a previous frame, or the data of the previous frame has been lost and the correspondence must be re-determined; the present invention therefore lights only one infrared point light source 13 initially, so that there is at most one light spot on the image, in which case the correspondence can easily be determined. By lighting one new infrared point light source 13 at a time, the required correspondence can also be determined while multiple infrared point light sources 13 are lit.
  • Specifically, two cases are distinguished. When the virtual reality helmet 10 is stationary, the light spot corresponding to the newly added infrared point light source 13 can be determined by comparing the difference between the current frame and the previous frame, and that light spot is assigned the ID of the newly lit infrared point light source 13.
  • When the virtual reality helmet 10 is moving, because the sampling interval of each frame is sufficiently small (typically about 30 ms), the position difference between each light spot of the previous frame and each light spot of the current frame, apart from the newly added spot, is small. The processing unit 30 therefore uses the known history of the previous frame to apply a slight translation to the light spots of the previous frame image so that they can be matched to the light spots of the current frame image, and determines the ID of each matched light spot on the current frame image from this correspondence and the history of the previous frame; likewise, the light spot on the current frame image that has no correspondence with the previous frame image is assigned the ID of the newly lit infrared point light source.
  • After ID identification is completed, the processing unit 30 can call the PnP algorithm to obtain the spatial position of the helmet.
  • The PnP algorithm belongs to the prior art and is not described again here.
  • Compared with the prior art, by lighting the infrared point light sources 13 one by one and matching each light spot on the image captured by the infrared camera 20 to its infrared point light source 13 ID, the present invention provides an accurate and efficient method for determining light spot IDs.
  • When the virtual reality helmet 10 is stationary, the ID of the newly added spot can be determined by comparing the two consecutive frames.
  • When the virtual reality helmet is moving, the newly added spot and its ID are determined by allowing for the displacement between frames, which provides a method for identifying spot IDs in multiple motion states of the virtual reality helmet.
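As a concrete illustration of the stationary case, the newly added light spot can be found by differencing two consecutive infrared frames. The sketch below is illustrative only; the array format, the threshold value and the restart convention are assumptions, not part of the patent.

    import numpy as np
    from scipy import ndimage  # used only for connected-component labeling

    def find_new_spot(prev_frame, curr_frame, threshold=50):
        """Return the centroid (x, y) of the spot present in curr_frame but not in prev_frame.

        prev_frame, curr_frame: 2-D uint8 arrays from the infrared camera.
        threshold: intensity increase treated as a newly lit spot (assumed value).
        """
        diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
        mask = diff > threshold                      # pixels that newly turned bright
        labels, count = ndimage.label(mask)          # group bright pixels into blobs
        if count != 1:
            return None                              # occlusion or noise: caller should restart ID identification
        cy, cx = ndimage.center_of_mass(mask)        # centroid of the single new blob
        return (cx, cy)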

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A positioning feature point identification method for use in a virtual reality space, comprising the following steps: confirming that all infrared point light sources on a virtual reality headset have been turned off, and if not all of said infrared point light sources have been turned off, turning off the infrared point light sources that are in an illuminated state; illuminating one of the infrared point light sources on the virtual reality headset, and a processing unit recording an infrared point light source ID which corresponds to a light spot on an image captured by an infrared camera; said virtual reality headset maintaining the infrared point light source illuminated at the time of a previous frame in an illuminated state, and also illuminating a new infrared point light source, and the processing unit determining an infrared point light source ID corresponding to a newly added light spot on the image captured by the infrared camera. By means of sequentially illuminating the infrared point light sources and correspondingly searching for infrared point light source IDs corresponding to light spots on an image captured by an infrared camera, the present method provides an accurate and efficient solution for determining light spot IDs.

Description

Virtual reality spatial positioning feature point identification method

Technical Field
[0001] The present invention relates to the field of virtual reality, and more particularly to a virtual reality spatial positioning feature point identification method.
Background Art
[0002] Spatial positioning generally uses optical or ultrasonic measurement, and a model is built to derive the spatial position of the object to be measured. A typical virtual reality spatial positioning system uses infrared points and a light-sensing camera to determine the spatial position of the object: the infrared points are located at the front end of the near-eye display device, and during positioning the light-sensing camera captures the positions of the infrared points and derives the user's physical coordinates. If the correspondence between at least three light sources and their projections is known, the PnP algorithm can be called to obtain the spatial position of the helmet, and the key to realizing this process is determining the light source ID (Identity, serial number) corresponding to each projection. Current virtual reality spatial positioning systems, when determining the light source ID corresponding to each projection, often suffer from inaccurate correspondence and long correspondence times, which affects the accuracy and efficiency of positioning.
Technical Problem
[0003] In order to overcome the defect that current virtual reality spatial positioning methods determine projection IDs (Identity, serial number) with low accuracy and efficiency, the present invention provides a virtual reality spatial positioning feature point identification method that determines projection IDs accurately and efficiently.
Solution to the Problem

Technical Solution
[0004] The technical solution adopted by the present invention to solve its technical problem is to provide a virtual reality spatial positioning feature point identification method comprising the following steps:
[0005] S1: confirming that all of the infrared point light sources on the virtual reality helmet are off, and if they are not all off, turning off any infrared point light source that is still lit;
[0006] S2: lighting one of the infrared point light sources on the virtual reality helmet, a processing unit recording the ID of the infrared point light source corresponding to the light spot on the image captured by an infrared camera;
[0007] S3: the virtual reality helmet keeping the infrared point light source lit in the previous frame in the lit state and lighting one new infrared point light source, the processing unit determining the ID of the infrared point light source corresponding to the newly added light spot on the image captured by the infrared camera;
[0008] S4: repeating S3 until all of the infrared point light sources are lit and the processing unit has determined the IDs of the infrared point light sources corresponding to all of the light spots on the image captured by the infrared camera.
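For illustration only, steps S1 to S4 can be read as the control loop sketched below in Python. The helper names (set_light, capture_frame, detect_spots, identify_new_spot) are hypothetical stand-ins for the helmet firmware interface and the image processing, not an actual API, and the sketch assumes one camera frame is processed per newly lit source.

    def assign_spot_ids(num_sources, set_light, capture_frame, detect_spots, identify_new_spot):
        """Sequentially light IR point sources and record which image spot carries which source ID."""
        # S1: make sure every infrared point light source is off
        for source_id in range(num_sources):
            set_light(source_id, on=False)

        spot_ids = {}                         # spot centroid (x, y) -> source ID
        prev_frame = capture_frame()

        for source_id in range(num_sources):  # S2 for the first source, then S3/S4
            set_light(source_id, on=True)     # keep previously lit sources on, add one new one
            frame = capture_frame()
            spots = detect_spots(frame)

            if len(spots) != source_id + 1:   # lit sources and visible spots must match
                return None                   # e.g. a source is occluded: restart from S1

            new_spot = identify_new_spot(prev_frame, frame, spot_ids)
            spot_ids[new_spot] = source_id
            prev_frame = frame

        return spot_ids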
[0009] Preferably, when the virtual reality helmet is stationary, the ID of the infrared point light source corresponding to the newly added light spot is determined by comparing the difference between the current frame and the previous frame.
[0010] Preferably, when the virtual reality helmet is moving, the processing unit uses the known history of the previous frame to apply a slight translation to the light spots of the previous frame image so that they can be matched to the light spots of the current frame image, and determines the ID of each matched light spot on the current frame image from this correspondence and the history of the previous frame.
[0011] Preferably, the light spot on the current frame image that has no correspondence with the previous frame image corresponds to the ID of the newly lit infrared point light source.
[0012] Preferably, in the process of executing S2 and S4, if the number of lit infrared point light sources does not match the number of light spots on the image, S1 is executed again.
[0013] Preferably, during the positioning process, if the number of light spots on the image does not satisfy the number of points required by the PnP algorithm, S1 is executed again.
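As a minimal sketch of the two restart conditions above, with the PnP minimum taken as the three correspondences mentioned in the background section (an assumption; real systems may require more):

    MIN_POINTS_FOR_PNP = 3   # at least three source/projection correspondences, per paragraph [0002]

    def must_restart(num_lit_sources, num_spots, identification_done):
        """Return True when the ID identification process has to be executed again from S1."""
        if not identification_done:
            # during S2-S4: every lit source must show up as exactly one spot
            return num_spots != num_lit_sources
        # during positioning: enough spots must remain visible for the PnP algorithm
        return num_spots < MIN_POINTS_FOR_PNP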
Advantageous Effects of the Invention
[0014] Compared with the prior art, by lighting the infrared point light sources one by one and matching each light spot on the image captured by the infrared camera to its infrared point light source ID, the present invention provides an accurate and efficient method for determining light spot IDs. When the virtual reality helmet is stationary, the ID of the newly added spot can be determined by comparing the two consecutive frames; when the virtual reality helmet is moving, the newly added spot and its ID are determined by allowing for the displacement between frames, which provides a method for identifying spot IDs in multiple motion states of the virtual reality helmet. By monitoring whether the number of lit infrared point light sources matches the number of light spots on the image, and whether the number of light spots meets the number of points required by the PnP algorithm, the accuracy of the positioning is ensured and deviations are prevented.
Brief Description of the Drawings
[0015] The present invention is further described below with reference to the accompanying drawings and embodiments, in which:

[0016] FIG. 1 is a schematic diagram of a virtual reality helmet used in the virtual reality spatial positioning feature point identification method of the present invention;
[0017] FIG. 2 is a schematic diagram showing the principle of the virtual reality spatial positioning feature point identification method of the present invention;
[0018] FIG. 3 is an infrared point image captured by the infrared camera.
Best Mode for Carrying Out the Invention
[0019] In order to overcome the defect that current virtual reality spatial positioning methods determine projection IDs with low accuracy and efficiency, the present invention provides a virtual reality spatial positioning feature point identification method that determines projection IDs accurately and efficiently.
[0020] In order to provide a clearer understanding of the technical features, objects and effects of the present invention, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
[0021] FIG. 1 and FIG. 2 show the principle of the virtual reality spatial positioning feature point identification method of the present invention. The method involves a virtual reality helmet 10, an infrared camera 20 and a processing unit 30; the infrared camera 20 is electrically connected to the processing unit 30. The virtual reality helmet 10 includes a front panel 11, and a plurality of infrared point light sources 13 are distributed over the front panel 11 and the upper, lower, left and right side panels of the virtual reality helmet 10; each of the infrared point light sources 13 can be turned on or off as needed through the firmware interface of the virtual reality helmet 10.
[0022] FIG. 3 shows an infrared point image captured by the infrared camera. When the front panel 11 of the virtual reality helmet 10 faces the infrared camera (not shown), due to the band-pass characteristic of the infrared camera, only the infrared point light sources 13 form spot projections on the image, while everything else forms a uniform background. Each infrared point light source 13 on the virtual reality helmet 10 can thus form a light spot on the image.
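Because the band-pass filtered image contains only bright spots on an otherwise uniform background, the spot centroids can be extracted by simple thresholding and blob labeling. The sketch below is illustrative only; the fixed threshold value is an assumption and would in practice depend on the camera and exposure.

    import numpy as np
    from scipy import ndimage

    def detect_spots(ir_frame, threshold=128):
        """Return a list of (x, y) centroids of light spots in a band-pass filtered IR image."""
        mask = ir_frame > threshold                      # spots are much brighter than the uniform background
        labels, count = ndimage.label(mask)              # one label per connected bright blob
        centroids = ndimage.center_of_mass(mask, labels, range(1, count + 1))
        return [(cx, cy) for (cy, cx) in centroids]      # convert (row, col) to (x, y)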
[0023] When ID identification starts, the virtual reality helmet 10 is in an initial state; one infrared point light source 13 on the virtual reality helmet 10 is lit, and the processing unit 30 records the correspondence between the lit infrared point light source 13 and the light spot on the image, i.e. the ID of the infrared point light source 13 corresponding to the light spot on the image captured by the infrared camera. After this step is completed, the virtual reality helmet 10 keeps the infrared point light source 13 lit in the previous frame in the lit state and lights one new infrared point light source 13, so that two light spots can now be found on the image captured by the infrared camera 20, and the processing unit 30 determines the ID of the infrared point light source 13 corresponding to the newly added light spot. This process is repeated: the virtual reality helmet 10 keeps the infrared point light sources 13 lit in the previous frame in the lit state, lights one new infrared point light source 13, and determines the ID of the newly added light spot on the image in the same way. One additional infrared point light source 13 is lit in each frame according to the above method until all of the infrared point light sources 13 are lit and every light spot has been successfully matched to the ID of a lit infrared point light source 13, at which point the ID identification process ends.
[0024] During the process of lighting additional infrared point light sources 13, if an infrared point light source 13 is occluded so that the number of lit infrared point light sources 13 does not match the number of light spots on the image, the ID identification process must be executed again. Likewise, during the positioning process after ID identification has ended, if an infrared point light source 13 is occluded so that the number of light spots on the image is insufficient to satisfy the number of points required by the PnP algorithm, the ID identification process must be executed again.
[0025] The method by which the processing unit 30 determines the ID of the infrared point light source 13 corresponding to a newly added light spot is as follows. In the initial state of the virtual reality helmet 10 there is no correspondence from a previous frame, or the data of the previous frame has been lost and the correspondence must be re-determined; the present invention therefore lights only one infrared point light source 13 initially, so that there is at most one light spot on the image, in which case the correspondence can easily be determined. By lighting one new infrared point light source 13 at a time, the required correspondence can also be determined while multiple infrared point light sources 13 are lit. Specifically, two cases are distinguished. When the virtual reality helmet 10 is stationary, the light spot corresponding to the newly added infrared point light source 13 can be determined by comparing the difference between the current frame and the previous frame, and that light spot is assigned the ID of the newly lit infrared point light source 13. When the virtual reality helmet 10 is moving, because the sampling interval of each frame is sufficiently small (typically about 30 ms), the position difference between each light spot of the previous frame and each light spot of the current frame, apart from the newly added spot, is small; the processing unit 30 therefore uses the known history of the previous frame to apply a slight translation to the light spots of the previous frame image so that they can be matched to the light spots of the current frame image, and determines the ID of each matched light spot on the current frame image from this correspondence and the history of the previous frame. Likewise, the light spot on the current frame image that has no correspondence with the previous frame image is assigned the ID of the newly lit infrared point light source.
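The matching described in [0025] can be sketched as follows: shift the previous frame's spots by a small estimated translation, pair each one with the nearest unused current-frame spot, and treat the single unpaired current spot as the newly lit source. The shift estimate, the distance gate and the function signature below are illustrative assumptions rather than the patented procedure.

    import numpy as np

    def match_spots(prev_spots, prev_ids, curr_spots, new_source_id, shift=(0.0, 0.0), max_dist=10.0):
        """Propagate IDs from the previous frame to the current frame and label the new spot.

        prev_spots: list of (x, y) from the previous frame; prev_ids: their source IDs.
        curr_spots: list of (x, y) detected in the current frame (one more than prev_spots).
        shift: small estimated translation of the spot pattern between frames.
        """
        curr = np.asarray(curr_spots, dtype=float)
        assigned = {}
        used = set()
        for (x, y), source_id in zip(prev_spots, prev_ids):
            # apply the small translation, then take the nearest unused current spot
            d = np.hypot(curr[:, 0] - (x + shift[0]), curr[:, 1] - (y + shift[1]))
            d[list(used)] = np.inf
            j = int(np.argmin(d))
            if d[j] > max_dist:
                return None                  # correspondence lost: restart ID identification
            assigned[tuple(curr[j])] = source_id
            used.add(j)
        # the one spot with no previous-frame counterpart gets the newly lit source's ID
        leftover = [i for i in range(len(curr)) if i not in used]
        if len(leftover) == 1:
            assigned[tuple(curr[leftover[0]])] = new_source_id
        return assigned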
[0026] After ID identification is completed, the processing unit 30 calls the PnP algorithm to obtain the spatial position of the helmet. The PnP algorithm belongs to the prior art and is not described again here.
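Once at least three (in practice typically four or more) spot-to-source correspondences are available, a standard PnP solver can recover the helmet pose. The sketch below uses OpenCV's solvePnP as one possible implementation; the 3-D source coordinates on the helmet and the camera intrinsics are calibration inputs and are assumed here.

    import numpy as np
    import cv2

    def helmet_pose(model_points, image_points, camera_matrix, dist_coeffs=None):
        """Estimate helmet rotation and translation from matched 3-D source positions and 2-D spots.

        model_points: Nx3 array of infrared point light source positions in the helmet frame.
        image_points: Nx2 array of the corresponding spot centroids in the camera image.
        camera_matrix: 3x3 intrinsic matrix of the infrared camera.
        """
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)                  # assume an undistorted camera
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(model_points, dtype=np.float64),
            np.asarray(image_points, dtype=np.float64),
            camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None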
[0027] Compared with the prior art, by lighting the infrared point light sources 13 one by one and matching each light spot on the image captured by the infrared camera 20 to its infrared point light source 13 ID, the present invention provides an accurate and efficient method for determining light spot IDs. When the virtual reality helmet 10 is stationary, the ID of the newly added spot can be determined by comparing the two consecutive frames; when the virtual reality helmet is moving, the newly added spot and its ID are determined by allowing for the displacement between frames, which provides a method for identifying spot IDs in multiple motion states of the virtual reality helmet 10. By monitoring whether the number of infrared point light sources 13 matches the number of light spots on the image, and whether the number of light spots meets the number of points required by the PnP algorithm, the accuracy of the positioning is ensured and deviations are prevented. The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the specific embodiments described, which are merely illustrative and not restrictive; under the teaching of the present invention, those of ordinary skill in the art may make many other forms without departing from the spirit of the invention and the scope of protection of the claims, all of which fall within the protection of the present invention.

Claims

Claims
[Claim 1] A virtual reality spatial positioning feature point identification method, characterized by comprising the following steps:
S1: confirming that all of the infrared point light sources on the virtual reality helmet are off, and if they are not all off, turning off any infrared point light source that is still lit;
S2: lighting one of the infrared point light sources on the virtual reality helmet, a processing unit recording the ID of the infrared point light source corresponding to the light spot on the image captured by an infrared camera; S3: the virtual reality helmet keeping the infrared point light source lit in the previous frame in the lit state and lighting one new infrared point light source, the processing unit determining the ID of the infrared point light source corresponding to the newly added light spot on the image captured by the infrared camera;
S4: repeating S3 until all of the infrared point light sources are lit and the processing unit has determined the IDs of the infrared point light sources corresponding to all of the light spots on the image captured by the infrared camera.
[Claim 2] The virtual reality spatial positioning feature point identification method according to claim 1, characterized in that, when the virtual reality helmet is stationary, the ID of the infrared point light source corresponding to the newly added light spot is determined by comparing the difference between the current frame and the previous frame.
[Claim 3] The virtual reality spatial positioning feature point identification method according to claim 2, characterized in that, when the virtual reality helmet is moving, the processing unit uses the known history of the previous frame to apply a slight translation to the light spots of the previous frame image so that they can be matched to the light spots of the current frame image, and determines the ID of each matched light spot on the current frame image from this correspondence and the history of the previous frame.
[Claim 4] The virtual reality spatial positioning feature point identification method according to claim 3, characterized in that the light spot on the current frame image that has no correspondence with the previous frame image corresponds to the ID of the newly lit infrared point light source.
[Claim 5] The virtual reality spatial positioning feature point identification method according to claim 1, characterized in that, in the process of executing S2 and S4, if the number of lit infrared point light sources does not match the number of light spots on the image, S1 is executed again.
[Claim 6] The virtual reality spatial positioning feature point identification method according to claim 1, characterized in
PCT/CN2017/109795 2016-12-16 2017-11-07 Positioning feature point identification method for use in virtual reality space WO2018107923A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611167337.7A CN106774992A (en) 2016-12-16 2016-12-16 The point recognition methods of virtual reality space location feature
CN201611167337.7 2016-12-16

Publications (1)

Publication Number Publication Date
WO2018107923A1 true WO2018107923A1 (en) 2018-06-21

Family

ID=58891904

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109795 WO2018107923A1 (en) 2016-12-16 2017-11-07 Positioning feature point identification method for use in virtual reality space

Country Status (2)

Country Link
CN (1) CN106774992A (en)
WO (1) WO2018107923A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914716A (en) * 2020-07-24 2020-11-10 深圳市瑞立视多媒体科技有限公司 Active optical rigid body identification method, device, equipment and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106774992A (en) * 2016-12-16 2017-05-31 深圳市虚拟现实技术有限公司 The point recognition methods of virtual reality space location feature
CN107219963A (en) * 2017-07-04 2017-09-29 深圳市虚拟现实科技有限公司 Virtual reality handle pattern space localization method and system
CN107390952A (en) * 2017-07-04 2017-11-24 深圳市虚拟现实科技有限公司 Virtual reality handle characteristic point space-location method
CN107390953A (en) * 2017-07-04 2017-11-24 深圳市虚拟现实科技有限公司 Virtual reality handle space localization method
CN115937725B (en) * 2023-03-13 2023-06-06 江西科骏实业有限公司 Gesture display method, device and equipment of space interaction device and storage medium thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662501A (en) * 2012-03-19 2012-09-12 Tcl集团股份有限公司 Cursor positioning system and method, remotely controlled device and remote controller
CN104637080A (en) * 2013-11-07 2015-05-20 深圳先进技术研究院 Three-dimensional drawing system and three-dimensional drawing method based on human-computer interaction
CN105867611A (en) * 2015-12-29 2016-08-17 乐视致新电子科技(天津)有限公司 Space positioning method, device and system in virtual reality system
US20160252976A1 (en) * 2015-02-26 2016-09-01 Konica Minolta Laboratory U.S.A., Inc. Method and apparatus for interactive user interface with wearable device
CN106200985A (en) * 2016-08-10 2016-12-07 北京天远景润科技有限公司 Desktop type individual immerses virtual reality interactive device
CN106200981A (en) * 2016-07-21 2016-12-07 北京小鸟看看科技有限公司 A kind of virtual reality system and wireless implementation method thereof
CN106774992A (en) * 2016-12-16 2017-05-31 深圳市虚拟现实技术有限公司 The point recognition methods of virtual reality space location feature

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0813040A3 (en) * 1996-06-14 1999-05-26 Xerox Corporation Precision spatial mapping with combined video and infrared signals
CN102540673B (en) * 2012-03-21 2015-06-10 海信集团有限公司 Laser dot position determining system and method
CN103593051B (en) * 2013-11-11 2017-02-15 百度在线网络技术(北京)有限公司 Head-mounted type display equipment
CN105931272B (en) * 2016-05-06 2019-04-05 上海乐相科技有限公司 A kind of Moving Objects method for tracing and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662501A (en) * 2012-03-19 2012-09-12 Tcl集团股份有限公司 Cursor positioning system and method, remotely controlled device and remote controller
CN104637080A (en) * 2013-11-07 2015-05-20 深圳先进技术研究院 Three-dimensional drawing system and three-dimensional drawing method based on human-computer interaction
US20160252976A1 (en) * 2015-02-26 2016-09-01 Konica Minolta Laboratory U.S.A., Inc. Method and apparatus for interactive user interface with wearable device
CN105867611A (en) * 2015-12-29 2016-08-17 乐视致新电子科技(天津)有限公司 Space positioning method, device and system in virtual reality system
CN106200981A (en) * 2016-07-21 2016-12-07 北京小鸟看看科技有限公司 A kind of virtual reality system and wireless implementation method thereof
CN106200985A (en) * 2016-08-10 2016-12-07 北京天远景润科技有限公司 Desktop type individual immerses virtual reality interactive device
CN106774992A (en) * 2016-12-16 2017-05-31 深圳市虚拟现实技术有限公司 The point recognition methods of virtual reality space location feature

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914716A (en) * 2020-07-24 2020-11-10 深圳市瑞立视多媒体科技有限公司 Active optical rigid body identification method, device, equipment and storage medium
CN111914716B (en) * 2020-07-24 2023-10-20 深圳市瑞立视多媒体科技有限公司 Active light rigid body identification method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN106774992A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
WO2018107923A1 (en) Positioning feature point identification method for use in virtual reality space
WO2017092339A1 (en) Method and device for processing collected sensor data
WO2018113433A1 (en) Method for screening and spatially locating virtual reality feature points
US9406170B1 (en) Augmented reality system with activity templates
WO2002054217A1 (en) Handwriting data input device and method, and authenticating device and method
JP2017049762A (en) System and method
CN105323520B (en) Projector apparatus, interactive system and interaction control method
JP2009093612A (en) System and method for labeling feature clusters in frames of image data for optical navigation
US8901855B2 (en) LED light illuminating control system and method
US11712619B2 (en) Handle controller
RU2012130358A (en) LIGHTING TOOL FOR CREATING LIGHT SCENES
JP2008012102A (en) Method and apparatus for outputting sound linked with image
TWI705354B (en) Eye tracking apparatus and light source control method thereof
TWI526879B (en) Interactive system, remote controller and operating method thereof
WO2019033322A1 (en) Handheld controller, and tracking and positioning method and system
CN111783640A (en) Detection method, device, equipment and storage medium
JP2015115649A (en) Device control system and device control method
CN106599930B (en) Virtual reality space positioning feature point screening method
JP4409545B2 (en) Three-dimensional position specifying device and method, depth position specifying device
JP5799232B2 (en) Lighting control device
US10215858B1 (en) Detection of rigid shaped objects
US9329679B1 (en) Projection system with multi-surface projection screen
US8224025B2 (en) Group tracking in motion capture
WO2020008576A1 (en) Determination method, determination program, and information processing device
CN111752386B (en) Space positioning method, system and head-mounted equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17880900

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 04/11/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17880900

Country of ref document: EP

Kind code of ref document: A1