WO2018113433A1 - Method for screening and spatially locating virtual reality feature points - Google Patents

Method for screening and spatially locating virtual reality feature points

Info

Publication number
WO2018113433A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
light
infrared
processing unit
virtual reality
Prior art date
Application number
PCT/CN2017/109794
Other languages
French (fr)
Chinese (zh)
Inventor
李宗乘
Original Assignee
深圳市虚拟现实技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市虚拟现实技术有限公司
Publication of WO2018113433A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes

Definitions

  • the present invention relates to the field of virtual reality, and more particularly to a spatial positioning method based on screening virtual reality feature points.
  • spatial positioning generally uses optical or ultrasonic techniques for positioning and measurement, building a model to derive the spatial position of the object to be measured.
  • a typical virtual reality spatial positioning system determines the spatial position of an object with infrared points and a light-sensing camera: the infrared points sit at the front of a near-eye display device, and during positioning the light-sensing camera captures the positions of the infrared points and derives the user's physical coordinates from them. If the correspondence between at least three light sources and their projections is known, calling the PnP algorithm yields the spatial position of the helmet.
  • the key to this process is determining the light-source ID (Identity, serial number) corresponding to each projection.
  • in current virtual reality spatial positioning, inaccurate image recognition at certain distances and orientations makes determining the projections' corresponding light-source IDs slow and error-prone, which degrades both the accuracy and the efficiency of positioning.
  • the present invention provides a virtual reality feature point screening spatial positioning method that can improve positioning accuracy and efficiency.
  • a virtual reality feature point screening spatial positioning method, which includes the following steps:
  • S1: ensure that all infrared point light sources are on; the processing unit controls the infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source;
  • S2: the processing unit performs ID recognition on each light spot in the imaged picture and finds the IDs corresponding to all the light spots;
  • S3: the processing unit keeps at least 4 of the infrared point light sources corresponding to the identified IDs lit and turns off the rest; the processing unit then controls the infrared camera to capture an image of the virtual reality helmet and positions it using the PnP algorithm;
  • S4: when the number of light spots in the imaged picture does not meet the number required by the PnP algorithm, steps S1 to S3 are executed again.
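The choice of which sources to keep lit in step S3 hinges on the maximum pairwise spot distance d' defined in the preferred embodiments. A minimal Python sketch of that dispatch (the function name and the returned labels are mine, not the patent's):

```python
import math
from itertools import combinations

def screening_branch(spots, d):
    """spots: {source_id: (x, y)} light-spot coordinates on the imaged
    picture; d: length of the picture's long side.
    d' is the maximum pairwise distance between light spots; comparing
    it with d/2 decides which screening rule the processing unit applies."""
    d_prime = max(math.dist(p, q) for p, q in combinations(spots.values(), 2))
    # spread-out spots: keep 4 central sources; bunched spots: keep 4 outer ones
    return "keep-central-4" if d_prime > d / 2 else "keep-outer-4"
```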
  • preferably, the imaged picture is rectangular with long-side length d. The processing unit calculates the pairwise distances between light spots and selects the maximum distance d'. When d' > d/2, the processing unit finds the light spot closest to the center of the imaged picture and keeps lit the infrared point light source whose ID corresponds to that spot, together with the 3 infrared point light sources nearest to it, while turning off the other infrared point light sources.
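A runnable sketch of the d' > d/2 rule, with toy coordinates (the IDs, the picture size, and the use of image-plane distance to rank the 3 "nearest" sources are my assumptions; the patent does not specify whether nearness is measured on the image or on the helmet):

```python
import math

def select_center_sources(spots, d, center):
    """spots: {source_id: (x, y)} light-spot coordinates on the imaged picture;
    d: length of the picture's long side; center: picture-center coordinates.
    d' > d/2 rule: keep the ID of the spot nearest the picture center plus
    the 3 spots nearest to it; every other source is to be turned off."""
    ids = list(spots)
    d_prime = max(math.dist(spots[a], spots[b])
                  for i, a in enumerate(ids) for b in ids[i + 1:])
    if d_prime <= d / 2:
        return None  # spots are bunched; this rule does not apply
    # spot closest to the picture center becomes the center point
    mid = min(ids, key=lambda i: math.dist(spots[i], center))
    # the 3 spots nearest to that center point (image-plane distance)
    rest = sorted((i for i in ids if i != mid),
                  key=lambda i: math.dist(spots[i], spots[mid]))
    return {mid, *rest[:3]}
```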
  • preferably, the imaged picture is rectangular with long-side length d. The processing unit calculates the pairwise distances between light spots and selects the maximum distance d'. When d' < d/2, the processing unit finds, among the infrared point light sources corresponding to the light spots, at least 4 whose relative positions are outermost, keeps them lit, and turns off the other infrared point light sources.
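The complementary d' < d/2 branch can be sketched the same way; ranking "outermost" by distance from the spot cluster's centroid is my reading of the patent's wording, not something it specifies:

```python
import math

def select_outer_sources(spots, d, k=4):
    """spots: {source_id: (x, y)}; d: long-side length of the imaged picture.
    d' < d/2 rule: when the spots are bunched together, keep the k (>= 4)
    sources whose spots lie outermost, ranked here by distance from the
    centroid of the spot cluster, and turn the rest off."""
    ids = list(spots)
    d_prime = max(math.dist(spots[a], spots[b])
                  for i, a in enumerate(ids) for b in ids[i + 1:])
    if d_prime >= d / 2:
        return None  # spots are spread out; the center-based rule applies instead
    cx = sum(x for x, _ in spots.values()) / len(spots)
    cy = sum(y for _, y in spots.values()) / len(spots)
    # keep the k spots farthest from the centroid
    outer = sorted(ids, key=lambda i: math.dist(spots[i], (cx, cy)), reverse=True)
    return set(outer[:k])
```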
  • preferably, the processing unit, using the known history of the previous frame, applies a slight translation to the light spots of the previous frame's image so that they correspond to the light spots of the current frame's image, and from this correspondence and the previous frame's history determines the ID of each matched light spot in the current frame.
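One simple realisation of this frame-to-frame correspondence is nearest-neighbour matching under a small-motion assumption; the tolerance value below is hypothetical, not from the patent:

```python
import math

def propagate_ids(prev_spots, curr_points, max_shift=15.0):
    """prev_spots: {source_id: (x, y)} from the previous frame's history;
    curr_points: [(x, y)] light spots detected in the current frame.
    Because frames are sampled only ~30 ms apart, a spot moves little
    between frames, so matching each current point to the nearest unused
    previous spot (within a tolerance) recovers its ID without re-running
    the slow ID-recognition step. max_shift is an assumed pixel tolerance."""
    assigned, used = {}, set()
    for pt in curr_points:
        # nearest previous spot that has not been matched yet
        cand = min((i for i in prev_spots if i not in used),
                   key=lambda i: math.dist(prev_spots[i], pt), default=None)
        if cand is not None and math.dist(prev_spots[cand], pt) <= max_shift:
            assigned[pt] = cand
            used.add(cand)
    return assigned
```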
  • compared with the prior art, the present invention improves positioning efficiency by turning off the infrared point light sources that would complicate the calculation, and it provides a concrete screening method: the relative positions of the infrared point light sources on the imaged picture determine which sources to turn off. Comparing the maximum inter-spot distance with the long-side length of the imaged picture to decide which infrared point light sources to turn off is simple and highly practicable. When the maximum inter-spot distance exceeds half the long-side length, lighting the infrared point light sources corresponding to the 4 spots nearest the center both suits the PnP calculation and ensures that the spots used for positioning do not quickly move out of the imaged picture, preventing repeated, time-consuming ID recognition. When the maximum inter-spot distance is less than half the long-side length, lighting the infrared point light sources corresponding to at least 4 outermost spots both suits the PnP calculation and keeps the spots far enough apart that pixel-level effects do not introduce large errors. By applying a slight translation to the light spots so that the current spots correspond to those of the previous frame's image, repeated ID recognition is avoided and a great deal of time is saved.
  • FIG. 1 is a schematic diagram showing the principle of a virtual reality feature point screening spatial positioning method according to the present invention
  • FIG. 2 is a schematic diagram of an infrared point source distribution of a virtual reality feature point screening spatial positioning method according to the present invention
  • FIG. 3 shows the first of two images taken by the infrared camera
  • FIG. 4 shows the first of two imaged pictures presented after some infrared point light sources are turned off
  • FIG. 5 shows the second image taken by the infrared camera
  • FIG. 6 shows the second imaged picture presented after some infrared point light sources are turned off.
  • the present invention provides a virtual reality feature point screening spatial positioning method that can improve positioning accuracy and efficiency.
  • the virtual reality feature point screening spatial positioning method involves a virtual reality helmet 10, an infrared camera 20, and a processing unit 30; the infrared camera 20 is electrically connected to the processing unit 30.
  • the virtual reality helmet 10 includes a front panel 11, and a plurality of infrared point light sources 13 are distributed on the front panel 11 of the virtual reality helmet 10 and the four side panels of the upper, lower, left, and right sides.
  • the number of infrared point light sources 13 must be at least the minimum number with which the PnP algorithm can operate.
  • the shape of the infrared point light source 13 is not particularly limited.
  • for illustration, we take the number of infrared point light sources 13 on the front panel 11 to be seven; the seven infrared point light sources form an approximate "w" shape.
  • a plurality of infrared point sources 13 can be illuminated or turned off as needed by the firmware interface of the virtual reality helmet 10.
  • the infrared point light sources 13 on the virtual reality helmet 10 form light spots on the image captured by the infrared camera 20. Owing to the band-pass characteristic of the infrared camera, only the infrared point light sources 13 project spots onto the image, while everything else forms a uniform background.
  • referring to FIG. 3 and FIG. 4, FIG. 3 shows the imaged picture 41 of the infrared point light sources 13 captured by the infrared camera 20; the picture 41 is rectangular, and the length of its longer side is d.
  • with all infrared point light sources on, the processing unit 30 controls the infrared camera 20 to capture an image of the virtual reality helmet 10; there are seven light spots on the imaged picture 41.
  • the processing unit 30 calculates the coordinates of each light spot from its position on the imaged picture 41, measures the pairwise distances between light spots, and selects the maximum distance d'. When d' > d/2, the light spots occupy a large portion of the picture 41. Because performing ID recognition on every spot in turn and running the PnP algorithm take considerable time, and only a subset of the points is needed to satisfy the PnP algorithm, the processing unit 30 first performs ID recognition on each light spot in the imaged picture 41 to find the IDs of all the spots, then takes the spot closest to the center of the picture 41 as the center point and keeps lit the infrared point light source 13 corresponding to that spot's ID together with the 3 infrared point light sources 13 nearest to it, turning off the other infrared point light sources 13.
  • in the next frame only 4 light spots remain on the imaged picture 41, and the processing unit 30 can track each spot and label it with its ID. The method is as follows: in spatial positioning, because the sampling interval of each frame is small enough, typically 30 ms, the position of each light spot usually differs little between the previous frame and the current frame. The processing unit 30 therefore uses the known history of the previous frame to apply a slight translation to the previous frame's light spots so that they correspond to the current frame's light spots; from this correspondence and the previous frame's history, the ID of each matched spot in the current frame can be determined.
  • with the IDs of all light spots known, the processing unit 30 directly calls the PnP algorithm to obtain the spatial position of the virtual reality helmet 10. When movement of the virtual reality helmet 10 leaves fewer spots in the imaged picture 41 than the PnP algorithm requires, the above method is executed again to select new infrared point light sources 13 to be lit.
  • referring to FIG. 5 and FIG. 6, in FIG. 5 there are seven light spots on the imaged picture 41.
  • the processing unit 30 calculates the coordinates of each light spot from its position on the imaged picture 41, measures the pairwise distances between light spots, and selects the maximum distance d'. When d' < d/2, the light spots occupy a small portion of the picture 41. Because performing ID recognition on every spot in turn and running the PnP algorithm take considerable time, and only a subset of the points is needed to satisfy the PnP algorithm, the processing unit 30 first performs ID recognition on each light spot in the image to find the IDs of all the spots, then finds at least 4 of the corresponding infrared point light sources 13 whose relative positions are outermost, keeps these infrared point light sources 13 lit, and turns off the other infrared point light sources 13. This ensures that the light spots on the imaged picture 41 are not too dense, which would otherwise impair measurement accuracy.
  • the processing unit 30 directly calls the PnP algorithm to obtain the spatial position of the virtual reality helmet 10. When movement of the virtual reality helmet 10 leaves fewer spots in the imaged picture 41 than the PnP algorithm requires, the above method is executed again to select new infrared point light sources 13 to be lit.
  • after ID recognition is complete, the processing unit 30 calls the PnP algorithm to obtain the spatial position of the helmet; the PnP algorithm belongs to the prior art and is not described further in the present invention.
  • compared with the prior art, the present invention improves positioning efficiency by turning off the infrared point light sources 13 that would complicate the calculation, and it provides a concrete screening method: the relative positions of the infrared point light sources 13 on the imaged picture 41 determine which sources to turn off. Comparing the maximum inter-spot distance with the long-side length of the imaged picture 41 to decide which infrared point light sources 13 to turn off is simple and highly practicable. When the maximum inter-spot distance exceeds half the long-side length of the imaged picture 41, lighting the infrared point light sources 13 corresponding to the 4 spots nearest the center both suits the PnP calculation and ensures that the spots used for positioning do not quickly move out of the imaged picture 41, preventing repeated, time-consuming ID recognition. When the maximum inter-spot distance is less than half the long-side length of the imaged picture 41, lighting the infrared point light sources 13 corresponding to at least 4 outermost spots both suits the PnP calculation and keeps the spots far enough apart that pixel-level effects do not introduce large errors. By applying a slight translation to the light spots so that the current spots correspond to those of the previous frame's image, repeated ID recognition is avoided and a great deal of time is saved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

A method for screening and spatially locating virtual reality feature points, comprising the following steps. S1: after ensuring that all infrared point light sources are in an on state, a processing unit controls an infrared camera to capture an image of a virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source; S2: the processing unit carries out ID identification on each light spot in the imaged picture and finds the IDs corresponding to all of the light spots; S3: the processing unit keeps at least four of the infrared point light sources corresponding to the IDs lit and turns off the other infrared point light sources, then controls the infrared camera to capture an image of the virtual reality helmet and uses a perspective-n-point (PnP) algorithm to locate it; and S4: when the number of light spots in the imaged picture does not meet the number required by the PnP algorithm, steps S1 to S3 are carried out again. Compared with existing technology, the method turns off infrared point light sources that complicate the calculation, thereby increasing locating efficiency.

Description

Method for Screening and Spatially Locating Virtual Reality Feature Points

Technical Field
[0001] The present invention relates to the field of virtual reality, and more particularly to a spatial positioning method based on screening virtual reality feature points.
Background Art
[0002] Spatial positioning generally uses optical or ultrasonic techniques for positioning and measurement, building a model to derive the spatial position of the object to be measured. A typical virtual reality spatial positioning system determines the spatial position of an object with infrared points and a light-sensing camera: the infrared points sit at the front of a near-eye display device, and during positioning the light-sensing camera captures the positions of the infrared points and derives the user's physical coordinates from them. If the correspondence between at least three light sources and their projections is known, calling the PnP algorithm yields the spatial position of the helmet. The key to this process is determining the light-source ID (Identity, serial number) corresponding to each projection. In current virtual reality spatial positioning, inaccurate image recognition at certain distances and orientations makes determining the projections' corresponding light-source IDs slow and error-prone, which degrades both the accuracy and the efficiency of positioning.
Technical Problem
[0003] To remedy the limited accuracy and efficiency of current virtual reality spatial positioning, the present invention provides a virtual reality feature point screening spatial positioning method that improves both.
Solution to Problem
Technical Solution
[0004] The technical solution adopted by the present invention to solve its technical problem is to provide a virtual reality feature point screening spatial positioning method comprising the following steps:
[0005] S1: ensure that all infrared point light sources are on; the processing unit controls the infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source;
[0006] S2: the processing unit performs ID recognition on each light spot in the imaged picture and finds the IDs corresponding to all of the light spots;
[0007] S3: the processing unit keeps at least 4 of the infrared point light sources corresponding to the identified IDs lit and turns off the rest; the processing unit then controls the infrared camera to capture an image of the virtual reality helmet and positions it using the PnP algorithm;
[0008] S4: when the number of light spots in the imaged picture does not meet the number required by the PnP algorithm, steps S1 to S3 are executed again.
[0009] Preferably, the imaged picture is rectangular with long-side length d. The processing unit calculates the pairwise distances between light spots and selects the maximum distance d'. When d' > d/2, the processing unit finds the light spot closest to the center of the imaged picture and keeps lit the infrared point light source whose ID corresponds to that spot, together with the 3 infrared point light sources nearest to it, while turning off the other infrared point light sources.
[0010] Preferably, the imaged picture is rectangular with long-side length d. The processing unit calculates the pairwise distances between light spots and selects the maximum distance d'. When d' < d/2, the processing unit finds, among the infrared point light sources corresponding to the light spots, at least 4 whose relative positions are outermost, keeps them lit, and turns off the other infrared point light sources.
[0011] Preferably, the processing unit, using the known history of the previous frame, applies a slight translation to the light spots of the previous frame's image so that they correspond to the light spots of the current frame's image, and from this correspondence and the previous frame's history determines the ID of each matched light spot in the current frame.
Advantageous Effects of Invention
[0012] Compared with the prior art, the present invention improves positioning efficiency by turning off the infrared point light sources that would complicate the calculation, and it provides a concrete screening method: the relative positions of the infrared point light sources on the imaged picture determine which sources to turn off. Comparing the maximum inter-spot distance with the long-side length of the imaged picture to decide which infrared point light sources to turn off is simple and highly practicable. When the maximum inter-spot distance exceeds half the long-side length, lighting the infrared point light sources corresponding to the 4 spots nearest the center both suits the PnP calculation and ensures that the spots used for positioning do not quickly move out of the imaged picture, preventing repeated, time-consuming ID recognition. When the maximum inter-spot distance is less than half the long-side length, lighting the infrared point light sources corresponding to at least 4 outermost spots both suits the PnP calculation and keeps the spots far enough apart that pixel-level effects do not introduce large errors. By applying a slight translation to the light spots so that the current spots correspond to those of the previous frame's image, repeated ID recognition is avoided and a great deal of time is saved.
Brief Description of Drawings
[0013] The present invention is further described below with reference to the accompanying drawings and embodiments, in which:
[0014] FIG. 1 is a schematic diagram of the principle of the virtual reality feature point screening spatial positioning method of the present invention;
[0015] FIG. 2 is a schematic diagram of the infrared point light source distribution of the virtual reality feature point screening spatial positioning method of the present invention;
[0016] FIG. 3 shows the first of two images taken by the infrared camera;
[0017] FIG. 4 shows the first of two imaged pictures presented after some infrared point light sources are turned off;
[0018] FIG. 5 shows the second image taken by the infrared camera;
[0019] FIG. 6 shows the second imaged picture presented after some infrared point light sources are turned off.
Best Mode for Carrying Out the Invention
[0020] To remedy the limited accuracy and efficiency of current virtual reality spatial positioning, the present invention provides a virtual reality feature point screening spatial positioning method that improves both.
[0021] For a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments are now described in detail with reference to the accompanying drawings.
[0022] Referring to FIG. 1 and FIG. 2, the virtual reality feature point screening spatial positioning method of the present invention involves a virtual reality helmet 10, an infrared camera 20, and a processing unit 30; the infrared camera 20 is electrically connected to the processing unit 30. The virtual reality helmet 10 includes a front panel 11, and a plurality of infrared point light sources 13 are distributed over the front panel 11 and the upper, lower, left, and right side panels of the helmet. The number of infrared point light sources 13 must be at least the minimum number with which the PnP algorithm can operate. The shape of the infrared point light sources 13 is not particularly limited. For illustration, we take the number of infrared point light sources 13 on the front panel 11 to be seven; the seven sources form an approximate "w" shape. The infrared point light sources 13 can be lit or turned off as needed through the firmware interface of the virtual reality helmet 10. The infrared point light sources 13 on the virtual reality helmet 10 form light spots on the image captured by the infrared camera 20: owing to the band-pass characteristic of the infrared camera, only the infrared point light sources 13 project spots onto the image, while everything else forms a uniform background.

[0023] Referring to FIG. 3 and FIG. 4, FIG. 3 shows the imaged picture 41 of the infrared point light sources 13 captured by the infrared camera 20; the picture 41 is rectangular, and the length of its longer side is d. With all infrared point light sources on, the processing unit 30 controls the infrared camera 20 to capture an image of the virtual reality helmet 10; there are seven light spots on the imaged picture 41. The processing unit 30 calculates the coordinates of each light spot from its position on the imaged picture 41, measures the pairwise distances between light spots, and selects the maximum distance d'. When d' > d/2, the light spots occupy a large portion of the picture 41. Because performing ID (Identity, serial number) recognition on every spot in turn and running the PnP algorithm take considerable time, and only a subset of the points is needed to satisfy the PnP algorithm, the processing unit 30 first performs ID recognition on each light spot in the imaged picture 41 to find the IDs of all the spots, then takes the spot closest to the center of the picture 41 as the center point and keeps lit the infrared point light source 13 corresponding to that spot's ID together with the 3 infrared point light sources 13 nearest to it, turning off the other infrared point light sources 13. In the next frame only 4 light spots remain on the imaged picture 41, and the processing unit 30 can track each spot and label it with its ID. The method is as follows: in spatial positioning, because the sampling interval of each frame is small enough, typically 30 ms, the position of each light spot usually differs little between the previous frame and the current frame. The processing unit 30 therefore uses the known history of the previous frame to apply a slight translation to the previous frame's light spots so that they correspond to the current frame's light spots; from this correspondence and the previous frame's history, the ID of each matched spot in the current frame can be determined. With the IDs of all light spots known, the processing unit 30 directly calls the PnP algorithm to obtain the spatial position of the virtual reality helmet 10. When movement of the virtual reality helmet 10 leaves fewer spots in the imaged picture 41 than the PnP algorithm requires, the above method is executed again to select new infrared point light sources 13 to be lit.
[0024] Please refer to FIG. 5 and FIG. 6. In FIG. 5 there are seven light spots on the imaged picture 41. The processing unit 30 calculates the coordinates of each light spot from its position on the imaged picture 41, measures the distance between every pair of light spots, and selects the maximum distance d' among them. When d' < d/2, the light spots occupy only a small portion of the imaged picture 41. Because performing ID recognition and running the PnP algorithm for every light spot in turn would take a great deal of time, and a subset of the spots is sufficient for the PnP algorithm, the processing unit 30 first performs ID recognition on every light spot in the image to find the IDs corresponding to all the light spots, and then finds, among the infrared point light sources 13 corresponding to those IDs, at least 4 infrared point light sources 13 whose relative positions are toward the outside; it keeps these infrared point light sources 13 lit and turns off the other infrared point light sources 13. This ensures that the light spots on the imaged picture 41 are not too dense, which would otherwise reduce the accuracy of the measurement. The processing unit 30 then directly invokes the PnP algorithm to obtain the spatial position of the virtual reality helmet 10. When movement of the virtual reality helmet 10 causes the number of light spots in the imaged picture 41 to fall below the number required by the PnP algorithm, the above method is re-executed to select new infrared point light sources 13 to be lit.
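The pairwise-distance measurement and the selection of outer spots used when d' < d/2 can be sketched as follows. Interpreting "toward the outside" as "farthest from the spot centroid" is an assumption; k = 4 follows the at-least-4 requirement above.

```python
import itertools
import math

def max_pairwise_distance(spots):
    """Largest distance d' between any two light spots (x, y)."""
    return max(math.dist(a, b) for a, b in itertools.combinations(spots, 2))

def outermost_spots(spots, k=4):
    """Pick the k spots farthest from the spot centroid.

    These are the 'outer' spots whose infrared point sources stay lit
    when d' < d/2, keeping the kept spots spread out so pixel-level
    effects do not dominate the measurement.
    """
    cx = sum(x for x, _ in spots) / len(spots)
    cy = sum(y for _, y in spots) / len(spots)
    return sorted(spots, key=lambda p: -math.dist(p, (cx, cy)))[:k]
```

For the seven-spot example of FIG. 5, `max_pairwise_distance` yields d', and when d' < d/2 the four results of `outermost_spots` identify which LEDs remain on.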
[0025] After the ID recognition is completed, the processing unit 30 invokes the PnP algorithm to obtain the spatial position of the helmet. The PnP algorithm belongs to the prior art and is not described further in the present invention.
[0026] Compared with the prior art, the present invention increases the efficiency of positioning by turning off the infrared point light sources 13 that would complicate the calculation, and provides a screening method that uses the relative positions of the infrared point light sources 13 on the imaged picture 41 to select which infrared point light sources 13 to turn off. Comparing the maximum distance between light spots with the length of the long side of the imaged picture 41 to decide which infrared point light sources 13 to turn off is simple, easy to carry out, and highly operable. When the maximum distance between light spots is greater than half the length of the long side of the imaged picture 41, the infrared point light sources 13 corresponding to 4 light spots near the middle are lit; this allows the PnP algorithm to be applied effectively, while also ensuring that the light spots used for positioning do not quickly move out of the imaged picture 41, preventing repeated ID recognition from consuming a large amount of time. When the maximum distance between light spots is less than half the length of the long side of the imaged picture 41, the infrared point light sources 13 corresponding to at least 4 of the outer light spots are lit; this likewise allows the PnP algorithm to be applied effectively, while ensuring that the distances between the light spots are large enough that pixel-level effects do not introduce large errors. By applying a slight translation to the light spots so that the current light spots correspond to the light spots of the previous frame image, repeated ID recognition is avoided, saving a large amount of time.
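The two-branch selection rule summarized above can be combined into one hypothetical routine. This is a sketch under stated assumptions, not the patented method: the image is given as (width, height) with long side d, "near the middle" is interpreted as the spot closest to the image center plus its nearest neighbours, and "outer" as farthest from the spot centroid.

```python
import itertools
import math

def select_spots_to_keep(spots, image_size, k=4):
    """Decide which k light spots' infrared sources stay lit (d' vs d/2 rule).

    spots: list of (x, y) spot coordinates.
    image_size: (width, height); the long side is d = max(width, height).
    If the maximum pairwise spot distance d' > d/2, keep the spot closest
    to the image center together with its k-1 nearest neighbours, so the
    kept spots are unlikely to drift out of frame. Otherwise keep the k
    spots farthest from the spot centroid, so the kept spots are not too
    dense for an accurate measurement.
    """
    d = max(image_size)
    d_prime = max(math.dist(a, b) for a, b in itertools.combinations(spots, 2))
    if d_prime > d / 2:
        center = (image_size[0] / 2, image_size[1] / 2)
        anchor = min(spots, key=lambda p: math.dist(p, center))
        return sorted(spots, key=lambda p: math.dist(p, anchor))[:k]
    cx = sum(x for x, _ in spots) / len(spots)
    cy = sum(y for _, y in spots) / len(spots)
    return sorted(spots, key=lambda p: -math.dist(p, (cx, cy)))[:k]
```

The IDs of the returned spots would then determine which infrared point light sources 13 remain on before the PnP computation.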
[0027] The embodiments of the present invention have been described above with reference to the drawings, but the present invention is not limited to the specific embodiments described above; the specific embodiments described above are merely illustrative and not restrictive. Under the inspiration of the present invention, a person of ordinary skill in the art may devise many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims

[Claim 1] A virtual reality feature point screening spatial positioning method, characterized by comprising the following steps:

S1: ensure that all the infrared point light sources are on; the processing unit controls the infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot imaged by each of the infrared point light sources;

S2: the processing unit performs ID recognition on each light spot in the imaged picture to find the IDs corresponding to all the light spots;

S3: the processing unit keeps at least 4 of the infrared point light sources of the corresponding IDs lit and turns off the remaining infrared point light sources; the processing unit controls the infrared camera to capture an image of the virtual reality helmet and uses the PnP algorithm to compute its position;

S4: when the number of light spots on the imaged picture does not satisfy the number required by the PnP algorithm, re-execute S1 to S3.
[Claim 2] The virtual reality feature point screening spatial positioning method according to claim 1, characterized in that the imaged picture is rectangular and the length of the long side of the rectangle is d; the processing unit calculates the distance between every pair of light spots and selects the maximum distance d' among them; when d' > d/2, the processing unit finds the light spot closest to the center of the imaged picture, keeps lit the infrared point light source of the ID corresponding to that light spot together with the 3 infrared point light sources closest to that infrared point light source, and at the same time turns off the other infrared point light sources.
[Claim 3] The virtual reality feature point screening spatial positioning method according to claim 1, characterized in that the imaged picture is rectangular and the length of the long side of the rectangle is d; the processing unit calculates the distance between every pair of light spots and selects the maximum distance d' among them; when d' < d/2, the processing unit finds, among the infrared point light sources corresponding to the light spots, at least 4 infrared point light sources whose relative positions are toward the outside, keeps them lit, and turns off the other infrared point light sources.
[Claim 4] The virtual reality feature point screening spatial positioning method according to any one of claims 1 to 3, characterized in that the processing unit, combining the known history information of the previous frame, applies a slight translation to the light spots of the previous frame image so that the light spots of the previous frame image correspond to the light spots of the current frame image, and determines, according to this correspondence and the history information of the previous frame, the corresponding ID of each light spot on the current frame image that has a correspondence.
PCT/CN2017/109794 2016-12-22 2017-11-07 Method for screening and spatially locating virtual reality feature points WO2018113433A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611199871.6A CN106599929B (en) 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method
CN201611199871.6 2016-12-22

Publications (1)

Publication Number Publication Date
WO2018113433A1 true WO2018113433A1 (en) 2018-06-28


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739803A (en) * 2021-08-30 2021-12-03 中国电子科技集团公司第五十四研究所 Indoor and underground space positioning method based on infrared datum point


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method
CN106599930A (en) * 2016-12-22 2017-04-26 深圳市虚拟现实技术有限公司 Virtual reality space locating feature point selection method
CN106599929A (en) * 2016-12-22 2017-04-26 深圳市虚拟现实技术有限公司 Virtual reality feature point screening spatial positioning method




