CN106599930B - Virtual reality space positioning feature point screening method - Google Patents

Virtual reality space positioning feature point screening method

Info

Publication number
CN106599930B
CN106599930B (application CN201611200542.9A)
Authority
CN
China
Prior art keywords
infrared
virtual reality
spot
processing unit
point
Prior art date
Legal status (the legal status is an assumption, not a legal conclusion; no legal analysis has been performed)
Active
Application number
CN201611200542.9A
Other languages
Chinese (zh)
Other versions
CN106599930A (en)
Inventor
李宗乘
Current Assignee (the listed assignees may be inaccurate; no legal analysis has been performed)
Shenzhen 3glasses Vr Technology Co ltd
Original Assignee
Shenzhen 3glasses Vr Technology Co ltd
Priority date (the priority date is an assumption, not a legal conclusion; no legal analysis has been performed)
Filing date
Publication date
Application filed by Shenzhen 3glasses Vr Technology Co ltd
Priority to CN201611200542.9A
Publication of CN106599930A
Application granted
Publication of CN106599930B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention provides a virtual reality space positioning feature point screening method comprising the following steps: with all infrared point light sources in the on state, a processing unit controls an infrared camera to photograph a virtual reality helmet and calculates the image coordinates of the light spot formed by each infrared point light source; the processing unit identifies the ID of each light spot in the image, establishing the ID corresponding to every spot; the processing unit calculates six-degree-of-freedom information for the virtual reality helmet and, from the helmet's orientation information, selects at least 4 infrared point light sources directly facing the infrared camera; the processing unit keeps those at least 4 infrared point light sources lit, switches the other infrared point light sources off, then controls the infrared camera to photograph the virtual reality helmet again and locates it by running a PnP algorithm on the image.

Description

Virtual reality space positioning feature point screening method
Technical Field
The invention relates to the field of virtual reality, in particular to a method for screening positioning feature points in a virtual reality space.
Background
Spatial positioning is generally performed by optical or ultrasonic measurement, deriving the spatial position of the object under measurement from an established model. A typical virtual reality space positioning system determines the spatial position of an object with infrared points and a light-sensing camera: the infrared points are arranged on the front of a near-eye display device, and during positioning the light-sensing camera captures the positions of the infrared points, from which the user's physical coordinates are calculated. If the correspondence between at least three light sources and their projections is known, the spatial position of the helmet can be obtained by calling a PnP (Perspective-n-Point) algorithm. The key to this process is determining the corresponding light-source ID (Identity). Current virtual reality space positioning suffers from inaccurate image recognition at certain distances and orientations, so determining the light-source ID corresponding to each projection takes too long and is error-prone, which degrades positioning accuracy and efficiency.
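The pose-recovery step at the heart of this process can be illustrated with OpenCV's solvePnP. The sketch below is not the patent's implementation; the point coordinates and camera intrinsics are illustrative placeholders.

```python
import numpy as np
import cv2

# 3D positions of four infrared point light sources in the helmet's own
# frame (coplanar here, like LEDs on a front panel), and their matched
# 2D spot centroids in the camera image. All values are placeholders.
object_points = np.array([[0.00, 0.00, 0.0],
                          [0.06, 0.00, 0.0],
                          [0.00, 0.04, 0.0],
                          [0.06, 0.04, 0.0]], dtype=np.float64)
image_points = np.array([[310.0, 242.0],
                         [388.0, 240.0],
                         [312.0, 296.0],
                         [390.0, 298.0]], dtype=np.float64)

# Assumed pinhole intrinsics: 800 px focal length, principal point (320, 240).
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# rvec/tvec encode the helmet's six-degree-of-freedom pose
# (rotation vector and translation) relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
```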
Disclosure of Invention
To overcome the low accuracy and efficiency of current virtual reality space positioning, the invention provides a virtual reality space positioning feature point screening method that improves both positioning accuracy and efficiency.
The technical solution adopted by the invention to solve this technical problem is as follows: a virtual reality space positioning feature point screening method comprising the following steps:
s1: with all infrared point light sources in the on state, a processing unit controls an infrared camera to photograph the virtual reality helmet and calculates the image coordinates of the light spot formed by each infrared point light source;
s2: the processing unit identifies the ID of each light spot in the image, establishing the ID corresponding to every spot;
s3: the processing unit calculates six-degree-of-freedom information of the virtual reality helmet and, from the helmet's orientation information, selects at least 4 infrared point light sources directly facing the infrared camera;
s4: the processing unit keeps the at least 4 infrared point light sources directly facing the infrared camera lit, switches the other infrared point light sources off, then controls the infrared camera to photograph the virtual reality helmet again and locates it by running a PnP algorithm on the image.
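As a reading aid, steps s1 to s4 can be summarized in the sketch below. Every helper name (set_leds, capture_frame, detect_spots, identify_ids, solve_pose, facing_sources) is hypothetical, since the patent defines no programming interface.

```python
def screen_feature_points(camera, helmet, all_ids):
    # s1: every infrared point light source on, photograph the helmet
    set_leds(helmet, on=all_ids)                 # hypothetical helper
    spots = detect_spots(capture_frame(camera))  # spot image coordinates
    # s2: match each light spot in the image to a light-source ID
    ids = identify_ids(spots)
    # s3: six-degree-of-freedom pose via PnP, then pick the sources
    # facing the camera from the helmet's orientation
    pose = solve_pose(spots, ids)
    keep = facing_sources(pose, count=4)
    # s4: light only those sources, switch off the rest, locate by PnP again
    set_leds(helmet, on=keep)
    spots = detect_spots(capture_frame(camera))
    return solve_pose(spots, keep)
```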
Preferably, the processing unit finds the light spot closest to the center of the image as a center point, keeps the infrared point light source corresponding to that spot's ID and the 3 infrared point light sources closest to it in the lit state, and simultaneously turns off the other infrared point light sources.
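A minimal sketch of this selection rule, assuming spot_xy holds the spots' pixel coordinates, spot_ids maps each spot to its source ID, and source_xyz holds the sources' 3D positions on the helmet indexed by ID (this argument layout is an assumption, not the patent's interface):

```python
import numpy as np

def select_four_sources(spot_xy, spot_ids, source_xyz, frame_w, frame_h):
    # The spot nearest the image center becomes the center point.
    d_img = np.hypot(spot_xy[:, 0] - frame_w / 2.0,
                     spot_xy[:, 1] - frame_h / 2.0)
    center_id = spot_ids[int(np.argmin(d_img))]
    # Keep that source plus the 3 sources nearest to it on the helmet.
    d_src = np.linalg.norm(source_xyz - source_xyz[center_id], axis=1)
    return np.argsort(d_src)[:4]  # includes center_id itself (distance 0)
```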
Preferably, the processing unit controls the switching on and off of the infrared point light sources to ensure that 4 light spots are present in the image.
Preferably, when the leftmost light spot in the image disappears, the processing unit lights the unlit infrared point light source closest to the infrared point light source corresponding to the rightmost spot.
Preferably, when the rightmost light spot in the image disappears, the processing unit lights the unlit infrared point light source closest to the infrared point light source corresponding to the leftmost spot.
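Together these two rules slide a window of lit sources across the helmet. A sketch under the assumption that source_order lists the source IDs from left to right as seen by the camera:

```python
def pick_replacement(lit_ids, lost_side, source_order):
    positions = [source_order.index(i) for i in lit_ids]
    if lost_side == "left":
        # Leftmost spot gone: scan rightward from the rightmost lit source.
        candidates = range(max(positions) + 1, len(source_order))
    else:
        # Rightmost spot gone: scan leftward from the leftmost lit source.
        candidates = range(min(positions) - 1, -1, -1)
    for p in candidates:
        if source_order[p] not in lit_ids:
            return source_order[p]  # nearest unlit source to light next
    return None  # no unlit source remains on that side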
Preferably, the light spot corresponding to a newly lit infrared point light source is determined by comparing the image difference between the current frame and the previous frame of the imaging picture; the ID of that spot is the ID of the newly lit infrared point light source.
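A sketch of this frame-difference rule: the one current-frame spot with no nearby previous-frame counterpart is assigned the ID of the source that was just lit (the pixel tolerance tol is an assumption):

```python
import numpy as np

def id_of_new_spot(prev_xy, curr_xy, new_source_id, tol=3.0):
    for j, spot in enumerate(curr_xy):
        # A spot farther than tol pixels from every previous spot is new.
        if np.min(np.linalg.norm(prev_xy - spot, axis=1)) > tol:
            return {j: new_source_id}
    return {}
```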
Preferably, the processing unit applies a small translation to the light spots of the previous frame image, using the known history of the previous frame, to establish a correspondence between the light spots of the previous frame image and those of the current frame image, and determines the ID of each matched light spot in the current frame image from this correspondence and the previous frame's history.
Compared with the prior art, the method improves positioning efficiency by switching off the infrared point light sources that would complicate the calculation, and provides a screening method that uses the relative positions of the light spots in the image to select which infrared point light sources to switch off. Lighting only the infrared point light sources directly facing the infrared camera simplifies ID identification and also prevents light spots from quickly moving out of the image and degrading the efficiency of spatial positioning. Judging the orientation of the virtual reality helmet from the computed pose allows the infrared point light sources facing the infrared camera to be found quickly. Taking the infrared point light source nearest the image center as the center point and finding the three infrared point light sources nearest to it quickly yields four infrared point light sources facing the infrared camera. When the number of light spots in the image falls, the processing unit lights the corresponding infrared point light sources to keep the spot count stable, which simplifies positioning and effectively prevents positioning failure caused by having fewer spots than the PnP algorithm requires. Applying a small translation to the spots ensures that each spot remains matched to the ID of its infrared point light source as the virtual reality helmet moves.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic diagram of the principle of the virtual reality space positioning feature point screening method of the present invention;
FIG. 2 is a schematic diagram of the distribution of infrared point light sources in the virtual reality space positioning feature point screening method of the present invention;
FIG. 3 is a first image captured by the infrared camera;
FIG. 4 is the image presented after some infrared point light sources have been turned off;
FIG. 5 is a second image captured by the infrared camera;
FIG. 6 is a third image captured by the infrared camera;
FIG. 7 is a fourth image captured by the infrared camera;
fig. 8 is a fifth image captured by the infrared camera.
Detailed Description
To overcome the low accuracy and efficiency of current virtual reality space positioning, the invention provides a virtual reality space positioning feature point screening method that improves both positioning accuracy and efficiency.
For a clearer understanding of the technical features, objects and effects of the present invention, embodiments of the present invention are now described in detail with reference to the accompanying drawings.
Please refer to figs. 1-2. The virtual reality space positioning feature point screening method involves a virtual reality helmet 10, an infrared camera 20 and a processing unit 30, the infrared camera 20 being electrically connected to the processing unit 30. The virtual reality helmet 10 comprises a front panel 11, and a plurality of infrared point light sources 13 are distributed over the front panel 11 and the four side panels (upper, lower, left and right) of the virtual reality helmet 10. The number of infrared point light sources 13 is at least the minimum the PnP algorithm requires. The shape of the infrared point light sources 13 is not particularly limited. For illustration, the front panel 11 carries 7 infrared point light sources arranged in an approximate "W" shape. The infrared point light sources 13 can be turned on or off as needed through the firmware interface of the virtual reality helmet 10. Each infrared point light source 13 on the virtual reality helmet 10 forms a light spot in the image captured by the infrared camera 20; owing to the camera's band-pass characteristic, only the infrared point light sources 13 project light spots onto the image, while everything else forms a uniform background.
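Because the band-pass camera renders the sources as bright blobs on a uniform dark background, the spot coordinates can be recovered with simple thresholding and connected-component analysis. A sketch assuming an 8-bit grayscale frame; the threshold and minimum blob area are illustrative values, not taken from the patent:

```python
import cv2

def spot_centroids(gray_frame, thresh=200, min_area=4):
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep blobs above the minimum area.
    keep = [i for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return centroids[keep]  # one (x, y) centroid per light spot
```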
Referring to figs. 3-4, fig. 3 shows an image 41 of the infrared point light sources 13 captured by the infrared camera 20. With all infrared point light sources 13 in the on state, the processing unit 30 controls the infrared camera 20 to photograph the virtual reality helmet 10, producing seven light spots in the image 41. The processing unit 30 first calculates the coordinates of each light spot from its position in the image 41, then identifies the ID of each light spot in the image 41, establishing the IDs of all spots, and obtains the six-degree-of-freedom information of the virtual reality helmet 10 with the PnP algorithm. From this six-degree-of-freedom information the processing unit 30 determines the relative position of the virtual reality helmet 10 and the infrared camera 20, keeps at least 4 infrared point light sources 13 on the virtual reality helmet 10 that face the infrared camera 20 in the on state, and turns off the other infrared point light sources 13. The 4 lit infrared point light sources 13 are screened out as follows: the processing unit 30 finds the spot closest to the center of the image 41 as a center point, keeps the infrared point light source 13 corresponding to that spot's ID and the 3 infrared point light sources 13 closest to it in the lit state, and simultaneously turns off the other infrared point light sources 13.
At this point there are only 4 light spots in the image 41 of the next frame, and the processing unit 30 can track each spot and label its corresponding ID as follows. Because the sampling interval per frame in spatial positioning is short enough, generally 30 ms, the displacement of each spot between the previous frame and the current frame is in general small. The processing unit 30 therefore applies a small translation to the spots of the previous frame image, using the known history of the previous frame, to establish a correspondence between the spots of the previous frame image and those of the current frame image; from this correspondence and the previous frame's history, the ID of each matched spot in the current frame image is determined. With the IDs of all spots known, the processing unit 30 directly calls the PnP algorithm to obtain the spatial position of the virtual reality helmet 10.
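A minimal sketch of this tracking step: with roughly 30 ms between frames, each current spot is matched to the nearest previous spot within a small radius and inherits its ID (max_shift is an assumed tolerance, not a value from the patent):

```python
import numpy as np

def propagate_ids(prev_xy, prev_ids, curr_xy, max_shift=15.0):
    curr_ids = {}
    for j, spot in enumerate(curr_xy):
        d = np.linalg.norm(prev_xy - spot, axis=1)
        i = int(np.argmin(d))
        if d[i] <= max_shift:          # small translation between frames
            curr_ids[j] = prev_ids[i]  # inherit the previous frame's ID
    return curr_ids
```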
Referring to figs. 5-8, when movement of the virtual reality helmet 10 reduces the number of light spots in the image 41, the processing unit 30 controls the virtual reality helmet 10 to light the corresponding infrared point light sources 13 as replacements, keeping the number of spots in the image 41 at 4. Specifically, when the leftmost spot in the image 41 disappears because of the movement of the virtual reality helmet 10, the processing unit 30 lights the unlit infrared point light source 13 closest to the infrared point light source 13 corresponding to the rightmost spot; when the rightmost spot in the image 41 disappears because of the movement of the virtual reality helmet 10, the processing unit 30 lights the unlit infrared point light source 13 closest to the infrared point light source 13 corresponding to the leftmost spot. This keeps 4 spots in the image 41 and ensures that the PnP algorithm can run smoothly. For a newly lit infrared point light source 13, the corresponding spot is determined by comparing the image difference between the current frame and the previous frame; the ID of that spot is the ID of the newly lit infrared point light source 13.
After ID identification is complete, the processing unit 30 calls the PnP algorithm to obtain the spatial position of the helmet; the PnP algorithm is prior art and is not described in detail here.
Compared with the prior art, the method improves positioning efficiency by turning off the infrared point light sources 13 that would complicate the calculation, and screens the infrared point light sources 13 to be turned off by their relative positions in the image 41. Lighting only the infrared point light sources 13 facing the infrared camera 20 simplifies ID identification and prevents light spots from quickly moving out of the image 41 and degrading the efficiency of spatial positioning. Judging the orientation of the virtual reality helmet 10 from the computed pose allows the infrared point light sources 13 facing the infrared camera 20 to be found quickly. Taking the infrared point light source 13 nearest the center of the image 41 as the center point and finding the three infrared point light sources 13 nearest to it yields the four infrared point light sources 13 facing the infrared camera 20 relatively quickly. When the number of light spots in the image 41 falls, the processing unit 30 lights the corresponding infrared point light sources 13 to keep the spot count stable, which simplifies positioning and effectively prevents failure caused by having fewer spots than the PnP algorithm requires. Applying a small translation to the spots ensures that each spot remains matched to the ID of its infrared point light source 13 as the virtual reality helmet 10 moves.
While the present invention has been described with reference to the embodiments shown in the drawings, the invention is not limited to those embodiments, which are illustrative rather than restrictive; those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A virtual reality space positioning feature point screening method, characterized by comprising the following steps:
s1: with all infrared point light sources in the on state, a processing unit controls an infrared camera to photograph a virtual reality helmet and calculates the image coordinates of the light spot formed by each infrared point light source;
s2: the processing unit identifies the ID of each light spot in the image, establishing the ID corresponding to every spot;
s3: the processing unit calculates six-degree-of-freedom information of the virtual reality helmet and, from the helmet's orientation information, selects at least 4 infrared point light sources directly facing the infrared camera;
s4: the processing unit keeps the at least 4 infrared point light sources directly facing the infrared camera lit, switches the other infrared point light sources off, then controls the infrared camera to photograph the virtual reality helmet again and locates it by running a PnP algorithm on the image.
2. The virtual reality space positioning feature point screening method according to claim 1, wherein the processing unit finds the light spot closest to the center of the image as a center point, keeps the infrared point light source corresponding to that spot's ID and the 3 infrared point light sources closest to it in the lit state, and simultaneously turns off the other infrared point light sources.
3. The virtual reality space positioning feature point screening method according to claim 2, wherein the processing unit controls the switching on and off of the infrared point light sources to ensure that 4 light spots are present in the image.
4. The virtual reality space positioning feature point screening method according to claim 3, wherein, when the leftmost light spot in the image disappears, the processing unit lights the unlit infrared point light source closest to the infrared point light source corresponding to the rightmost spot.
5. The virtual reality space positioning feature point screening method according to claim 3, wherein, when the rightmost light spot in the image disappears, the processing unit lights the unlit infrared point light source closest to the infrared point light source corresponding to the leftmost spot.
6. The virtual reality space positioning feature point screening method according to claim 4 or 5, wherein the light spot corresponding to a newly lit infrared point light source is determined by comparing the image difference between the current frame and the previous frame of the imaging picture, the ID of that spot being the ID of the newly lit infrared point light source.
7. The virtual reality space positioning feature point screening method according to any one of claims 1 to 5, wherein the processing unit applies a small translation to the light spots of the previous frame image, using the known history of the previous frame, to establish a correspondence between the light spots of the previous frame image and those of the current frame image, and determines the ID of each matched light spot in the current frame image from this correspondence and the previous frame's history.
CN201611200542.9A 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method Active CN106599930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611200542.9A CN106599930B (en) 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611200542.9A CN106599930B (en) 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method

Publications (2)

Publication Number Publication Date
CN106599930A CN106599930A (en) 2017-04-26
CN106599930B 2021-06-11

Family

ID=58602663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611200542.9A Active CN106599930B (en) 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method

Country Status (1)

Country Link
CN (1) CN106599930B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599929B (en) * 2016-12-22 2021-03-19 深圳市虚拟现实技术有限公司 Virtual reality feature point screening space positioning method
CN107390952A (en) * 2017-07-04 2017-11-24 深圳市虚拟现实科技有限公司 Virtual reality handle characteristic point space-location method
CN108414195B (en) * 2018-01-17 2020-09-08 深圳市绚视科技有限公司 Detection method, device and system of light source emitter to be detected and storage device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Eye gazing direction inspection based on image processing technique; Hao, Q. et al.; Optical Design and Testing II, Pts 1 and 2; 2005-12-31; vol. 5638; pp. 124-132 *
Application of binocular vision in a positioning system for robots assisting the elderly and the disabled; Liu Guigui et al.; Microcomputer & Its Applications; 2016-07-10 (No. 13); pp. 45-47, 50 *

Also Published As

Publication number Publication date
CN106599930A (en) 2017-04-26

Similar Documents

Publication Publication Date Title
US9560345B2 (en) Camera calibration
US11115633B2 (en) Method and system for projector calibration
CN106599929B (en) Virtual reality feature point screening space positioning method
CA2870751C (en) Point-of-gaze detection device, point-of-gaze detecting method, personal parameter calculating device, personal parameter calculating method, program, and computer-readable storage medium
US9087258B2 (en) Method for counting objects and apparatus using a plurality of sensors
JP4341723B2 (en) Light projection device, lighting device
US9295141B2 (en) Identification device, method and computer program product
US20160266722A1 (en) Operation detection device, operation detection method and projector
CN105631390B (en) Method and system for spatial finger positioning
CN106599930B (en) Virtual reality space positioning feature point screening method
JP5772714B2 (en) Light detection device and vehicle control system
US10500482B2 (en) Method of operating a video gaming system
US9888188B2 (en) Image capture enhancement using dynamic control image
JP2020515127A (en) Display system and method for delivering multi-view content
WO2018107923A1 (en) Positioning feature point identification method for use in virtual reality space
TWI526879B (en) Interactive system, remote controller and operating method thereof
JP2015115649A (en) Device control system and device control method
JP2017192008A (en) Object detection apparatus
JP6011173B2 (en) Pupil detection device and pupil detection method
KR20150111627A (en) control system and method of perforamance stage using indexing of objects
JP6430813B2 (en) Position detection apparatus, position detection method, gazing point detection apparatus, and image generation apparatus
US11747478B2 (en) Stage mapping and detection using infrared light
JP2014160017A (en) Management device, method and program
TWI520100B (en) Free space orientation and position determining method and system
US20190073513A1 (en) Electronic apparatus, method for controlling thereof and the computer readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant