CN106599929B - Virtual reality feature point screening space positioning method - Google Patents

Virtual reality feature point screening space positioning method

Info

Publication number
CN106599929B
Authority
CN
China
Prior art keywords
spot
infrared
virtual reality
processing unit
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611199871.6A
Other languages
Chinese (zh)
Other versions
CN106599929A (en)
Inventor
李宗乘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen 3glasses Vr Technology Co ltd
Original Assignee
Shenzhen 3glasses Vr Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen 3glasses Vr Technology Co ltd filed Critical Shenzhen 3glasses Vr Technology Co ltd
Priority to CN201611199871.6A priority Critical patent/CN106599929B/en
Publication of CN106599929A publication Critical patent/CN106599929A/en
Priority to PCT/CN2017/109794 priority patent/WO2018113433A1/en
Application granted granted Critical
Publication of CN106599929B publication Critical patent/CN106599929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a virtual reality feature point screening space positioning method, which comprises the following steps: training a neural network with preprocessed pictures; keeping the infrared point light sources of the virtual reality helmet turned on and capturing images with an infrared camera; preprocessing each captured picture to obtain a preprocessed image; and feeding the preprocessed image into the neural network to obtain the ID of the infrared point light source corresponding to each light spot. Compared with the prior art, introducing a neural network algorithm into the virtual reality space positioning method makes the determination of light spot IDs accurate and efficient. By preprocessing both the training images and the test images, the method prevents the variability of the pictures from degrading recognition accuracy: the varied pictures are standardized, which greatly increases the success rate and accuracy of ID identification.
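As a rough illustration of the workflow summarized in the abstract (not the patent's actual model, preprocessing, or data), the following sketch trains a small neural network on normalized image patches cut around labelled light spots and then predicts the source ID of each spot in a new picture; the patch size, normalization, and network shape are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def preprocess(patch):
    """Contrast-stretch a grayscale patch to [0, 1] and flatten it, so that
    training and test pictures are standardized before classification."""
    p = patch.astype(np.float32)
    p = (p - p.min()) / (p.max() - p.min() + 1e-6)
    return p.reshape(-1)

def train_spot_classifier(train_patches, train_ids):
    """train_patches: patches cut around labelled spots; train_ids: their source IDs."""
    X = np.stack([preprocess(p) for p in train_patches])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(X, train_ids)
    return clf

def identify_spots(clf, test_patches):
    """Return one predicted infrared-point-light-source ID per detected spot."""
    X = np.stack([preprocess(p) for p in test_patches])
    return clf.predict(X)
```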

Description

Virtual reality feature point screening space positioning method
Technical Field
The invention relates to the field of virtual reality, and in particular to a virtual reality feature point screening space positioning method.
Background
Spatial positioning generally uses optical or ultrasonic means for positioning and measurement, and derives the spatial position of the object to be measured by building a model. A typical virtual reality space positioning system determines the spatial position of an object by combining infrared points with a light-sensing camera: the infrared points are mounted on the front of a near-eye display device, and during positioning the light-sensing camera captures the positions of the infrared points, from which the user's physical coordinates are calculated. If the correspondence between at least three light sources and their projections is known, the spatial position of the helmet can be obtained by invoking the PnP (Perspective-n-Point) algorithm. The key to this process is determining the ID (identity) of each corresponding light source. Current virtual reality space positioning suffers from inaccurate image recognition at certain distances and orientations, so determining the light source ID corresponding to each projection takes too long and is unreliable, which degrades positioning accuracy and efficiency.
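For context on the PnP step mentioned above, the following is a minimal sketch using OpenCV's solvePnP. The 3D light-source coordinates, 2D projections, and camera intrinsics are made-up illustrative values, not data from the patent:

```python
import cv2
import numpy as np

# 3D positions of four infrared point light sources in the helmet's own frame
# (metres) and the matching 2D spot centroids in the image (pixels).
object_points = np.array([[0.00, 0.00, 0.0],
                          [0.05, 0.00, 0.0],
                          [0.05, 0.03, 0.0],
                          [0.00, 0.03, 0.0]], dtype=np.float64)
image_points = np.array([[310.2, 242.1],
                         [388.7, 240.5],
                         [390.1, 291.8],
                         [312.6, 293.4]], dtype=np.float64)

# Assumed pinhole intrinsics of the infrared camera, with negligible distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # helmet orientation as a rotation matrix
    print("helmet position relative to the camera:", tvec.ravel())
```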
Disclosure of Invention
In order to overcome the low positioning accuracy and efficiency of current virtual reality space positioning, the invention provides a virtual reality feature point screening space positioning method that can improve positioning accuracy and efficiency.
The technical solution adopted by the invention to solve this technical problem is as follows: the virtual reality feature point screening space positioning method comprises the following steps:
S1: ensuring that all infrared point light sources are on, the processing unit controls the infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source;
S2: the processing unit identifies the ID of each light spot in the imaged picture and finds the ID corresponding to every light spot point;
S3: the processing unit keeps at least 4 infrared point light sources corresponding to the identified IDs lit and turns off the remaining infrared point light sources; the processing unit then controls the infrared camera to capture an image of the virtual reality helmet and uses the PnP algorithm on the image to compute the position;
S4: when the number of light spots in the imaged picture does not satisfy the number required by the PnP algorithm, S1 to S3 are executed again (the overall flow is sketched below).
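The following is a minimal sketch of the S1-S4 flow referred to above. The patent does not define a software interface, so the hardware- and image-facing helpers are passed in as callables, and every helper name here is a hypothetical placeholder rather than an API from the patent:

```python
MIN_SPOTS_FOR_PNP = 4  # the PnP step here requires at least this many spots

def positioning_loop(capture_frame, set_light_sources, detect_spots,
                     identify_spot_ids, screen_sources, solve_pose_pnp,
                     all_source_ids):
    """Yields helmet poses; all arguments are caller-supplied callables/data."""
    while True:
        # S1: light every infrared point light source and measure the spots
        set_light_sources(on_ids=all_source_ids)
        frame = capture_frame()
        spots = detect_spots(frame)                  # list of (x, y) centroids

        # S2: work out which light source produced each spot
        ids = identify_spot_ids(frame, spots)

        # S3: keep a screened subset of at least 4 sources lit, then position
        keep = screen_sources(spots, ids, frame.shape)
        set_light_sources(on_ids=keep)

        while True:
            frame = capture_frame()
            spots = detect_spots(frame)
            if len(spots) < MIN_SPOTS_FOR_PNP:
                break                                # S4: go back to S1
            yield solve_pose_pnp(spots, keep)
```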
Preferably, the imaged picture is rectangular and the length of its long side is d; the processing unit calculates the distance between every two spot points and selects the maximum distance d'; when d' > d/2, the processing unit finds the spot point closest to the center of the imaged picture, keeps the infrared point light source corresponding to that spot's ID and the 3 infrared point light sources nearest to it lit, and turns off the other infrared point light sources.
Preferably, the imaged picture is rectangular and the length of its long side is d; the processing unit calculates the distance between every two spot points and selects the maximum distance d'; when d' < d/2, the processing unit finds at least 4 infrared point light sources whose spot points lie toward the outside, keeps those infrared point light sources lit, and turns off the other infrared point light sources.
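A minimal sketch of the two preferred screening rules above. The patent does not state whether "nearest" and "outer" are measured in the image or on the helmet, so both are approximated here from the spot positions on the imaged picture, and returning exactly n_keep sources in the clustered case is an assumption standing in for "at least 4":

```python
import numpy as np

def screen_sources(spots, ids, image_size, n_keep=4):
    """Choose which infrared point light sources to keep lit.

    spots: (N, 2) array-like of light-spot centroids in pixels
    ids: length-N sequence of the source IDs already identified for the spots
    image_size: (height, width) of the imaged picture
    """
    pts = np.asarray(spots, dtype=np.float64)
    d = float(max(image_size))                       # long side of the picture
    diff = pts[:, None, :] - pts[None, :, :]
    d_max = np.sqrt((diff ** 2).sum(axis=-1)).max()  # largest spot-to-spot distance

    if d_max > d / 2:
        # Spots are spread out: keep the spot nearest the picture center and
        # the 3 sources whose spots are nearest to it.
        center = np.array([image_size[1] / 2.0, image_size[0] / 2.0])
        anchor = np.argmin(np.linalg.norm(pts - center, axis=1))
        order = np.argsort(np.linalg.norm(pts - pts[anchor], axis=1))
    else:
        # Spots are clustered: keep the outermost spots so that the baseline
        # between them stays large enough for an accurate PnP solution.
        centroid = pts.mean(axis=0)
        order = np.argsort(-np.linalg.norm(pts - centroid, axis=1))

    return [ids[i] for i in order[:n_keep]]
```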
Preferably, the processing unit, using the known history information of the previous frame, applies a slight translation to the light spot points of the previous frame image to establish a correspondence between the light spot points of the previous frame image and those of the current frame image, and determines the ID of each corresponded light spot on the current frame image from this correspondence and the history information of the previous frame.
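A minimal sketch of this frame-to-frame ID propagation, with a simple nearest-neighbour match standing in for the "slight translation"; the pixel threshold is an illustrative assumption:

```python
import numpy as np

def propagate_ids(prev_spots, prev_ids, curr_spots, max_shift=15.0):
    """Carry spot IDs from the previous frame to the current frame.

    Because the per-frame sampling time is short, each spot is assumed to move
    only slightly between frames, so the nearest previous-frame spot within
    max_shift pixels is treated as the same physical light source.
    """
    prev = np.asarray(prev_spots, dtype=np.float64)
    curr = np.asarray(curr_spots, dtype=np.float64)
    curr_ids = [None] * len(curr)
    for i, point in enumerate(curr):
        dists = np.linalg.norm(prev - point, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_shift:
            curr_ids[i] = prev_ids[j]    # same source as in the previous frame
    return curr_ids                      # None where no close match was found
```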
Compared with the prior art, the method improves positioning efficiency by turning off the infrared point light sources that would complicate the calculation, and provides a screening method that uses the relative positions of the infrared point light sources on the imaged picture to decide which of them to turn off. Comparing the maximum distance between light spots with the length of the long side of the imaged picture to decide whether to turn off the corresponding infrared point light sources is simple, easy to implement, and highly operable. When the maximum distance between the light spots is larger than half the length of the long side of the imaged picture, the infrared point light sources corresponding to the 4 central spot points are kept lit; this allows the PnP algorithm to be used effectively while ensuring that the spot points used for positioning do not quickly move out of the imaged picture, avoiding the large amount of time that repeated ID identification would consume. When the maximum distance between the light spots is smaller than half the length of the long side of the imaged picture, the infrared point light sources corresponding to at least 4 outer spot points are kept lit; this again allows the PnP algorithm to be used effectively while ensuring that the spot points are far enough apart to avoid large errors caused by pixel quantization and the like. By slightly translating the spot points, the current spot points are matched to the spot points of the previous frame image, which avoids repeated ID identification and saves a large amount of time.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic diagram illustrating the principle of the virtual reality feature point screening space positioning method of the present invention;
FIG. 2 is a schematic diagram of the distribution of the infrared point light sources in the virtual reality feature point screening space positioning method of the present invention;
FIG. 3 shows a first image captured by the infrared camera;
FIG. 4 shows the first imaged picture after some of the infrared point light sources have been turned off;
FIG. 5 shows a second image captured by the infrared camera;
FIG. 6 shows the second imaged picture after some of the infrared point light sources have been turned off.
Detailed Description
In order to overcome the low positioning accuracy and efficiency of current virtual reality space positioning, the invention provides a virtual reality feature point screening space positioning method that can improve positioning accuracy and efficiency.
For a clearer understanding of the technical features, objects, and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
Please refer to FIGS. 1-2. The virtual reality feature point screening space positioning method is carried out with a virtual reality helmet 10, an infrared camera 20, and a processing unit 30, the infrared camera 20 being electrically connected with the processing unit 30. The virtual reality helmet 10 comprises a front panel 11, and a plurality of infrared point light sources 13 are distributed on the front panel 11 and on the four side panels (upper, lower, left, and right) of the virtual reality helmet 10. The number of infrared point light sources 13 is at least the minimum number on which the PnP algorithm can operate. The shape of the infrared point light sources 13 is not particularly limited. For illustration, the number of infrared point light sources 13 on the front panel 11 is taken as 7, and the 7 infrared point light sources form an approximate "W" shape. The infrared point light sources 13 can be turned on or off as needed through the firmware interface of the virtual reality helmet 10. When the infrared camera 20 captures an image, each infrared point light source 13 on the virtual reality helmet 10 forms a light spot on the image; because of the band-pass characteristic of the infrared camera, only the infrared point light sources 13 form light-spot projections on the image, and the remaining parts form a uniform background.
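Because only the infrared point light sources 13 pass the camera's band-pass filter, the spot coordinates used below can be obtained with a simple threshold-and-centroid step. The following sketch uses OpenCV connected components; the threshold value and minimum blob area are illustrative assumptions, not parameters from the patent:

```python
import cv2
import numpy as np

def detect_spots(gray_frame, min_area=4):
    """Return the sub-pixel (x, y) centroid of each bright spot in an
    8-bit grayscale infrared camera frame."""
    _, binary = cv2.threshold(gray_frame, 200, 255, cv2.THRESH_BINARY)
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    spots = []
    for label in range(1, n_labels):               # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            spots.append(tuple(centroids[label]))
    return spots
```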
Referring to FIGS. 3-4, FIG. 3 shows an imaged picture 41 of the infrared point light sources 13 captured by the infrared camera 20; the imaged picture 41 is rectangular and the length of its long side is d. With all infrared point light sources on, the processing unit 30 controls the infrared camera 20 to capture an image of the virtual reality helmet 10, and seven light spots appear on the imaged picture 41. The processing unit 30 calculates the coordinates of each light spot from its position on the imaged picture 41, calculates the distance between every two light spots, and selects the maximum distance d' among them. When d' > d/2, the light spots occupy a large range of the imaged picture 41. Since performing ID (identity) identification and the PnP algorithm for every light spot in turn takes a lot of time, only a subset of the spot points is needed to satisfy the requirements of the PnP algorithm. The processing unit 30 therefore first performs ID identification on each light spot in the imaged picture 41 and finds the IDs corresponding to all light spots; it then finds the light spot closest to the center of the imaged picture 41 as the center point, keeps the infrared point light source 13 corresponding to that spot's ID and the 3 infrared point light sources 13 nearest to it lit, and turns off the other infrared point light sources 13. Only 4 spot points then appear on the imaged picture 41 of the next frame, and the processing unit 30 can track each spot point and assign its corresponding ID. The specific method is as follows: in spatial positioning, the sampling time of each frame is small enough (generally 30 ms) that the position difference between each light spot of the previous frame and the corresponding light spot of the current frame is normally small; the processing unit 30, using the known history information of the previous frame, applies a slight translation to the light spot points of the previous frame image to establish a correspondence between the spot points of the previous frame image and those of the current frame image, and the ID of each corresponded spot point on the current frame image can then be determined from this correspondence and the history information of the previous frame. With the IDs of all spot points known, the processing unit 30 directly calls the PnP algorithm to obtain the spatial position of the virtual reality helmet 10. When movement of the virtual reality helmet 10 causes the number of light spots in the imaged picture 41 to fall below the number required by the PnP algorithm, the above method is executed again to select new infrared point light sources 13 to light.
Referring to FIGS. 5-6, in FIG. 5 there are seven light spots on the imaged picture 41. The processing unit 30 calculates the coordinates of each light spot from its position on the imaged picture 41, calculates the distance between every two light spots, and selects the maximum distance d' among them. When d' < d/2, the light spots occupy only a small range of the imaged picture 41. Since performing ID identification and the PnP algorithm for every light spot in turn takes a lot of time, only a subset of the light spots is needed to satisfy the requirements of the PnP algorithm. The processing unit 30 therefore first performs ID identification on each light spot in the image and finds the IDs corresponding to all spot points; it then finds at least 4 infrared point light sources 13 whose spot points lie toward the outside, keeps those infrared point light sources 13 lit, and turns off the other infrared point light sources 13. This ensures that the spots on the imaged picture 41 are not too dense, which would otherwise compromise the accuracy of the measurement. The processing unit 30 can then directly call the PnP algorithm to obtain the spatial position of the virtual reality helmet 10. When movement of the virtual reality helmet 10 causes the number of light spots in the imaged picture 41 to fall below the number required by the PnP algorithm, the above method is executed again to select new infrared point light sources 13 to light.
After ID identification is completed, the processing unit 30 calls the PnP algorithm to obtain the spatial position of the helmet; the PnP algorithm itself belongs to the prior art and is not described in detail here.
Compared with the prior art, the method improves positioning efficiency by turning off the infrared point light sources 13 that would complicate the calculation, and screens the infrared point light sources 13 to be turned off by using their relative positions on the imaged picture 41. Comparing the maximum distance between light spots with the length of the long side of the imaged picture 41 to decide whether to turn off the corresponding infrared point light sources 13 is simple, easy to implement, and highly operable. When the maximum distance between the light spots is larger than half the length of the long side of the imaged picture 41, the infrared point light sources 13 corresponding to the 4 central spot points are kept lit; this allows the PnP algorithm to be used effectively while ensuring that the spot points used for positioning do not quickly move out of the imaged picture 41, avoiding the large amount of time that repeated ID identification would consume. When the maximum distance between the light spots is smaller than half the length of the long side of the imaged picture 41, the infrared point light sources 13 corresponding to at least 4 outer spot points are kept lit; this again allows the PnP algorithm to be used effectively while ensuring that the spot points are far enough apart to avoid large errors caused by pixel quantization and the like. By slightly translating the spot points, the current spot points are matched to the spot points of the previous frame image, which avoids repeated ID identification and saves a large amount of time.
While the present invention has been described with reference to the embodiments shown in the drawings, the invention is not limited to these embodiments, which are illustrative rather than restrictive. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (4)

1. A virtual reality feature point screening space positioning method, characterized by comprising the following steps:
S1: ensuring that all infrared point light sources are on, the processing unit controls the infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source;
S2: the processing unit identifies the ID of each light spot in the imaged picture and finds the ID corresponding to every light spot point;
S3: the processing unit keeps at least 4 infrared point light sources corresponding to the identified IDs lit and turns off the remaining infrared point light sources; the processing unit then controls the infrared camera to capture an image of the virtual reality helmet and uses the PnP algorithm on the image to compute the position;
S4: when the number of light spots in the imaged picture does not satisfy the number required by the PnP algorithm, S1 to S3 are executed again.
2. The virtual reality feature point screening space positioning method of claim 1, wherein the imaged picture is rectangular and the length of its long side is d; the processing unit calculates the distance between every two spot points and selects the maximum distance d'; when d' > d/2, the processing unit finds the spot point closest to the center of the imaged picture, keeps the infrared point light source corresponding to that spot's ID and the 3 infrared point light sources nearest to it lit, and turns off the other infrared point light sources.
3. The virtual reality feature point screening space positioning method of claim 1, wherein the imaged picture is rectangular and the length of its long side is d; the processing unit calculates the distance between every two spot points and selects the maximum distance d'; when d' < d/2, the processing unit finds at least 4 infrared point light sources whose spot points lie toward the outside, keeps those infrared point light sources lit, and turns off the other infrared point light sources.
4. The virtual reality feature point screening space positioning method of any one of claims 1-3, wherein the processing unit, using the known history information of the previous frame, applies a slight translation to the light spot points of the previous frame image to establish a correspondence between the light spot points of the previous frame image and those of the current frame image, and determines the ID of each corresponded light spot on the current frame image from this correspondence and the history information of the previous frame.
CN201611199871.6A 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method Active CN106599929B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611199871.6A CN106599929B (en) 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method
PCT/CN2017/109794 WO2018113433A1 (en) 2016-12-22 2017-11-07 Method for screening and spatially locating virtual reality feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611199871.6A CN106599929B (en) 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method

Publications (2)

Publication Number Publication Date
CN106599929A CN106599929A (en) 2017-04-26
CN106599929B true CN106599929B (en) 2021-03-19

Family

ID=58601028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611199871.6A Active CN106599929B (en) 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method

Country Status (2)

Country Link
CN (1) CN106599929B (en)
WO (1) WO2018113433A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599929B (en) * 2016-12-22 2021-03-19 深圳市虚拟现实技术有限公司 Virtual reality feature point screening space positioning method
CN107562189B (en) * 2017-07-21 2020-12-11 广州励丰文化科技股份有限公司 Space positioning method based on binocular camera and service equipment
CN110555879B (en) 2018-05-31 2023-09-08 京东方科技集团股份有限公司 Space positioning method, device, system and computer readable medium thereof
US20220067949A1 (en) * 2020-08-25 2022-03-03 Htc Corporation Object tracking method and object tracking device
CN113739803B (en) * 2021-08-30 2023-11-21 中国电子科技集团公司第五十四研究所 Indoor and underground space positioning method based on infrared datum points

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599929B (en) * 2016-12-22 2021-03-19 深圳市虚拟现实技术有限公司 Virtual reality feature point screening space positioning method
CN106599930B (en) * 2016-12-22 2021-06-11 深圳市虚拟现实技术有限公司 Virtual reality space positioning feature point screening method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Eye gazing direction inspection based on image processing technique; Hao, Q. et al.; Optical Design and Testing II, Pts 1 and 2; 2005-12-31; Vol. 5638; pp. 124-132 *
Application of binocular vision in the positioning system of robots for assisting the elderly and the disabled; Liu Guigui et al.; Microcomputer & Its Applications; 2016-07-10 (No. 13); pp. 45-47, 50 *

Also Published As

Publication number Publication date
CN106599929A (en) 2017-04-26
WO2018113433A1 (en) 2018-06-28

Similar Documents

Publication Publication Date Title
CN106599929B (en) Virtual reality feature point screening space positioning method
CN107346406B (en) Method and system for information transmission
CN110967166B (en) Detection method, detection device and detection system of near-eye display optical system
EP2608536B1 (en) Method for counting objects and apparatus using a plurality of sensors
CN103501688B (en) The method and apparatus that point of fixation maps
US20150116502A1 (en) Apparatus and method for dynamically selecting multiple cameras to track target object
CN107543530B (en) Method, system, and non-transitory computer-readable recording medium for measuring rotation of ball
CN101013028A (en) Image processing method and image processor
JP2000357055A (en) Method and device for correcting projection image and machine readable medium
CN102369498A (en) Touch pointers disambiguation by active display feedback
KR20130114899A (en) Image sensing method using dual camera and apparatus thereof
CN110087049A (en) Automatic focusing system, method and projector
KR20160145545A (en) Method of enhanced alignment of two means of projection
CN108369744A (en) It is detected by the 3D blinkpunkts of binocular homography
WO2023165223A1 (en) Measuring method and apparatus for near-eye display
CN106599930B (en) Virtual reality space positioning feature point screening method
CN109814401A (en) Control method, household appliance and the readable storage medium storing program for executing of household appliance
CN107707898B (en) The image distortion correcting method and laser-projector of laser-projector
US8229167B2 (en) Optical tracking device and positioning method thereof
US20140218477A1 (en) Method and system for creating a three dimensional representation of an object
CN105391998B (en) Automatic detection method and apparatus for resolution of low-light night vision device
CN103186233B (en) Panoramic interaction control method for eye location
CN116382473A (en) Sight calibration, motion tracking and precision testing method based on self-adaptive time sequence analysis prediction
CN101980299A (en) Chessboard calibration-based camera mapping method
CN106530774A (en) Led signal lamp

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant