CN114549285A - Controller positioning method and device, head-mounted display equipment and storage medium - Google Patents

Info

Publication number
CN114549285A
CN114549285A
Authority
CN
China
Prior art keywords
light
light spot
image
controller
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210074244.9A
Other languages
Chinese (zh)
Inventor
章烛明
胡永涛
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority to CN202210074244.9A
Publication of CN114549285A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a controller positioning method and apparatus, a head-mounted display device, and a storage medium, and relates to the technical field of virtual reality. The method comprises the following steps: acquiring an image containing a controller in a real environment as a first image; acquiring a first number of light spots which accord with a preset distribution condition from the first image as a first light spot set; determining, from the three-dimensional models corresponding to the controller, a three-dimensional model matched with the first light spot set as a target three-dimensional model; mapping the target three-dimensional model to a two-dimensional plane to obtain a second image; acquiring, from a second light spot set in the first image, a second number of light spots matched with the distribution of the preset light spots in the second image as a third light spot set; and determining the pose information of the controller in the real environment based on the first light spot set and the third light spot set. In this way, a sufficient number of light spots formed by the light-emitting units of the controller can be accurately identified, so that the pose information of the controller determined based on these light spot sets is more accurate.

Description

Controller positioning method and device, head-mounted display equipment and storage medium
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a method and an apparatus for positioning a controller, a head-mounted display device, and a storage medium.
Background
A Head-Mounted Display (HMD) is a display device that can be worn on the head of a user and can achieve different effects such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). The HMD may be used in conjunction with a handle controller: a virtual, augmented, or mixed reality scene is presented on the HMD, and the user interacts with elements in the scene through the handle controller held in the hand.
In the related art, the handle controller is provided with light-emitting units, an image acquisition device captures images of the controller during motion, and the attitude and position information of the handle controller is inversely solved from the positions of the light spots corresponding to the light-emitting units in the image, thereby realizing real-time tracking of the handle controller. However, the light spots formed by the light-emitting units of the controller on the captured image may not be accurately identified, so that the attitude and position information inversely solved from those light spots is inaccurate.
Disclosure of Invention
In view of the above, the present application provides a method and an apparatus for positioning a controller, a head-mounted display device, and a storage medium.
In a first aspect, an embodiment of the present application provides a method for positioning a controller, where the method includes: acquiring an image containing a controller in a real environment as a first image, wherein the controller is provided with a plurality of light-emitting units, and the first image comprises light spots corresponding to the light-emitting units; acquiring a first number of light spots which accord with a preset distribution condition from the first image to obtain a first light spot set; determining a three-dimensional model matched with the first light point set from three-dimensional models corresponding to the controller to serve as a target three-dimensional model; mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all light-emitting units on the controller; acquiring a second number of light spots matched with the preset light spot distribution in the second image from a second light spot set in the first image to obtain a third light spot set, wherein the second light spot set comprises other light spots in the first image except the first light spot set; determining pose information of the controller in the real environment based on the first set of light points and the third set of light points.
In a second aspect, an embodiment of the present application provides a positioning apparatus for a controller, where the apparatus includes: the device comprises an image acquisition module, a first light spot set acquisition module, a three-dimensional model acquisition module, a mapping module, a third light spot set acquisition module and a positioning module. The image acquisition module is used for acquiring an image containing a controller in a real environment as a first image, the controller is provided with a plurality of light-emitting units, and the first image comprises light spots corresponding to the light-emitting units; the first light spot set acquisition module is used for acquiring a first number of light spots which accord with a preset distribution condition from the first image to obtain a first light spot set; the three-dimensional model acquisition module is used for determining a three-dimensional model matched with the first light point set from a three-dimensional model corresponding to the controller as a target three-dimensional model; the mapping module is used for mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, and the second image comprises preset light spots corresponding to all light-emitting units on the controller; a third light spot set obtaining module, configured to obtain, from a second light spot set in the first image, a second number of light spots that are matched with the preset light spot distribution in the second image, to obtain a third light spot set, where the second light spot set includes other light spots in the first image except the first light spot set; a positioning module configured to determine pose information of the controller in the real environment based on the first light spot set and the third light spot set.
In a third aspect, an embodiment of the present application provides a head-mounted display device, including: one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the positioning method of the controller provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the positioning method of the controller provided in the first aspect.
In the scheme provided by the application, an image containing a controller in a real environment is obtained and used as a first image, the controller is provided with a plurality of light-emitting units, and the first image comprises light spots corresponding to the light-emitting units; acquiring a first number of light spots which accord with a preset distribution condition from a first image to obtain a first light spot set; determining a three-dimensional model matched with the first light point set from the three-dimensional models corresponding to the controllers as a target three-dimensional model; mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all light-emitting units on the controller; acquiring a second number of light spots matched with the preset light spot distribution in the second image from a second light spot set in the first image to obtain a third light spot set, wherein the second light spot set comprises other light spots except the first light spot set in the first image; and determining the pose information of the controller in the real environment based on the first light spot set and the third light spot set. In this way, based on the distribution of the preset light points in the second image, a sufficient number of light point sets formed by the light emitting units of the controller are accurately identified from the first image, so that the pose information of the controller determined based on the light point sets is more accurate, that is, the accuracy of positioning and tracking the controller is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a schematic diagram of an application scenario provided in an embodiment of the present application.
Fig. 2 is a flowchart illustrating a positioning method of a controller according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of a light emitting unit of a controller according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of a preset light spot in a second image according to an embodiment of the present application.
Fig. 5 is a flowchart illustrating a positioning method of a controller according to another embodiment of the present application.
Fig. 6 shows a schematic diagram of a light spot distribution of a controller provided in an embodiment of the present application.
Fig. 7 shows a flow diagram of the substeps of step S370 in fig. 5.
Fig. 8 shows a flow diagram of the substeps of step S372 in fig. 7.
Fig. 9 is a flowchart illustrating a positioning method of a controller according to yet another embodiment of the present application.
Fig. 10 is a block diagram of a positioning device of a controller according to an embodiment of the present application.
Fig. 11 is a block diagram of a head mounted display device according to an embodiment of the present application for performing a positioning method of a controller according to an embodiment of the present application.
Fig. 12 is a memory unit for storing or carrying program codes for implementing a positioning method of a controller according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The head-mounted display device is a display device which can be worn on the head of a user and can achieve different effects such as VR, AR, and MR. The head-mounted display device can be used together with a handle controller: a virtual reality, augmented reality, or mixed reality scene is presented on the head-mounted display device, and the user interacts with elements in the scene through the handle controller held in the hand.
In the related art, the handle controller is provided with light-emitting units, an image acquisition device captures images of the controller during motion, and the attitude and position information of the handle controller is inversely solved from the positions of the light spots corresponding to the light-emitting units in the image, thereby realizing real-time tracking of the handle controller. However, an interference light source generally exists in the real environment where the handle controller is located, so that the acquired positions of the light spots are inaccurate, and consequently the inversely solved attitude and position information of the handle controller is inaccurate.
In view of the above problems, the inventors propose a method and an apparatus for positioning a controller, a head-mounted display device, and a storage medium, which may first obtain a first light spot set from a first image including the controller, then obtain a second image mapped by a target three-dimensional model of the controller based on the first light spot set, and obtain a third light spot set based on the second image, and finally determine pose information of the controller according to the first light spot set and the third light spot set. This is described in detail below.
An application environment of the positioning method of the controller provided in the embodiment of the present application is described below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application scenario provided for an embodiment of the present application, where the application scenario includes a positioning system 10 of a controller. The positioning system 10 of the controller may include a controller 110 and a head-mounted display device 120, and the controller 110 and the head-mounted display device 120 are connected through a wireless or wired network to implement data transmission between the controller 110 and the head-mounted display device based on the network connection, where the transmitted data includes, but is not limited to, audio, video, text, images, and the like. The number of the controllers 110 may be 1 or multiple, which is not limited in this embodiment.
In some embodiments, the head-mounted display device 120 may capture an image of the controller in the real environment using its image capture device, acquire the first light spot set and the third light spot set from the image, and determine the pose information of the controller in the real environment based on the first light spot set and the third light spot set, so as to achieve positioning of the controller.
In other embodiments, the head-mounted display device 120 may capture an image of the controller in the real environment using its image capture device and send the image to a server; the server obtains the first light spot set and the third light spot set from the image, determines the pose information of the controller in the real environment based on them, and finally feeds the determined pose information back to the head-mounted display device 120 to achieve positioning of the controller. The server includes, but is not limited to, an individual server, a server cluster, a local server, a cloud server, and the like.
Referring to fig. 2, fig. 2 is a flowchart illustrating a positioning method of a controller according to an embodiment of the present application. The following describes in detail the positioning method of the controller provided in the embodiment of the present application with reference to fig. 2. The positioning method of the controller can comprise the following steps:
step S210: the method comprises the steps of obtaining an image containing a controller in a real environment as a first image, wherein the controller is provided with a plurality of light emitting units, and the first image comprises light spots corresponding to the light emitting units.
In this embodiment, taking the analysis and calculation of the pose information of the controller by the head-mounted display device as an example, the head-mounted display device may capture an image including the controller in the real environment as the first image, based on an image capture device that is built into or externally connected to the head-mounted display device. The image acquisition equipment includes, but is not limited to, a monocular camera, a binocular camera, a multi-view camera, a black-and-white camera, a color camera, and the like. The controller carries a plurality of light-emitting units, such as light-emitting units 11 to 16 and 21 to 26 in fig. 3; of course, the number of light-emitting units is not limited to that in the figure and may be preset, for example 10, 16, or 20, which is not limited in this embodiment. The light-emitting units may be light-emitting diodes. The intervals between different adjacent light-emitting units differ, and two adjacent light-emitting units on the same ring are staggered in height, so that when the controller is photographed from different angles, the arrangement of the light-emitting units on the controller does not repeat and remains distinguishable: the arrangement of light spots in first images acquired from different angles differs, which improves the efficiency and accuracy of acquiring the light spots. The shape of a light-emitting unit includes, but is not limited to, a circle, a triangle, a five-pointed star, and the like, and its color may be any color, such as red, yellow, or blue, which is not limited in this embodiment.
In some embodiments, an image including the controller in the real environment may be acquired as the first image by the image acquisition device every preset time period. The preset time length may be a preset time length, for example, 100ms, 500ms, or 1s, and the preset time length may also be adjusted according to different application scenarios, which is not limited in this embodiment.
Step S220: and acquiring a first number of light spots meeting a preset distribution condition from the first image to obtain a first light spot set.
In this embodiment, the real environment where the controller is located may include other interference light sources, and therefore, the light spot formed by the light emitting unit on the controller and the interference light spot formed by the interference light source may exist in the acquired first image at the same time. Based on this, in order to avoid the false recognition of the interference light spot, that is, in order to accurately recognize all the light spots in the first image, a first number of light spots may be obtained from the first image according to the preset distribution condition to obtain a first light spot set, so as to subsequently find out other light spots from the first image according to the first light spot set.
It can be understood that in a PnP (Perspective-n-Point) algorithm, given an image, at least 3 points are needed to solve the pose information of the controller. Based on this, the first number may be preset to 3; correspondingly, the preset distribution condition may be that the included angle between the two line segments formed by the 3 light spots is greater than a preset degree (e.g., 120 degrees) and that the lengths of the two line segments match preset lengths.
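As an illustrative sketch (not part of the patent), the preset distribution condition described above could be checked for a candidate triple of light spots as follows; the angle and length thresholds are hypothetical values, not figures from the application:

```python
import math

def angle_between(p0, p1, p2):
    """Angle (degrees) at vertex p1 formed by segments p1-p0 and p1-p2."""
    v1 = (p0[0] - p1[0], p0[1] - p1[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def matches_distribution(spots, min_angle=120.0, length_range=(20.0, 200.0)):
    """Check whether 3 spots (pixel coordinates) satisfy the preset
    distribution condition: both segment lengths in range and the
    included angle at the middle spot above min_angle."""
    p0, p1, p2 = spots
    l1 = math.hypot(p0[0] - p1[0], p0[1] - p1[1])
    l2 = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    if not (length_range[0] <= l1 <= length_range[1] and
            length_range[0] <= l2 <= length_range[1]):
        return False
    return angle_between(p0, p1, p2) > min_angle
```

Three near-collinear spots of suitable spacing pass; a right-angled triple fails the angle test.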
Step S230: and determining a three-dimensional model matched with the first light point set from the three-dimensional models corresponding to the controller as a target three-dimensional model.
Based on this, after the first light spot set is obtained, the pose information of the controller in the first image can be preliminarily estimated by a P3P (Perspective-3-Point) method as estimated pose information, and the three-dimensional model under the estimated pose information is acquired, based on a prestored three-dimensional model of the controller, as the target three-dimensional model. Specifically, the three-dimensional coordinates, in the world coordinate system, of the light-emitting units corresponding to the first light spot set are acquired, along with the pixel coordinates of the first light spot set in the first image; a rotation matrix and a translation matrix from the world coordinate system to the pixel coordinate system are acquired according to the three-dimensional coordinates, the pixel coordinates, and the geometric relationship among the light spots in the first light spot set; and the estimated pose information of the controller is acquired based on the rotation matrix and the translation matrix.
Step S240: and mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all light-emitting units on the controller.
In this embodiment, after the target three-dimensional model is obtained, the target three-dimensional model may be mapped to a two-dimensional plane to obtain a second image, and it can be understood that the second image obtained by mapping includes preset light points corresponding to all light emitting units on the controller.
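The mapping from the target three-dimensional model to a two-dimensional plane is, in essence, a pinhole-camera projection of the model's light-emitting-unit positions. A minimal NumPy sketch, assuming known camera intrinsics K and the estimated pose (R, t); all names and values are illustrative, not from the patent:

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Map 3D light-emitting-unit positions (N, 3) into pixel coordinates
    (N, 2) using rotation R (3, 3), translation t (3,), and camera
    intrinsics K (3, 3) under the standard pinhole model."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                    # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> (u, v)
```

The resulting (u, v) coordinates are the "preset light spots" of the second image; a point on the optical axis lands at the principal point of K.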
Referring to fig. 4, fig. 4 is a schematic diagram illustrating an arrangement of each preset light point in the second image mapped by the controller in fig. 3 under different pose information. Of course, the aforementioned estimated pose information includes, but is not limited to, several different pose information in fig. 4.
Step S250: and acquiring a second number of light spots matched with the preset light spot distribution in the second image from a second light spot set in the first image to obtain a third light spot set, wherein the second light spot set comprises other light spots in the first image except the first light spot set.
Based on this, the light spots in the first image other than the first light spot set are taken as the second light spot set; understandably, the second light spot set includes both the remaining controller light spots and the interference light spots. Since the second image is obtained by mapping the target three-dimensional model, and the distribution of each light-emitting unit of that model in three-dimensional space is known, the distribution of the preset light point corresponding to each light-emitting unit in the second image is also known. Therefore, a second number of light spots matching the distribution of the preset light spots in the second image can be acquired from the second light spot set, obtaining a third light spot set.
In some embodiments, the second number may be a preset number, for example, 4, 5, 8, 10, or 15, and the like, which is not limited in this embodiment. The second quantity can be correspondingly adjusted according to different positioning accuracy of the controller, and when the requirement on the positioning accuracy of the controller is higher, the second quantity can be set to be a larger numerical value, so that the controller can be positioned more accurately; when the requirement on the timeliness of the positioning of the controller is high, the second number can be set to be a small number, so that the calculation speed can be increased, the timeliness of the positioning of the controller is guaranteed, and the positioning delay is avoided.
In other embodiments, the second number may be determined based on the target three-dimensional model and the pose information of the controller. Specifically, the number of visible light points in the second image mapped by the target three-dimensional model under that pose information is acquired, and the difference between the number of visible light points and the first number is taken as the second number. In this way, all visible light points are obtained as far as possible, and the pose information of the controller is determined from all of them, guaranteeing positioning accuracy. In addition, if the number of acquired visible light points is still smaller than the second number after a preset time duration, the already acquired visible light points are used as the third light spot set and the search stops; this avoids the situation where a light-emitting unit on the controller is occluded by another object, so that a visible light point that should appear in the first image does not, which would otherwise prolong the search for visible light points.
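A hedged sketch of this matching step: candidates from the second light spot set are greedily matched to the known preset spots of the second image by nearest neighbour, stopping once the second number is reached or a time budget expires (the occlusion case described above). The distance threshold, timeout, and greedy strategy are illustrative assumptions, not the patent's algorithm:

```python
import time

def match_remaining_spots(preset_spots, candidate_spots, second_number,
                          max_dist=8.0, timeout_s=0.02):
    """Match candidates (second light spot set) to the preset spots of the
    second image; a candidate farther than max_dist from every preset spot
    (e.g. an interference light spot) is never matched."""
    deadline = time.monotonic() + timeout_s
    third_set = []
    used = set()
    for px, py in preset_spots:
        if len(third_set) >= second_number or time.monotonic() > deadline:
            break
        best, best_d = None, max_dist
        for i, (cx, cy) in enumerate(candidate_spots):
            if i in used:
                continue
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            third_set.append(candidate_spots[best])
    return third_set
```

If a preset spot's light-emitting unit is occluded, no candidate lies within the threshold and the function simply returns the matches found so far.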
Step S260: determining pose information of the controller in the real environment based on the first set of light points and the third set of light points.
In this embodiment, after the first light spot set and the third light spot set are acquired, PnP (Perspective-n-Point) methods such as EPnP (Efficient Perspective-n-Point), DLS (Direct Least Squares), or BA (Bundle Adjustment) may be used, which are not limited in this embodiment, to inversely solve from the first light spot set and the third light spot set the pose information of the controller in the real environment. The pose information may include one or both of position information and attitude information; the position information may be represented by coordinates, and the attitude information may be represented by angles. Thus, the pose information of the controller at different moments is acquired from the first images captured at preset time intervals, realizing real-time tracking and positioning of the controller.
In some embodiments, an IMU (Inertial Measurement Unit) sensor may be built into the controller; the pose information of the controller acquired by the IMU is taken as first pose information, the pose information of the controller determined based on the first light spot set and the third light spot set is taken as second pose information, and the first pose information and the second pose information are fused to obtain the actual pose information of the controller. Fusing the pose information determined by the IMU sensor with that determined from the light spot sets can improve the accuracy of the finally obtained pose information of the controller, which is favorable for the head-mounted display device to track and position the controller in real time.
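As a deliberately simplified illustration of such fusion (a production system would typically use a Kalman filter and quaternion interpolation, neither of which the patent specifies), a weighted complementary blend of the two pose estimates might look like this; the pose tuple layout and weight are assumptions:

```python
def fuse_pose(imu_pose, optical_pose, alpha=0.7):
    """Blend first pose information (IMU) with second pose information
    (optical, from the light spot sets). Each pose is a hypothetical
    (x, y, z, yaw_deg) tuple; alpha weights the optical estimate.
    NOTE: naive per-component blending ignores yaw wrap-around."""
    return tuple(alpha * o + (1.0 - alpha) * i
                 for i, o in zip(imu_pose, optical_pose))
```

With alpha = 0.5 the result is simply the midpoint of the two estimates.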
In some embodiments, since some interfering light sources in the real environment may be positioned very close to the light-emitting units on the controller, in order to avoid misidentifying the interference light spots they form, after the third light spot set is obtained from the second light spot set, it may further be determined whether the size of each light spot in the third light spot set meets a preset size condition. If the size of a target light spot (any light spot in the third light spot set) does not meet the preset size condition, the target light spot is removed from the third light spot set. Finally, the pose information of the controller in the real environment is determined based on the first light spot set and the third light spot set after the target light spot is eliminated. Introducing this size condition avoids misidentifying nearby interference light spots, improves the accuracy of the obtained light spot set, makes the pose information determined from it more accurate, and thus improves the accuracy of positioning and tracking the controller.
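The preset size condition could be as simple as a radius range check; the bounds below are hypothetical, and the (x, y, radius) spot representation is an assumption for illustration:

```python
def filter_by_size(spots, min_radius=2.0, max_radius=12.0):
    """Reject candidate light spots whose radius falls outside the preset
    size condition (likely interference light sources rather than
    controller LEDs). Each spot is an (x, y, radius) tuple."""
    return [s for s in spots if min_radius <= s[2] <= max_radius]
```

Spots that are too small (distant glints) or too large (lamps, windows) are discarded before pose solving.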
In some embodiments, the first image is captured with a plurality of controllers, for example, if the user holds one controller in both the left and right hands, then the first image includes 2 controllers, such as the first controller and the second controller. The light emitting units on the 2 controllers are arranged in a mirror image mode, namely the arrangement of the light emitting units on the 2 controllers is different. Therefore, a first light point set corresponding to the first controller and a first light point set corresponding to the second controller may be determined according to the distribution of light points in the first image, and the pose information of the first controller in the real environment is determined based on the first light point set corresponding to the first controller, and the pose information of the second controller in the real environment is determined according to the first light point set corresponding to the second controller. In this way, the different arrangement of the light-emitting units on the plurality of controllers is helpful for the head-mounted display device to determine the first light point set corresponding to each controller according to the distribution of the light points in the first image, so that the tracking and positioning of each controller in the plurality of controllers are realized.
In this embodiment, a target three-dimensional model is determined from the first light spot set, a second image is obtained by mapping the target three-dimensional model to a two-dimensional plane, and the distribution of preset light spots in the second image is obtained. Based on that distribution, a sufficient number of light spots formed by the light-emitting units of the controller are accurately identified from the first image, so that the pose information of the controller determined based on the light spot sets is more accurate, that is, the accuracy of positioning and tracking the controller is improved.
Referring to fig. 5, fig. 5 is a schematic flowchart of a controller positioning method according to another embodiment of the present application. The positioning method of the controller provided in this embodiment is described in detail below with reference to fig. 5, and may include the following steps:
step S310: the method comprises the steps of obtaining an image containing a controller in a real environment as a first image, wherein the controller is provided with a plurality of light emitting units, and the first image comprises light spots corresponding to the light emitting units.
In this embodiment, the specific content in step S310 may refer to the content in the foregoing embodiments, and is not described herein again.
Step S320: and grouping all the light spots in the first image to obtain a plurality of light spot combinations, wherein the number of the light spots contained in each light spot combination in the plurality of light spot combinations is the first number.
In this embodiment, all the light spots in the first image are grouped into a plurality of light spot combinations, each containing the first number of light spots. The first number of light spots in a combination may all be adjacent (for example, 3 adjacent light spots form 1 combination); alternatively, they need not be adjacent, and different grouping manners may be preset according to different requirements, which is not limited in this embodiment.
In some embodiments, if the plurality of light-emitting units carried on the controller are arranged in a plurality of rings, all the light spots in the first image corresponding to any single ring are grouped to obtain the plurality of light spot combinations. For example, when the first number is 3 and the number of rings is 2, each light spot combination includes 3 spots from the upper ring or 3 spots from the lower ring. Grouping only spots on the same ring limits the number of resulting combinations, so matching the first light spot set among them is faster, which improves the matching speed of the first light spot set and the efficiency of positioning and tracking the controller.
In other embodiments, if the plurality of light-emitting units carried on the controller are arranged in a plurality of rings, all the light spots on a target ring among the plurality of rings are grouped to obtain a plurality of first light spot combinations, each containing a third number of light spots; all the light spots on any ring other than the target ring are grouped to obtain a plurality of second light spot combinations, each containing a fourth number of light spots; and the plurality of first light spot combinations and the plurality of second light spot combinations are combined in different ways to obtain the plurality of light spot combinations, where the sum of the third number and the fourth number equals the first number. Illustratively, when the first number is 3 and the number of rings is 2, each light spot combination includes 1 spot from the upper ring and 2 spots from the lower ring, or 2 spots from the upper ring and 1 spot from the lower ring, which is not limited in this embodiment. Because these combinations are drawn from different rings, more light spot combinations are available, providing more matching options.
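Both grouping schemes reduce to standard combinatorics; a minimal sketch, assuming spot labels and ring assignments are already known:

```python
from itertools import combinations

def group_same_ring(ring_spots, first_number):
    """Same-ring grouping: every combination of first_number spots drawn
    from a single ring (e.g. 3 spots from the upper ring)."""
    return list(combinations(ring_spots, first_number))

def group_across_rings(target_ring, other_ring, third_number, fourth_number):
    """Cross-ring grouping: third_number spots from the target ring joined
    with fourth_number spots from another ring (third + fourth = first)."""
    return [a + b
            for a in combinations(target_ring, third_number)
            for b in combinations(other_ring, fourth_number)]

upper = ["u1", "u2", "u3", "u4"]
lower = ["l1", "l2", "l3"]
same = group_same_ring(upper, 3)                 # C(4,3) = 4 combinations
cross = group_across_rings(upper, lower, 1, 2)   # 4 * C(3,2) = 12 combinations
```

The same-ring scheme yields far fewer candidates (4 versus 12 here), which is exactly why it speeds up matching, while the cross-ring scheme offers more options.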
Step S330: and acquiring the distance between adjacent light spots in each light spot combination in the plurality of light spot combinations and the angle between a plurality of line segments formed by the adjacent light spots as the light spot distribution information corresponding to each light spot combination.
Based on this, after the plurality of light spot combinations are obtained, the distance between adjacent light spots in each combination and the angle between the line segments formed by the adjacent light spots are acquired as the light spot distribution information corresponding to that combination. For example, referring to fig. 6, light spot combination 1 includes light spot A, light spot B, and light spot C; the distances AB and BC between adjacent light spots and the angle ∠1 between line segment AB and line segment BC are acquired as the light spot distribution information corresponding to light spot combination 1.
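For a three-spot combination A, B, C as in fig. 6, the distances and the angle between segments AB and BC can be computed as follows (a sketch; the coordinates are assumed image positions):

```python
import math

def spot_distribution_info(a, b, c):
    """Return (AB, BC, angle) for spots A, B, C, where angle is the angle
    in degrees between segments AB and BC at their shared endpoint B."""
    ab = math.dist(a, b)
    bc = math.dist(b, c)
    # vectors from B to A and from B to C
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (ab * bc)
    # clamp before acos to guard against floating-point drift
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return ab, bc, angle

info = spot_distribution_info((0.0, 0.0), (3.0, 0.0), (3.0, 4.0))
# AB = 3, BC = 4, right angle at B
```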
Step S340: and acquiring the light spot combination of which the light spot distribution information simultaneously meets the preset distance distribution condition and the preset angle distribution condition from the plurality of light spot combinations to serve as the first light spot set.
In this embodiment, the preset distance distribution condition may be that the difference between the distance between adjacent light spots and a preset distance is within an error threshold, and the preset angle distribution condition may be that the angle between the two line segments formed by adjacent light spots is not smaller than a preset angle. After the light spot distribution information corresponding to each light spot combination is obtained, whether it simultaneously meets the preset distance distribution condition and the preset angle distribution condition is determined. If target light spot distribution information meets both conditions, the light spot combination corresponding to the target light spot distribution information is acquired as the first light spot set, where the target light spot distribution information is the light spot distribution information corresponding to any light spot combination.
In some embodiments, if multiple pieces of target light spot distribution information simultaneously satisfy the preset distance distribution condition and the preset angle distribution condition, the light spot combination whose distribution information best satisfies both conditions may be acquired as the first light spot set. Specifically, the first light spot set is the light spot combination whose distance between adjacent light spots differs least from the preset distance, and/or whose angle between the two line segments formed by adjacent light spots differs least from the preset angle.
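A sketch of this selection, assuming each candidate combination carries its distribution information as a (distances, angle) pair; the threshold values are illustrative assumptions:

```python
def select_first_spot_set(combos, preset_distance,
                          error_threshold=5.0, preset_angle=20.0):
    """Keep combinations meeting both the preset distance distribution
    condition (every adjacent-spot distance within error_threshold of
    preset_distance) and the preset angle distribution condition (angle not
    smaller than preset_angle), then return the best match, i.e. the one
    with the smallest worst-case distance error."""
    best, best_err = None, None
    for distances, angle in combos:
        err = max(abs(d - preset_distance) for d in distances)
        if err <= error_threshold and angle >= preset_angle:
            if best_err is None or err < best_err:
                best, best_err = (distances, angle), err
    return best

combos = [((10.0, 12.0), 30.0),   # meets both conditions, error 2.0
          ((10.5, 10.2), 45.0),   # meets both conditions, error 0.5
          ((10.0, 10.0), 10.0)]   # angle below preset angle, rejected
best = select_first_spot_set(combos, preset_distance=10.0)
```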
Step S350: and determining a three-dimensional model matched with the first light point set from the three-dimensional models corresponding to the controller as a target three-dimensional model.
Step S360: and mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all light-emitting units on the controller.
In this embodiment, the specific contents in step S350 to step S360 may refer to the contents in the foregoing embodiments, and are not described herein again.
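Step S360's mapping of the target three-dimensional model onto a two-dimensional plane can be sketched with a pinhole projection; the translation-only pose and camera intrinsics below are illustrative assumptions (a full implementation would also apply the model's rotation):

```python
def map_model_to_plane(model_points, translation,
                       focal=500.0, cx=320.0, cy=240.0):
    """Project each light-emitting unit of the target three-dimensional
    model onto the image plane, yielding the preset spots of the second
    image, using the pinhole model u = f*x/z + cx, v = f*y/z + cy."""
    preset_spots = []
    tx, ty, tz = translation
    for x, y, z in model_points:
        xc, yc, zc = x + tx, y + ty, z + tz  # camera-frame coordinates
        preset_spots.append((focal * xc / zc + cx, focal * yc / zc + cy))
    return preset_spots

# two light-emitting units 10 cm apart, placed 1 m in front of the camera
spots = map_model_to_plane([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)],
                           (0.0, 0.0, 1.0))
```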
Step S370: and acquiring a second number of light spots matched with the preset light spot distribution in the second image from a second light spot set in the first image to obtain a third light spot set, wherein the second light spot set comprises other light spots in the first image except the first light spot set.
In some embodiments, referring to fig. 7, step S370 may include the following steps:
step S371: and acquiring visible light spots among the preset light spots in the second image according to the orientation of the target three-dimensional model, wherein a visible light spot is a light spot contained in an image obtained by shooting the controller from the direction directly facing the orientation.
In this embodiment, because the distribution of the light-emitting units on the controller is unique, images of the controller captured from different directions contain different light spots with different distributions. Still referring to fig. 4, fig. 4 shows the distributions of the light-emitting units on the controller observed from different directions (front, back, right, and top); accordingly, the visible light spots in a first image of the controller also depend on the shooting direction.
It will be understood that the target three-dimensional model is obtained from the first light spot set in the first image; therefore, the visible light spots in the second image obtained from the target three-dimensional model also appear at corresponding positions in the first image. Based on this, the direction directly facing the orientation of the target three-dimensional model can be determined as the target shooting direction, and the preset light spots in the second image that would be contained in an image captured from the target shooting direction are acquired as the visible light spots. Illustratively, the black dots in fig. 4 represent visible light spots, and the gray dots represent the other preset light spots.
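One common way to realize such an orientation-based visibility test is a back-face check; the per-emitter normals and the viewing direction here are illustrative assumptions standing in for data derived from the target three-dimensional model:

```python
def visible_preset_spots(preset_spots, normals, view_direction):
    """Keep the preset spots whose emitter faces the camera: with the
    camera looking along view_direction, a spot is visible when its
    outward normal and the viewing direction point toward each other
    (negative dot product), mirroring fig. 4's front/back/right/top views."""
    visible = []
    for spot, n in zip(preset_spots, normals):
        dot = sum(nc * vc for nc, vc in zip(n, view_direction))
        if dot < 0.0:
            visible.append(spot)
    return visible

# camera looks along +z; the first emitter faces the camera, the second away
vis = visible_preset_spots(["front_led", "back_led"],
                           [(0.0, 0.0, -1.0), (0.0, 0.0, 1.0)],
                           (0.0, 0.0, 1.0))
```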
Step S372: and acquiring a second number of light spots matched with the visible light spot distribution in the second image from the second light spot set to obtain a third light spot set.
In some embodiments, the light spot in the first light spot set is used as a first light spot, and the light spot in the second light spot set is used as a second light spot, please refer to fig. 8, wherein step S372 may include the following steps:
step S3721: and acquiring relative position information between each second light spot in the second light spot set and each first light spot in the first light spot set to obtain position distribution information corresponding to each second light spot, wherein the position distribution information comprises the relative position information between each second light spot and all first light spots in the first light spot set.
In this embodiment, since light spots matching the distribution of the visible light spots in the second image are to be obtained from the second light spot set, the relative position information between each second light spot in the second light spot set and each first light spot in the first light spot set can first be acquired to obtain the position distribution information corresponding to each second light spot. The position distribution information includes the relative position information between the second light spot and all first light spots in the first light spot set, and the relative position information includes the distance between the second light spot and each first light spot, and/or the angles between the line segments formed by the second light spot and each first light spot. The specific manner of acquiring the relative position information may refer to the foregoing embodiments and is not described here again.
Step S3722: and acquiring a visible light spot corresponding to the first light spot set from the visible light spots as a first visible light spot.
Step S3723: and acquiring other visible light points except the first visible light point in the visible light points as second visible light points.
The second image contains visible light spots in one-to-one correspondence with the first light spot set, so those visible light spots can be acquired as the first visible light spots, and the remaining visible light spots in the second image can be acquired as the second visible light spots. It will be appreciated that second light spots matching the distribution of the second visible light spots may be added to the third light spot set.
Step S3724: and acquiring preset position information between each second visible light spot and each first visible light spot to obtain preset distribution information corresponding to each second visible light spot, wherein the preset distribution information comprises the preset position information between each second visible light spot and all the first visible light spots.
Based on this, the preset position information between each second visible light spot and each first visible light spot is further acquired, and the preset distribution information corresponding to each second visible light spot is obtained. The preset distribution information comprises preset position information between each second visible light spot and all the first visible light spots, and the preset position information comprises a distance between each second visible light spot and each first visible light spot, and/or an angle between a plurality of line segments formed by the second visible light spot and each first visible light spot. The preset position information may be position information acquired from a preset position database according to the orientation of the target three-dimensional model, or may be position information calculated in real time according to the orientation of the target three-dimensional model, which is not limited in this embodiment.
Step S3725: and acquiring a second light spot of which the position distribution information is matched with the preset position information from the second light spot set to obtain a third light spot set.
In this embodiment, after the preset distribution information of the second visible light spots and the position distribution information of the second light spots are acquired, the position distribution information of each second light spot in the second light spot set is matched against the preset distribution information of each second visible light spot; if a target second light spot matches successfully, the third light spot set is generated based on the target second light spot, where the target second light spot is any light spot in the second light spot set.
Specifically, if the position distribution information and the preset position information meet a specified distribution condition, the position distribution information is determined to match the preset position information. The specified distribution condition may include a specified distance distribution condition and a specified angle distribution condition: the specified distance distribution condition may be that the difference between the distance from the second light spot to a first light spot and the distance from the corresponding second visible light spot to the first visible light spot is within a specified distance threshold, and the specified angle distribution condition may be that the difference between the angles of the line segments formed by the second light spot and the first light spots and the angles of the line segments formed by the corresponding second visible light spot and the first visible light spots is within a specified angle threshold.
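The specified distribution condition can be sketched as follows; the threshold values are illustrative assumptions:

```python
def meets_specified_condition(rel_distances, rel_angles,
                              preset_distances, preset_angles,
                              distance_threshold=3.0, angle_threshold=5.0):
    """Position distribution info matches the preset position info when
    every second-spot-to-first-spot distance is within distance_threshold
    of the corresponding visible-spot distance, and every angle is within
    angle_threshold of the corresponding visible-spot angle."""
    distances_ok = all(abs(d - p) <= distance_threshold
                       for d, p in zip(rel_distances, preset_distances))
    angles_ok = all(abs(a - p) <= angle_threshold
                    for a, p in zip(rel_angles, preset_angles))
    return distances_ok and angles_ok

ok = meets_specified_condition([12.0, 20.5], [31.0], [11.0, 22.0], [29.5])
bad = meets_specified_condition([12.0, 30.0], [31.0], [11.0, 22.0], [29.5])
```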
Step S380: determining pose information of the controller in the real environment based on the first set of light points and the third set of light points.
In this embodiment, the specific content in step 380 may refer to the content in the foregoing embodiments, and is not described herein again.
In this embodiment, the distribution of the second visible light spots in the second image is obtained, the second light spots matching that distribution are then acquired from the second light spot set in the first image to obtain the third light spot set, and finally the pose information of the controller in the real environment is determined based on the first light spot set and the third light spot set. Because only the second light spots in the second light spot set are matched against the distribution of the second visible light spots in the second image, the number of matching operations is reduced, the efficiency of obtaining the third light spot set is improved, and the efficiency of positioning and tracking the controller in real time based on the light spot sets is further improved.
Referring to fig. 9, fig. 9 is a schematic flowchart of a controller positioning method according to still another embodiment of the present application. The positioning method of the controller provided in this embodiment is described in detail below with reference to fig. 9, and may include the following steps:
step S410: the method comprises the steps of obtaining an image containing a controller in a real environment as a first image, wherein the controller is provided with a plurality of light emitting units, and the first image comprises light spots corresponding to the light emitting units.
Step S420: and acquiring a first number of light spots meeting a preset distribution condition from the first image to obtain a first light spot set.
Step S430: and determining a three-dimensional model matched with the first light point set from the three-dimensional models corresponding to the controller as a target three-dimensional model.
Step S440: and mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all light-emitting units on the controller.
In this embodiment, the specific contents in step S410 to step S440 may refer to the contents in the foregoing embodiments, and are not described herein again.
Step S450: and acquiring a light spot matched with the preset light spot distribution in the second image from the second light spot set in the first image, and adding the light spot to the third light spot set.
In some embodiments, when a plurality of light spots in the second light spot set match the preset light spot distribution, any one matching light spot can be selected and added to the third light spot set. This ensures the speed of acquiring light spots for the third light spot set and thus the real-time performance of tracking and positioning the controller.
In other embodiments, when a plurality of light spots in the second light spot set match the preset light spot distribution, the light spot that best matches the preset light spot distribution can be acquired and added to the third light spot set. The principle of determining the best match is similar to that of acquiring the light spot combination whose distribution information best satisfies the preset distance and angle distribution conditions in the foregoing embodiment: the best-matching light spot can be determined from how well the position distribution information matches the preset position information, which is not repeated here. In this way, the target three-dimensional model acquired each time is more accurate, which improves the accuracy of subsequently acquiring light spots matching the preset light spot distribution based on that model, makes the finally obtained third light spot set more accurate, and ensures the accuracy of positioning and tracking the controller in real time.
Step S460: if the number of light spots included in the third light spot set is smaller than the second number, acquiring a three-dimensional model matched with the first light spot set and the third light spot set as the target three-dimensional model, and repeatedly executing the step of mapping the target three-dimensional model to a two-dimensional plane to obtain a second image to the second light spot set in the first image, acquiring one light spot matched with the preset light spot distribution in the second image, and adding the light spot to the third light spot set until the number of light spots included in the third light spot set is equal to the second number.
Based on this, it may be determined whether the number of light spots in the third light spot set is smaller than the second number. If so, the three-dimensional model corresponding to the controller is acquired based on the first light spot set and the third light spot set, and the steps from mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, through acquiring one light spot matching the preset light spot distribution in the second image from the second light spot set in the first image and adding it to the third light spot set, are repeated until the number of light spots in the third light spot set equals the second number. Because only one light spot is acquired and added to the third light spot set at a time, and the corresponding target three-dimensional model is then re-determined based on the third light spot set and the first light spot set, the accuracy of determining the target three-dimensional model is improved, and so is the accuracy of subsequently obtaining the third light spot set from that model. In other words, a sufficient number of more accurately distributed light spots are obtained, so that the pose information of the controller determined based on the light spot sets is more accurate and the accuracy of positioning and tracking the controller is improved.
In some embodiments, if the foregoing steps have been repeated multiple times and the number of light spots in the third light spot set is still less than the second number, it is determined whether the number of repetitions has reached a preset number, or whether the duration of acquiring the third light spot set has reached a preset duration. If so, some light spots in the first image may be blocked by other objects so that a second number of light spots cannot be acquired; in this case, the step of determining the pose information of the controller in the real environment based on the first light spot set and the third light spot set may be executed directly. This avoids the positioning delay caused by spending too long acquiring the third light spot set, so that the head-mounted display device can obtain the pose information of the controller in time and track and position the controller in real time.
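The grow-by-one loop of steps S450 to S460, together with the preset-repetition termination just described, can be sketched as below; fit_model, project_model, and find_best_match are assumed helper callables standing in for the model fitting, two-dimensional mapping, and best-match selection described above:

```python
def grow_third_spot_set(first_set, second_set, second_number,
                        fit_model, project_model, find_best_match,
                        preset_repetitions=10):
    """Add one matching spot per iteration: refit the target
    three-dimensional model from (first set + third set so far), reproject
    it to get the preset spot distribution, pick the best-matching
    remaining spot, and stop once second_number spots are found, no spot
    matches, or the preset number of repetitions is reached (so positioning
    is not delayed by occluded spots)."""
    third_set = []
    remaining = list(second_set)
    for _ in range(preset_repetitions):
        if len(third_set) >= second_number:
            break
        model = fit_model(first_set + third_set)
        preset_spots = project_model(model)
        spot = find_best_match(remaining, preset_spots)
        if spot is None:  # e.g. the remaining spots are all occluded
            break
        third_set.append(spot)
        remaining.remove(spot)
    return third_set

# trivial stand-in helpers, for illustration only
grown = grow_third_spot_set(
    first_set=[0], second_set=[1, 2, 3], second_number=2,
    fit_model=lambda spots: spots,
    project_model=lambda model: model,
    find_best_match=lambda remaining, preset: remaining[0] if remaining else None)
```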
Step S470: determining pose information of the controller in the real environment based on the first set of light points and the third set of light points.
In this embodiment, the specific content in step S470 may refer to the content in the foregoing embodiments, and is not described herein again.
In this embodiment, one light spot matching the preset light spot distribution in the second image is acquired from the second light spot set in the first image each time and added to the third light spot set, and the corresponding target three-dimensional model is then determined based on the third light spot set and the first light spot set. This improves the accuracy of the determined target three-dimensional model, so the distribution of the preset light spots determined from it is more accurate, the light spots matching that distribution are identified more accurately from the first image, the pose information of the controller determined based on the light spot sets is more accurate, and the accuracy of positioning and tracking the controller is improved.
Referring to fig. 10, a block diagram of a positioning apparatus 500 of a controller according to an embodiment of the present disclosure is shown. The apparatus 500 may comprise: an image acquisition module 510, a first spot set acquisition module 520, a three-dimensional model acquisition module 530, a mapping module 540, a third spot set acquisition module 550, and a localization module 560.
The image obtaining module 510 is configured to obtain an image including a controller in a real environment as a first image, where the controller carries a plurality of light emitting units, and the first image includes light spots corresponding to the light emitting units.
The first light spot set obtaining module 520 is configured to obtain a first number of light spots meeting a preset distribution condition from the first image, so as to obtain a first light spot set.
The three-dimensional model obtaining module 530 is configured to determine, from the three-dimensional model corresponding to the controller, a three-dimensional model matched with the first light point set as a target three-dimensional model.
The mapping module 540 is configured to map the target three-dimensional model to a two-dimensional plane to obtain a second image, where the second image includes preset light spots corresponding to all light emitting units on the controller.
The third light spot set obtaining module 550 is configured to obtain a second number of light spots from a second light spot set in the first image, where the second number of light spots is matched with the preset light spot distribution in the second image, to obtain a third light spot set, where the second light spot set includes other light spots in the first image except the first light spot set.
The positioning module 560 is configured to determine pose information of the controller in the real environment based on the first set of light points and the third set of light points.
In some embodiments, the preset distribution condition includes a preset distance distribution condition and a preset angle distribution condition, and the first light spot set obtaining module 520 may include: a light spot combination acquisition unit, a light spot distribution acquisition unit, and a first light spot set acquisition unit. The light spot combination obtaining unit may be configured to group all the light spots in the first image to obtain a plurality of light spot combinations, where the number of light spots included in each of the plurality of light spot combinations is the first number. The spot distribution acquiring unit may be configured to acquire, as the spot distribution information corresponding to each of the plurality of spot combinations, a distance between adjacent spots in each of the plurality of spot combinations and an angle between a plurality of line segments formed by the adjacent spots. The first light spot set acquiring unit may be configured to acquire, as the first light spot set, a light spot combination in which the light spot distribution information simultaneously satisfies the preset distance distribution condition and the preset angle distribution condition, from the plurality of light spot combinations.
In this manner, the light spot combination obtaining unit may be specifically configured to, if the plurality of light emitting units carried on the controller are arranged in a plurality of rings, group all the light spots on the first image corresponding to any one of the rings in the plurality of rings, so as to obtain the plurality of light spot combinations.
In some embodiments, the third light spot set acquisition module 550 may include: a visible light spot acquisition unit and a third light spot set acquisition unit. The visible light point obtaining unit may be configured to obtain a visible light point in the preset light point in the second image according to an orientation of the three-dimensional model, where the visible light point is a light point included in an image obtained by shooting the controller from a direction directly facing the orientation. The third light spot set obtaining unit may be configured to obtain, from the second light spot set, a second number of light spots matching the distribution of the visible light spots in the second image, to obtain the third light spot set.
In this manner, the light spot in the first light spot set is used as the first light spot, the light spot in the second light spot set is used as the second light spot, and the third light spot set obtaining unit may be specifically configured to: acquiring relative position information between each second light spot in the second light spot set and each first light spot in the first light spot set to obtain position distribution information corresponding to each second light spot, wherein the position distribution information comprises the relative position information between each second light spot and all first light spots in the first light spot set; acquiring a visible light spot corresponding to the first light spot set from the visible light spots as a first visible light spot; acquiring other visible light points except the first visible light point in the visible light points as second visible light points; acquiring preset position information between each second visible light spot and each first visible light spot to obtain preset distribution information corresponding to each second visible light spot, wherein the preset distribution information comprises the preset position information between each second visible light spot and all first visible light spots; and acquiring a second light spot of which the position distribution information is matched with the preset position information from the second light spot set to obtain a third light spot set.
In some embodiments, the positioning device 500 of the controller may further include a size determination module. The size judgment module may be configured to acquire a second number of light spots in the second image, which are matched with the preset light spot distribution in the second image, from the second light spot set in the first image, and after a third light spot set is obtained, judge whether the size of each light spot in the third light spot set meets a preset size condition, and if the size of a target light spot does not meet the preset size condition, remove the target light spot from the third light spot set, where the target light spot is any light spot in the third light spot set. The positioning module 560 may be specifically configured to determine pose information of the controller in the real environment based on the first light spot set and the third light spot set after the target light spot is eliminated.
In other embodiments, the third light spot set acquisition module 550 may be specifically configured to: acquire, from the second light spot set in the first image, one light spot matched with the preset light spot distribution in the second image, and add the light spot to the third light spot set; and if the number of light spots included in the third light spot set is smaller than the second number, acquire a three-dimensional model matched with the first light spot set and the third light spot set as the target three-dimensional model, and repeatedly execute the steps from mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, through acquiring, from the second light spot set in the first image, one light spot matched with the preset light spot distribution in the second image and adding the light spot to the third light spot set, until the number of light spots included in the third light spot set is equal to the second number.
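This iterative variant can be sketched as a loop that alternates model re-selection and single-spot matching. In the sketch below, `refit` and `match_one` are placeholders (assumed, not defined in the patent) for the model-selection and matching steps described in the embodiments:

```python
def grow_third_set(first_set, candidates, second_number, match_one, refit):
    # Add one matched light spot at a time; after each addition, re-select
    # the target 3D model against the enlarged set and re-project it,
    # until the third set reaches `second_number` spots or no further
    # match can be found.
    third_set = []
    while len(third_set) < second_number:
        model = refit(first_set, third_set)             # updated target model
        spot = match_one(model, candidates, third_set)  # one matched spot
        if spot is None:
            break                                       # no further match
        third_set.append(spot)
    return third_set
```

The loop terminates either when the third set holds the second number of spots or when the matching step yields nothing new.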
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
In summary, in the solution provided by the embodiments of the present application, an image containing a controller in a real environment is acquired as a first image, where the controller carries a plurality of light emitting units and the first image includes light spots corresponding to the light emitting units; a first number of light spots meeting a preset distribution condition are acquired from the first image to obtain a first light spot set; a three-dimensional model matching the first light spot set is determined, from the three-dimensional models corresponding to the controller, as a target three-dimensional model; the target three-dimensional model is mapped to a two-dimensional plane to obtain a second image, where the second image includes preset light spots corresponding to all light emitting units on the controller; a second number of light spots matching the preset light spot distribution in the second image are acquired from a second light spot set in the first image to obtain a third light spot set, where the second light spot set includes the light spots in the first image other than the first light spot set; and pose information of the controller in the real environment is determined based on the first light spot set and the third light spot set. In this way, a sufficient number of light spots formed by the light emitting units of the controller can be accurately identified, so that the pose information of the controller determined based on these light spot sets is more accurate, improving the accuracy of positioning and tracking the controller.
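The final step, determining pose from the matched light spot sets, would in practice be a perspective-n-point (PnP) solve against the 3D model. As a simplified, self-contained illustration of the underlying idea only, the following recovers a 2D rigid transform between projected model points and detected spots with the Kabsch/Procrustes method; the actual pipeline would estimate a full 3D pose:

```python
import numpy as np

def rigid_pose_2d(model_pts, image_pts):
    # Recover the rotation R and translation t that best align the
    # projected model points to the detected spots (least squares),
    # via the Kabsch/Procrustes method.
    P = np.asarray(model_pts, float)
    Q = np.asarray(image_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given spots that are the model points rotated and shifted, the method recovers that rotation and translation; with noisy detections it returns the least-squares best fit.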
A head-mounted display device provided by the present application will be described with reference to the drawings.
Referring to fig. 11, fig. 11 is a block diagram illustrating the structure of a head-mounted display device 600 according to an embodiment of the present application, and the controller positioning method provided by the embodiments of the present application may be performed by the head-mounted display device 600. The head-mounted display device 600 may be a device capable of running applications.
The head-mounted display device 600 in the embodiments of the present application may include one or more of the following components: a processor 601, a memory 602, and one or more application programs, where the one or more application programs may be stored in the memory 602 and configured to be executed by the one or more processors 601, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
The processor 601 may include one or more processing cores. The processor 601 connects the various parts of the head-mounted display device 600 using various interfaces and lines, and performs the various functions of the head-mounted display device 600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 602 and calling data stored in the memory 602. Optionally, the processor 601 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 601 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 601 and may instead be implemented by a separate communication chip.
The memory 602 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 602 may be used to store instructions, programs, code sets, or instruction sets. The memory 602 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the head-mounted display device 600 during use (such as the various correspondences described above), and the like.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 12, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 700 has stored therein program code that can be called by a processor to perform the methods described in the above-described method embodiments.
The computer-readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 700 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 700 has storage space for program code 710 for performing any of the method steps of the methods described above. The program code may be read from or written into one or more computer program products. The program code 710 may be, for example, compressed in a suitable form.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of positioning a controller, the method comprising:
acquiring an image containing a controller in a real environment as a first image, wherein the controller is provided with a plurality of light-emitting units, and the first image comprises light spots corresponding to the light-emitting units;
acquiring a first number of light spots which accord with a preset distribution condition from the first image to obtain a first light spot set;
determining a three-dimensional model matched with the first light point set from three-dimensional models corresponding to the controller to serve as a target three-dimensional model;
mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all light-emitting units on the controller;
acquiring a second number of light spots matched with the preset light spot distribution in the second image from a second light spot set in the first image to obtain a third light spot set, wherein the second light spot set comprises other light spots in the first image except the first light spot set;
determining pose information of the controller in the real environment based on the first set of light points and the third set of light points.
2. The method according to claim 1, wherein the preset distribution condition comprises a preset distance distribution condition and a preset angle distribution condition, and the obtaining a first number of light points meeting the preset distribution condition from the first image to obtain a first light point set comprises:
grouping all light spots in the first image to obtain a plurality of light spot combinations, wherein the number of the light spots contained in each light spot combination in the plurality of light spot combinations is the first number;
acquiring the distance between adjacent light spots in each light spot combination in the multiple light spot combinations and the angle between a plurality of line segments formed by the adjacent light spots as light spot distribution information corresponding to each light spot combination;
and acquiring the light spot combination of which the light spot distribution information simultaneously meets the preset distance distribution condition and the preset angle distribution condition from the plurality of light spot combinations to serve as the first light spot set.
3. The method of claim 2, wherein the grouping of all light points included in the first image results in a plurality of light point combinations, including:
if the plurality of light emitting units carried on the controller are arranged into a plurality of rings, grouping all light spots in the first image corresponding to any one of the plurality of rings to obtain the plurality of light spot combinations.
4. The method according to any one of claims 1 to 3, wherein said obtaining a second number of light points from a second light point set in the first image, which matches the preset light point distribution in the second image, to obtain a third light point set comprises:
acquiring a visible light spot in the preset light spot in the second image according to the orientation of the target three-dimensional model, wherein the visible light spot is a light spot contained in an image obtained by shooting the controller from the direction opposite to the orientation;
and acquiring a second number of light spots matched with the visible light spot distribution in the second image from the second light spot set to obtain a third light spot set.
5. The method of claim 4, wherein the light spots in the first light spot set are taken as first light spots and the light spots in the second light spot set are taken as second light spots, and the acquiring, from the second light spot set, a second number of light spots matched with the visible light spot distribution in the second image to obtain the third light spot set comprises:
acquiring relative position information between each second light spot in the second light spot set and each first light spot in the first light spot set to obtain position distribution information corresponding to each second light spot, wherein the position distribution information comprises the relative position information between each second light spot and all first light spots in the first light spot set;
acquiring a visible light spot corresponding to the first light spot set from the visible light spots as a first visible light spot;
acquiring other visible light points except the first visible light point in the visible light points as second visible light points;
acquiring preset position information between each second visible light spot and each first visible light spot to obtain preset distribution information corresponding to each second visible light spot, wherein the preset distribution information comprises the preset position information between each second visible light spot and all first visible light spots;
and acquiring, from the second light spot set, the second light spots of which the position distribution information is matched with the preset distribution information, to obtain the third light spot set.
6. The method of claim 1, further comprising, after said obtaining a second number of light points from a second light point set in the first image that match the preset light point distribution in the second image, resulting in a third light point set:
judging whether the size of each light spot in the third light spot set meets a preset size condition;
if the size of a target light spot does not meet the preset size condition, removing the target light spot from the third light spot set, wherein the target light spot is any light spot in the third light spot set;
the determining pose information of the controller in the real environment based on the first set of light points and the third set of light points comprises:
and determining the pose information of the controller in the real environment based on the first light spot set and the third light spot set after the target light spot is eliminated.
7. The method according to any one of claims 1 to 3, wherein said obtaining a second number of light points from a second light point set in the first image, which matches the preset light point distribution in the second image, to obtain a third light point set comprises:
acquiring a light spot matched with the preset light spot distribution in the second image from a second light spot set in the first image, and adding the light spot to the third light spot set;
if the number of light spots included in the third light spot set is smaller than the second number, acquiring a three-dimensional model matched with the first light spot set and the third light spot set as the target three-dimensional model, and repeatedly executing the steps from mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, through acquiring, from the second light spot set in the first image, one light spot matched with the preset light spot distribution in the second image and adding the light spot to the third light spot set, until the number of light spots included in the third light spot set is equal to the second number.
8. A positioning device for a controller, the device comprising:
the image acquisition module is used for acquiring an image containing a controller in a real environment as a first image, the controller is provided with a plurality of light-emitting units, and the first image comprises light spots corresponding to the light-emitting units;
the first light spot set acquisition module is used for acquiring a first number of light spots meeting a preset distribution condition from the first image to obtain a first light spot set;
the three-dimensional model acquisition module is used for determining a three-dimensional model matched with the first light point set from a three-dimensional model corresponding to the controller as a target three-dimensional model;
the mapping module is used for mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, and the second image comprises preset light spots corresponding to all light-emitting units on the controller;
a third light spot set obtaining module, configured to obtain, from a second light spot set in the first image, a second number of light spots that are matched with the preset light spot distribution in the second image, to obtain a third light spot set, where the second light spot set includes other light spots in the first image except the first light spot set;
a positioning module configured to determine pose information of the controller in the real environment based on the first light spot set and the third light spot set.
9. A head-mounted display device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to perform the method according to any of claims 1-7.
CN202210074244.9A 2022-01-21 2022-01-21 Controller positioning method and device, head-mounted display equipment and storage medium Pending CN114549285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210074244.9A CN114549285A (en) 2022-01-21 2022-01-21 Controller positioning method and device, head-mounted display equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114549285A true CN114549285A (en) 2022-05-27

Family

ID=81671054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210074244.9A Pending CN114549285A (en) 2022-01-21 2022-01-21 Controller positioning method and device, head-mounted display equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114549285A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082520A (en) * 2022-06-14 2022-09-20 歌尔股份有限公司 Positioning tracking method and device, terminal equipment and computer readable storage medium
WO2024041202A1 (en) * 2022-08-22 2024-02-29 华为技术有限公司 Image processing method, calibration system, and related equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination