Detailed Description
In order to enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
A head-mounted display device is a display device that can be worn on a user's head and can achieve effects such as VR, AR, and MR. The head-mounted display device can be used together with a handle controller: a virtual reality, augmented reality, or mixed reality scene is presented on the head-mounted display device, and the user interacts with elements in the scene through the handle controller held in the hand.
In the related art, a handle controller carries light-emitting units, images of the controller in motion are captured by an image acquisition device, and the posture and position information of the handle controller are solved inversely from the positions of the light-emitting points corresponding to the light-emitting units in the images, thereby realizing real-time tracking of the handle controller. However, the real environment in which the handle controller is located generally contains interference light sources, which makes the obtained light-emitting point positions inaccurate and, in turn, makes the inversely solved posture and position information of the handle controller inaccurate.
In view of the foregoing, the inventor proposes a positioning method and apparatus for a controller, a head-mounted display device, and a storage medium: a first light point set is obtained from a first image containing the controller, a second image is obtained by mapping a target three-dimensional model of the controller based on the first light point set, a third light point set is obtained based on the second image, and finally the pose information of the controller is determined according to the first light point set and the third light point set. This is described in detail below.
The application environment of the positioning method of the controller provided by the embodiment of the application is described below.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the application, where the application scenario includes a positioning system 10 of a controller. The positioning system 10 may include a controller 110 and a head-mounted display device 120, where the controller 110 and the head-mounted display device 120 are connected through a wireless or wired network, so as to implement data transmission between the controller 110 and the head-mounted display device 120 based on the network connection; the transmitted data includes, but is not limited to, audio, video, text, images, and the like. The number of controllers 110 may be 1 or more, which is not limited in this embodiment.
In some embodiments, the head-mounted display device 120 may capture an image of the controller in the real environment through its own image capturing device, obtain the first light point set and the third light point set from the image, and determine the pose information of the controller in the real environment based on the first light point set and the third light point set, so as to implement positioning of the controller.
In other embodiments, the head-mounted display device 120 may capture an image of the controller in the real environment through its own image capturing device and send the image to a server; the server obtains the first light point set and the third light point set from the image, determines the pose information of the controller in the real environment based on the first light point set and the third light point set, and finally feeds the determined pose information back to the head-mounted display device 120 to implement positioning of the controller. The server includes, but is not limited to, an individual server, a server cluster, a local server, a cloud server, and the like.
Referring to fig. 2, fig. 2 is a schematic diagram of a positioning method for a controller according to an embodiment of the application. The positioning method of the controller according to the embodiment of the present application will be described in detail with reference to fig. 2. The positioning method of the controller may include the steps of:
Step S210, acquiring an image containing a controller in a real environment as a first image, wherein the controller carries a plurality of light emitting units, and the first image includes light spots corresponding to the light emitting units.
In this embodiment, taking the case where the head-mounted display device analyzes and calculates the pose information of the controller as an example, the head-mounted display device may capture an image containing the controller in the real environment as the first image through an internal image capturing device or an external image capturing device connected to itself. The image capturing device includes, but is not limited to, a monocular camera, a binocular camera, a multi-view camera, a black-and-white camera, a color camera, and the like. The controller carries a plurality of light emitting units, such as the light emitting units 11 to 16 and 21 to 26 in fig. 3; of course, the number of light emitting units is not limited to that in the figure and may be preset, for example 10, 16, or 20, which is not limited in this embodiment. The light emitting units may be light emitting diodes. The spacings between different pairs of adjacent light emitting units are unequal, and two adjacent light emitting units on the same ring are arranged alternately high and low, so that the arrangement of the light emitting units on the controller does not repeat when the controller is photographed from different angles. This distinguishability means that the light spots in first images acquired from different angles are arranged differently, improving the efficiency and accuracy of light spot acquisition. The shape of a light emitting unit includes, but is not limited to, a circle, a triangle, a five-pointed star, and the like, and its color may be any color, such as red, yellow, blue, or red-blue, which is not limited in this embodiment.
In some embodiments, an image containing the controller in the real environment may be acquired as the first image by the image acquisition device every preset duration. The preset duration may be a fixed value, for example, 100 ms, 500 ms, or 1 s, or may be adjusted according to different application scenarios, which is not limited in this embodiment.
Step S220, a first number of light spots meeting preset distribution conditions are acquired from the first image, and a first light spot set is obtained.
In this embodiment, the real environment in which the controller is located may contain other interference light sources, so the obtained first image may contain both light spots formed by the light emitting units on the controller and interference light spots formed by the interference light sources. Based on this, in order to avoid misidentifying the interference light spots, that is, in order to accurately identify all the light spots in the first image, a first number of light spots may be obtained from the first image according to a preset distribution condition to obtain a first light spot set, so that other light spots can then be found in the first image according to the first light spot set.
It will be appreciated that in the PnP (Perspective-n-Point) algorithm, at least 3 points are required to solve for the pose information of the controller from a single image. Based on this, the first number may be preset to 3, and correspondingly, the preset distribution condition may be that the included angle between the two line segments formed by the 3 light spots is greater than a preset degree (for example, 120 degrees) and that the lengths of the two line segments match preset lengths.
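As an illustration of such a preset distribution condition, the following sketch checks whether a candidate triple of light spots forms two segments with a sufficiently wide included angle and lengths close to a preset length. The reference length `ref_len` and tolerance `tol` are hypothetical values chosen for illustration, not values specified in this application.

```python
import math

def angle_deg(p0, p1, p2):
    """Angle at p1, in degrees, between segments p1-p0 and p1-p2."""
    v1 = (p0[0] - p1[0], p0[1] - p1[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def meets_distribution(spots, min_angle=120.0, ref_len=50.0, tol=10.0):
    """Check whether 3 candidate spots (pixel coordinates) satisfy the
    preset distribution condition: both segment lengths match the
    reference length within tol, and the included angle exceeds min_angle."""
    a, b, c = spots
    len_ab = math.hypot(b[0] - a[0], b[1] - a[1])
    len_bc = math.hypot(c[0] - b[0], c[1] - b[1])
    if abs(len_ab - ref_len) > tol or abs(len_bc - ref_len) > tol:
        return False
    return angle_deg(a, b, c) > min_angle
```

For example, `meets_distribution([(0, 0), (50, 0), (100, 5)])` accepts a nearly collinear triple of correctly spaced spots, while a right-angle triple is rejected.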
Step S230, determining a three-dimensional model matched with the first light point set from the three-dimensional models corresponding to the controller as a target three-dimensional model.
Based on the above, after the first light point set is obtained, the pose information of the controller in the first image can be preliminarily estimated using a P3P (Perspective-3-Point) method to obtain estimated pose information, and then a three-dimensional model under the estimated pose information is obtained, based on a pre-stored three-dimensional model of the controller, as the target three-dimensional model. Specifically, the three-dimensional coordinates, in a world coordinate system, of the light emitting units corresponding to the first light spot set are obtained; the pixel coordinates of the first light spot set in the first image under a pixel coordinate system are obtained; a rotation matrix and a translation matrix from the world coordinate system to the pixel coordinate system are obtained according to the three-dimensional coordinates, the pixel coordinates, and the geometric relations among the light spots in the first light spot set; and the estimated pose information of the controller is obtained based on the rotation matrix and the translation matrix.
Step S240, mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all the light emitting units on the controller.
In this embodiment, after the target three-dimensional model is obtained, it may be mapped to a two-dimensional plane to obtain a second image. Understandably, the second image obtained by mapping contains the preset light points corresponding to all the light emitting units on the controller.
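The mapping from the target three-dimensional model to the two-dimensional plane is, in essence, a standard pinhole projection of the model's light-emitting-unit coordinates through the estimated pose. A minimal sketch is shown below; the rotation matrix `R`, translation `t`, and intrinsic matrix `K` are illustrative assumptions, not values given in this application.

```python
import numpy as np

def project_points(pts_world, R, t, K):
    """Project 3-D model points into the image plane: u = K (R X + t),
    followed by the perspective divide by depth."""
    pts_cam = R @ pts_world.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ pts_cam                              # camera -> homogeneous pixels
    return (uv[:2] / uv[2]).T                     # perspective divide

# Illustrative values (assumptions): identity rotation, camera 5 units away,
# focal length 100 px, principal point at (64, 64).
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0, 0.0, 1.0]])
pixels = project_points(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]), R, t, K)
```

A model point on the optical axis projects to the principal point, and off-axis points shift in proportion to focal length over depth, which is exactly how the preset light points in the second image are laid out.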
Referring to fig. 4, fig. 4 is a schematic diagram showing the arrangement of the preset light spots in the second image mapped from the controller in fig. 3 under different pose information. Of course, the aforementioned estimated pose information includes, but is not limited to, the several poses shown in fig. 4.
Step S250, obtaining, from a second light spot set in the first image, a second number of light spots matching the distribution of the preset light spots in the second image to obtain a third light spot set, wherein the second light spot set includes the light spots in the first image other than the first light spot set.
Based on this, the light spots in the first image other than the first light spot set are acquired as the second light spot set; it can be understood that the second light spot set includes both the remaining controller light spots and the interference light spots. Since the second image is mapped from the target three-dimensional model, and the distribution of each light emitting unit of the target three-dimensional model in three-dimensional space is known, the distribution of the preset light point corresponding to each light emitting unit in the second image is also known. Therefore, based on the preset distribution of the preset light points in the second image, a second number of light spots matching that distribution can be acquired from the second light spot set to obtain the third light spot set.
In some embodiments, the second number may be a preset number, for example, 4, 5, 8, 10, or 15, which is not limited in this example. The second number can be adjusted according to the required positioning accuracy: when the accuracy requirement is higher, the second number can be set to a larger value so that the controller is positioned more accurately; when the timeliness requirement is higher, the second number can be set to a smaller value so that the calculation speed is improved, the timeliness of positioning is ensured, and positioning delay is avoided.
In other embodiments, the second number may be determined based on the target three-dimensional model and the pose information of the controller: specifically, the number of visible light points in the second image mapped from the target three-dimensional model under the pose information is obtained, and the difference between the number of visible light points and the first number is taken as the second number. In this way, as many visible light spots as possible are obtained, the pose information of the controller is determined according to all of them, and the accuracy of positioning the controller is guaranteed. Further, if the number of obtained visible light points is still smaller than the second number after a preset duration, the visible light points obtained so far are used as the third light point set and no further visible light points are sought. This avoids the problem that, when the light emitting units on the controller are occluded by other objects, visible light points that should appear in the first image do not appear and the time spent searching for them grows unboundedly; that is, the real-time performance of positioning the controller is effectively guaranteed, the head-mounted display device can obtain the pose information of the controller in time, and real-time tracking and positioning of the controller are realized.
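The early-stop behavior described above can be sketched as follows. The helper, the time budget `timeout_s`, and the `match_fn` predicate are hypothetical names introduced for illustration only.

```python
import time

def collect_visible_spots(candidates, second_number, timeout_s=0.05,
                          match_fn=None):
    """Collect up to second_number matching spots from candidates; if the
    time budget expires first (e.g. some LEDs are occluded and their spots
    never appear), return whatever has been found so far."""
    found = []
    deadline = time.monotonic() + timeout_s
    for spot in candidates:
        if len(found) >= second_number or time.monotonic() > deadline:
            break
        if match_fn is None or match_fn(spot):
            found.append(spot)
    return found
```

With a full candidate list the function stops as soon as the second number is reached; with too few candidates it simply returns the partial set, keeping the positioning loop real-time.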
Step S260, determining pose information of the controller in the real environment based on the first light point set and the third light point set.
In this embodiment, after the first light point set and the third light point set are acquired, a PnP (Perspective-n-Point) method may be used, for example EPnP (Efficient Perspective-n-Point), DLS (Direct Least-Squares), or BA (Bundle Adjustment), which is not limited in this embodiment, to solve inversely from the first light point set and the third light point set, so as to acquire the pose information of the controller in the real environment. The pose information may include one or both of position information and posture information; the position information may be represented by coordinates, and the posture information may be represented by angles. Thus, based on the first images acquired at every preset interval, the pose information of the controller at different moments is acquired, realizing real-time tracking and positioning of the controller.
In some embodiments, an IMU (Inertial Measurement Unit) sensor can be built into the controller. The pose information of the controller acquired by the IMU is taken as first pose information, the pose information determined based on the first light point set and the third light point set is taken as second pose information, and the first pose information is fused with the second pose information to obtain the actual pose information of the controller. Fusing the pose information determined by the IMU sensor with the pose information determined from the light point sets can improve the accuracy of the finally obtained pose information, facilitating real-time tracking and positioning of the controller by the head-mounted display device.
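The fusion scheme is not specified in this embodiment; one common, simple choice is a complementary filter. The sketch below is an assumption-laden illustration: the pose is represented as a flat (x, y, z, yaw) tuple and blended with a single weight `alpha`, whereas production trackers typically run an extended Kalman filter over quaternions.

```python
def fuse_pose(imu_pose, optical_pose, alpha=0.98):
    """Complementary-filter style blend of two pose estimates.
    alpha weights the high-rate IMU estimate; (1 - alpha) weights the
    lower-rate optical (light-point) estimate that corrects drift.
    Assumption: both poses are (x, y, z, yaw) tuples in the same frame."""
    return tuple(alpha * i + (1.0 - alpha) * o
                 for i, o in zip(imu_pose, optical_pose))
```

With `alpha` near 1, the optical estimate acts as a slow drift correction on top of the responsive IMU estimate.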
In some embodiments, because some interference light sources in the real environment may be positioned very similarly to the light emitting units on the controller, in order to avoid misidentifying the interference light points they form, after the third light point set (a second number of light points matching the distribution of the preset light points in the second image) is obtained from the second light point set in the first image, it is further determined whether the size of each light point in the third light point set meets a preset size condition. If a target light point does not meet the preset size condition, it is rejected from the third light point set, where the target light point is any light point in the third light point set. Finally, the pose information of the controller in the real environment is determined based on the first light point set and the third light point set after the rejection. By introducing this size judgment condition, misidentification of interference light points at similar positions is avoided and the accuracy of the obtained light point set is improved, so the pose information of the controller determined from the light point sets is more accurate; that is, the accuracy of positioning and tracking the controller is improved.
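A minimal sketch of such a size-based rejection is shown below; representing each detected light point as a dict with a pixel `area` field, and the specific area bounds, are assumptions made for illustration.

```python
def filter_by_size(spots, min_area=4.0, max_area=100.0):
    """Keep only light points whose pixel area lies inside the preset
    size range; interference sources often image noticeably larger or
    smaller than the controller's LEDs."""
    return [s for s in spots if min_area <= s["area"] <= max_area]
```

Points whose blob area falls outside the preset band are dropped before the final pose solve.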
In some embodiments, the acquired first image may contain a plurality of controllers; for example, if the user holds one controller in each hand, the first image contains 2 controllers, such as a first controller and a second controller. The light emitting units on the 2 controllers are arranged in a mirror-image manner, that is, the arrangements of the light emitting units on the 2 controllers are not the same. Therefore, the first light point set corresponding to the first controller and the first light point set corresponding to the second controller may be determined according to the distribution of the light points in the first image; the pose information of the first controller in the real environment is then determined based on the first light point set corresponding to the first controller, and the pose information of the second controller is determined according to the first light point set corresponding to the second controller, which is not repeated here. Thus, arranging the light emitting units differently on different controllers helps the head-mounted display device determine the first light point set corresponding to each controller according to the distribution of the light points in the first image, realizing tracking and positioning of each individual controller.
In this embodiment, the target three-dimensional model is determined from the first light spot set, the second image obtained by mapping the target three-dimensional model to the two-dimensional plane is then acquired, and the distribution of the preset light spots in the second image is obtained. Finally, based on that distribution, a sufficient number of light spots formed by the light emitting units of the controller are accurately identified from the first image, so that the pose information of the controller determined from these light spot sets is more accurate; that is, the accuracy of positioning and tracking the controller is improved.
Referring to fig. 5, fig. 5 is a schematic diagram of a positioning method for a controller according to another embodiment of the application. The positioning method of the controller according to this embodiment is described in detail below with reference to fig. 5. The positioning method of the controller may include the steps of:
Step S310, an image containing a controller in a real environment is acquired as a first image, wherein the controller carries a plurality of light emitting units and the first image includes light spots corresponding to the light emitting units.
In this embodiment, the specific content in step S310 may refer to the content in the foregoing embodiment, which is not described herein.
Step S320, grouping all light spots in the first image to obtain a plurality of light spot combinations, wherein each light spot combination in the plurality of light spot combinations comprises the first number of light spots.
In this embodiment, all the light spots in the first image are grouped to obtain a plurality of light spot combinations, where the number of light spots in each combination is the first number. The first number of light spots in each combination may be adjacent (for example, 3 adjacent light spots form 1 combination), or they may be non-adjacent; different grouping modes may be preset according to different requirements, which is not limited in this embodiment.
In some embodiments, if the plurality of light emitting units carried on the controller are arranged in a plurality of rings, all the light spots in the first image corresponding to any one of the rings are grouped to obtain the plurality of light spot combinations. For example, when the first number is 3 and the number of rings is 2, each light spot combination includes 3 light spots on the upper ring or 3 light spots on the lower ring. Grouping on the same ring limits the number of light spot combinations obtained, which facilitates the subsequent matching of the first light spot set among the combinations, improves the matching speed of the first light spot set, and further improves the efficiency of positioning and tracking the controller.
In other embodiments, if the plurality of light emitting units carried on the controller are arranged in a plurality of rings, all the light spots on a target ring among the rings are grouped to obtain a plurality of first light spot combinations, each containing a third number of light spots; all the light spots on any ring other than the target ring are grouped to obtain a plurality of second light spot combinations, each containing a fourth number of light spots; and the first light spot combinations and the second light spot combinations are combined in different ways to obtain the plurality of light spot combinations, where the sum of the third number and the fourth number equals the first number. Illustratively, when the first number is 3 and the number of rings is 2, each light spot combination includes 1 light spot on the upper ring and 2 on the lower ring, or 2 on the upper ring and 1 on the lower ring, which is not limited in this embodiment. Since the light spot combinations are drawn from different rings, more combinations are obtained, providing more matching options.
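Both grouping modes reduce to enumerating fixed-size combinations, which can be sketched with the standard library as follows (the function names and default counts are illustrative, not taken from this application):

```python
from itertools import combinations, product

def group_single_ring(ring_spots, first_number=3):
    """Grouping mode 1: every combination of first_number spots drawn
    from a single ring."""
    return list(combinations(ring_spots, first_number))

def group_across_rings(ring_a, ring_b, third_number=1, fourth_number=2):
    """Grouping mode 2: mix third_number spots from one ring with
    fourth_number spots from the other; third_number + fourth_number
    equals the first number."""
    return [part_a + part_b
            for part_a, part_b in product(combinations(ring_a, third_number),
                                          combinations(ring_b, fourth_number))]
```

For 4 spots on one ring, mode 1 yields C(4, 3) = 4 combinations; mixing 2 upper-ring spots with 3 lower-ring spots as 1 + 2 yields C(2, 1) x C(3, 2) = 6, illustrating why cross-ring grouping offers more matching options.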
Step S330, the distance between adjacent light spots in each light spot combination in the plurality of light spot combinations and the angle between a plurality of line segments formed by the adjacent light spots are obtained as light spot distribution information corresponding to each light spot combination.
Based on this, after the plurality of light spot combinations are obtained, the distance between adjacent light spots in each combination and the angle between the line segments formed by the adjacent light spots are obtained as the light spot distribution information corresponding to that combination. For example, referring to fig. 6, a light spot combination 1 includes a light spot A, a light spot B, and a light spot C; the distances AB and BC between adjacent light spots and the angle ∠1 between the line segment AB and the line segment BC are obtained as the light spot distribution information corresponding to the light spot combination 1.
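Computing this distribution information for a 3-spot combination can be sketched as follows (a minimal illustration; the tuple layout is an assumption):

```python
import math

def spot_distribution(combo):
    """Return (AB, BC, angle ABC in degrees) for a 3-spot combination,
    i.e. the distances between adjacent spots and the angle between the
    two line segments they form at the middle spot."""
    (ax, ay), (bx, by), (cx, cy) = combo
    ab = math.hypot(bx - ax, by - ay)
    bc = math.hypot(cx - bx, cy - by)
    dot = (ax - bx) * (cx - bx) + (ay - by) * (cy - by)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (ab * bc)))))
    return ab, bc, ang
```

For the combination A = (0, 0), B = (3, 0), C = (3, 4), the distances are 3 and 4 with a right angle at B.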
Step S340, obtaining, from the plurality of light spot combinations, the light spot combination in which the light spot distribution information simultaneously meets the preset distance distribution condition and the preset angle distribution condition, as the first light spot set.
In this embodiment, the preset distance distribution condition may be that the difference between the distance between adjacent light spots and a preset distance is within an error threshold, and the preset angle distribution condition may be that the angle between the two line segments formed by the adjacent light spots is not smaller than a preset angle. After the light spot distribution information corresponding to each of the light spot combinations is obtained, it is judged whether that information meets the preset distance distribution condition and the preset angle distribution condition at the same time; if target light spot distribution information meets both conditions simultaneously, the light spot combination corresponding to the target light spot distribution information is taken as the first light spot set, where the target light spot distribution information is the light spot distribution information corresponding to any one of the combinations.
In some embodiments, if multiple pieces of target light spot distribution information simultaneously meet the preset distance distribution condition and the preset angle distribution condition, the light spot combination corresponding to the target light spot distribution information that best meets both conditions may be taken as the first light spot set. Specifically, among the multiple pieces of target light spot distribution information, those whose distances between adjacent light spots differ least from the preset distance are considered first, and among them the light spot combination whose angle between the two line segments differs least from the preset angle is taken as the first light spot set.
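Selecting the best-matching combination under this two-level criterion can be sketched as follows (the data layout and thresholds are illustrative assumptions):

```python
def best_combination(combos_info, preset_dist, preset_angle, dist_tol):
    """combos_info: list of (combo, (d1, d2, angle)) entries.
    Keep entries meeting both preset conditions, then prefer the smallest
    total distance deviation from preset_dist and, as a tie-breaker, the
    smallest angle deviation from preset_angle. Returns None if nothing
    passes."""
    passing = []
    for combo, (d1, d2, ang) in combos_info:
        if (abs(d1 - preset_dist) <= dist_tol
                and abs(d2 - preset_dist) <= dist_tol
                and ang >= preset_angle):
            dist_err = abs(d1 - preset_dist) + abs(d2 - preset_dist)
            passing.append((dist_err, abs(ang - preset_angle), combo))
    if not passing:
        return None
    return min(passing)[2]
```

Sorting on the (distance error, angle error) pair realizes "distance difference smallest first, then angle difference smallest" from the paragraph above.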
Step S350, determining a three-dimensional model matched with the first light point set from the three-dimensional models corresponding to the controller as a target three-dimensional model.
Step S360, mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all the light emitting units on the controller.
In this embodiment, the specific content in step S350 to step S360 may refer to the content in the foregoing embodiment, and will not be described herein.
Step S370, obtaining a second number of light spots matched with the preset light spot distribution in the second image from a second light spot set in the first image to obtain a third light spot set, wherein the second light spot set comprises other light spots except the first light spot set in the first image.
In some embodiments, referring to fig. 7, step S370 may include the steps of:
Step S371, according to the orientation of the target three-dimensional model, obtaining the visible light points among the preset light points in the second image, wherein the visible light points are the light points contained in an image obtained by photographing the controller from the direction facing that orientation.
In this embodiment, because of the non-repeating arrangement of the light emitting units on the controller, first images of the controller taken from different directions contain different light spots with different distributions. Still referring to fig. 4, fig. 4 shows the different distribution states of the light emitting units when the controller is viewed from different directions (front, back, right, and top); accordingly, the visible light points in first images obtained by photographing the controller from different directions also differ.
It will be appreciated that the target three-dimensional model is acquired from the first light point set in the first image; therefore, the visible light points in the second image acquired based on the target three-dimensional model are the ones captured and displayed at corresponding locations in the first image. Based on this, the direction facing the orientation of the target three-dimensional model can be determined as the target shooting direction, and the light points in the second image that would be contained in an image obtained by photographing the controller from the target shooting direction are obtained as the visible light points. Illustratively, the black dots in fig. 4 represent visible light points, and the gray dots represent the other preset light points.
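One common way to decide visibility, sketched below under assumptions not stated in this application, is a back-face test: an LED is treated as visible when its outward surface normal faces the camera.

```python
def visible_leds(led_normals, view_dir):
    """Return indices of LEDs whose outward normal faces the camera.
    Assumption: view_dir points from the camera toward the controller,
    so a negative dot product means the LED faces the camera."""
    vis = []
    for idx, n in enumerate(led_normals):
        dot = sum(a * b for a, b in zip(n, view_dir))
        if dot < 0.0:
            vis.append(idx)
    return vis
```

An LED whose normal points back toward the camera passes the test; one pointing away (on the far side of the controller) is culled, matching the black/gray distinction of fig. 4.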
Step S372, obtaining a second number of light spots matching the visible light spot distribution in the second image from the second light spot set, to obtain the third light spot set.
In some embodiments, the light spots in the first light spot set are referred to as first light spots, and the light spots in the second light spot set are referred to as second light spots. Referring to fig. 8, step S372 may include the steps of:
Step S3721, obtaining relative position information between each second light spot in the second light spot set and each first light spot in the first light spot set, and obtaining position distribution information corresponding to each second light spot, wherein the position distribution information comprises relative position information between each second light spot and all first light spots in the first light spot set.
In the present embodiment, since the light spots matching the distribution of the visible light points in the second image are to be acquired from the second light spot set, the relative position information between each second light spot in the second light spot set and each first light spot in the first light spot set can first be acquired to obtain the position distribution information corresponding to each second light spot. The position distribution information includes the relative position information between the second light spot and all the first light spots in the first light spot set, where the relative position information includes the distance between the second light spot and each first light spot and/or the angles between the line segments formed by the second light spot and the first light spots; the specific manner of obtaining the relative position information may refer to the foregoing embodiments and is not repeated here.
Step S3722, obtaining the visible light points corresponding to the first light point set from the visible light points as first visible light points.
And step S3723, acquiring other visible light spots except the first visible light spot in the visible light spots as second visible light spots.
The second image contains visible light points corresponding one-to-one to the first light spot set; therefore, the visible light points corresponding to the first light spot set can be acquired as the first visible light points, and the other visible light points in the second image are acquired as the second visible light points. It will be appreciated that the second light spots in the second light spot set matching the distribution of the second visible light points can be added to the third light point set.
Step S3724, obtaining preset position information between each second visible light spot and each first visible light spot, and obtaining preset distribution information corresponding to each second visible light spot, where the preset distribution information includes preset position information between each second visible light spot and all the first visible light spots.
Based on the information, the preset position information between each second visible light spot and each first visible light spot is further acquired, and the preset distribution information corresponding to each second visible light spot is obtained. The preset distribution information comprises preset position information between each second visible light spot and all first visible light spots, wherein the preset position information comprises a distance between the second visible light spot and each first visible light spot and/or angles between a plurality of line segments formed by the second visible light spot and each first visible light spot. The preset position information may be position information obtained from a preset position database according to the orientation of the target three-dimensional model, or may be position information calculated in real time according to the orientation of the target three-dimensional model, which is not limited in this embodiment.
Step S3725, obtaining, from the second light spot set, the second light spots whose position distribution information matches the preset distribution information, thereby obtaining the third light spot set.
In this embodiment, after the preset distribution information of the second visible light points and the position distribution information of the second light points are obtained, the position distribution information of each second light point in the second light point set is matched with the preset distribution information of each second visible light point, and if the target second light point is successfully matched, a third light point set is generated based on the target second light point. The target second light spot is any second light spot in the second light spot set.
Specifically, if the position distribution information and the preset position information meet a specified distribution condition, the position distribution information is judged to match the preset distribution information. The specified distribution condition may include a specified distance distribution condition and a specified angle distribution condition. The specified distance distribution condition may be that the difference between the distance from the second light spot to a first light spot and the distance from the corresponding second visible light spot to the corresponding first visible light spot is within a specified distance threshold; the specified angle distribution condition may be that the difference between the angles between the line segments formed by the second light spot and the first light spots and the angles between the line segments formed by the corresponding second visible light spot and the first visible light spots is within a specified angle threshold.
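The specified distance and angle distribution conditions amount to thresholded comparisons between the candidate's distribution and the preset one. A minimal sketch, where the tolerance values (pixels and radians) are illustrative assumptions:

```python
def distributions_match(dists, angles, ref_dists, ref_angles,
                        dist_tol=3.0, angle_tol=0.05):
    """True when every distance and every angle in the candidate's position
    distribution is within the given tolerance of the corresponding value
    in the reference (preset) distribution."""
    if len(dists) != len(ref_dists) or len(angles) != len(ref_angles):
        return False
    dist_ok = all(abs(d - r) <= dist_tol for d, r in zip(dists, ref_dists))
    angle_ok = all(abs(a - r) <= angle_tol for a, r in zip(angles, ref_angles))
    return dist_ok and angle_ok
```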
Step S380, determining pose information of the controller in the real environment based on the first light point set and the third light point set.
In this embodiment, the specific content in step S380 may refer to the content in the foregoing embodiment, which is not described herein.
In this embodiment, the distribution of the second visible light spots in the second image is first obtained, the second light spots matching the second visible light spot distribution are then obtained from the second light spot set in the first image to obtain the third light spot set, and finally the pose information of the controller in the real environment is determined based on the first light spot set and the third light spot set. In this way, only the second light spots in the second light spot set are matched against the distribution of the second visible light spots in the second image, which reduces the number of matching operations, improves the efficiency of acquiring the third light spot set, and further improves the efficiency of positioning and tracking the controller in real time based on the light spot sets.
Referring to fig. 9, fig. 9 is a schematic flowchart of a positioning method for a controller according to another embodiment of the present application. The positioning method of the controller according to this embodiment will be described in detail with reference to fig. 9. The positioning method of the controller may include the following steps:
Step S410, an image containing a controller in a real environment is acquired as a first image, where the controller carries a plurality of light emitting units and the first image includes light spots corresponding to the light emitting units.
Step S420, a first number of light spots meeting preset distribution conditions are acquired from the first image, and a first light spot set is obtained.
And S430, determining a three-dimensional model matched with the first light point set from the three-dimensional models corresponding to the controller as a target three-dimensional model.
And S440, mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, wherein the second image comprises preset light spots corresponding to all the light emitting units on the controller.
In this embodiment, the specific content in step S410 to step S440 may refer to the content in the foregoing embodiment, and will not be described herein.
Step S450, obtaining a light spot matching the preset light spot distribution in the second image from the second light spot set in the first image, and adding the light spot to the third light spot set.
In some embodiments, when there are multiple spots in the second spot set that match the preset spot distribution, one of the matched spots may be taken and added to the third spot set. This ensures the speed of acquiring the light spots belonging to the third light spot set, and thus the real-time performance of tracking and positioning the controller.
In other embodiments, when there are a plurality of spots in the second spot set that match the preset spot distribution, the spot that best matches the preset spot distribution may be acquired and added to the third spot set. The principle of obtaining the light spot that is most matched with the preset light spot distribution is similar to the principle of obtaining the light spot combination corresponding to the target light spot distribution information that best meets the preset distance distribution condition and the preset angle distribution condition in the above embodiment, and the most matched light spot can be determined according to the matching condition of the position distribution information and the preset position information, which is not described herein again. Therefore, the target three-dimensional model obtained each time can be more accurate, the accuracy of obtaining the light spots matched with the preset light spot distribution based on the target three-dimensional model is improved, and the third light spot set finally obtained is more accurate, so that the accuracy of real-time positioning and tracking of the controller is guaranteed.
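Selecting the best-matching spot can be sketched as picking the candidate with the smallest combined distance-and-angle residual against the preset distribution. The residual definition below is an illustrative assumption, not the patent's prescribed metric:

```python
def match_residual(dists, angles, ref_dists, ref_angles):
    """Scalar mismatch between a candidate's position distribution and the
    preset distribution: summed absolute distance and angle errors."""
    return (sum(abs(d - r) for d, r in zip(dists, ref_dists))
            + sum(abs(a - r) for a, r in zip(angles, ref_angles)))

def best_matching_spot(matched):
    """matched: list of (spot, residual) pairs for candidates that already
    passed the match thresholds; return the single best spot, or None."""
    return min(matched, key=lambda sr: sr[1])[0] if matched else None
```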
Step S460, if the number of light spots included in the third light spot set is smaller than the second number, acquiring a three-dimensional model matching the first light spot set and the third light spot set as the target three-dimensional model, and repeatedly executing the steps from mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, through acquiring, from the second light spot set in the first image, one light spot matching the preset light spot distribution in the second image and adding it to the third light spot set, until the number of light spots included in the third light spot set is equal to the second number.
Based on this, it may be determined whether the number of light spots included in the third light spot set is smaller than the second number. If it is smaller, this indicates that not all light spots of the first image have been acquired; a three-dimensional model corresponding to the controller is then acquired based on the first light spot set and the third light spot set, and the steps from mapping the three-dimensional model to the two-dimensional plane to obtain a second image, through acquiring, from the second light spot set in the first image, one light spot matching the preset light spot distribution in the second image and adding it to the third light spot set, are repeatedly executed until the number of light spots included in the third light spot set is equal to the second number. In this way, only one light spot is acquired and added to the third light spot set at a time, after which the corresponding target three-dimensional model is determined based on the third light spot set and the first light spot set. This improves the accuracy of the determined target three-dimensional model, and further improves the accuracy of the third light spot set obtained from that model; that is, a sufficiently large light spot set with a more accurate distribution is obtained, so that the pose information of the controller determined based on the light spot sets is more accurate, improving the accuracy of positioning and tracking the controller.
In some embodiments, if the number of light spots included in the third light spot set is still smaller than the second number, it is determined whether the number of repetitions of the above steps has reached a preset number of times, or whether the duration of acquiring the third light spot set has reached a preset duration. If either limit is reached, some light spots in the first image may be blocked by other objects and the second number of light spots cannot be acquired; at this time, the step of determining the pose information of the controller in the real environment based on the first light spot set and the third light spot set may be executed directly. This avoids the positioning delay caused by spending too long acquiring the third light spot set, so that the head-mounted display device can obtain the pose information of the controller in time and track and position the controller in real time.
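The overall loop of steps S450 and S460, including the iteration budget that guards against occluded light spots, might look like the following sketch. `fit_model`, `project`, and `pick_one_match` are hypothetical placeholders for the model-fitting, two-dimensional mapping, and one-spot matching steps described above:

```python
def grow_third_set(first_set, second_set, second_number,
                   fit_model, project, pick_one_match, max_iters=20):
    """Iteratively grow the third light spot set: refit the target model to
    the spots found so far, reproject it to obtain the preset light spot
    distribution, accept at most one newly matching spot per pass, and stop
    when the set reaches `second_number` or the iteration budget runs out
    (e.g. because some light emitting units are occluded)."""
    third_set = []
    for _ in range(max_iters):
        if len(third_set) >= second_number:
            break                      # enough spots acquired
        model = fit_model(first_set, third_set)
        preset = project(model)        # second image's preset distribution
        spot = pick_one_match(second_set, third_set, preset)
        if spot is None:
            break                      # nothing new matched this pass
        third_set.append(spot)
    return third_set
```

A wall-clock deadline could replace `max_iters` to implement the preset-duration variant.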
And step S470, determining pose information of the controller in the real environment based on the first light point set and the third light point set.
In this embodiment, the specific content in step S470 may refer to the content in the foregoing embodiment, which is not described herein.
In this embodiment, one light spot matching the preset light spot distribution in the second image is obtained from the second light spot set in the first image at a time and added to the third light spot set, after which the corresponding target three-dimensional model is determined based on the third light spot set and the first light spot set. Therefore, the accuracy of the determined target three-dimensional model is improved, the preset light spot distribution determined based on the target three-dimensional model is more accurate, the light spot set matching the preset light spot distribution is more accurately identified from the first image, the pose information of the controller determined based on the light spot sets is more accurate, and the accuracy of positioning and tracking the controller is improved.
Referring to fig. 10, a block diagram of a positioning device 500 of a controller according to an embodiment of the application is shown. The apparatus 500 may include an image acquisition module 510, a first spot set acquisition module 520, a three-dimensional model acquisition module 530, a mapping module 540, a third spot set acquisition module 550, and a positioning module 560.
The image acquisition module 510 is configured to acquire, as a first image, an image including a controller in a real environment, where the controller carries a plurality of light emitting units and the first image includes light spots corresponding to the light emitting units.
The first light spot set obtaining module 520 is configured to obtain a first number of light spots that meet a preset distribution condition from the first image, so as to obtain a first light spot set.
The three-dimensional model obtaining module 530 is configured to determine, from the three-dimensional models corresponding to the controller, a three-dimensional model matched with the first light point set as a target three-dimensional model.
The mapping module 540 is configured to map the target three-dimensional model to a two-dimensional plane, so as to obtain a second image, where the second image includes preset light points corresponding to all the light emitting units on the controller.
The third light spot set obtaining module 550 is configured to obtain, from a second light spot set in the first image, a second number of light spots matching the preset light spot distribution in the second image, so as to obtain a third light spot set, where the second light spot set includes the light spots in the first image other than the first light spot set.
The positioning module 560 is configured to determine pose information of the controller in the real environment based on the first set of light points and the third set of light points.
In some embodiments, the preset distribution conditions include a preset distance distribution condition and a preset angle distribution condition, and the first light spot set acquiring module 520 may include a light spot combination acquiring unit, a light spot distribution acquiring unit, and a first light spot set acquiring unit. The light spot combination acquiring unit may be configured to group all light spots in the first image to obtain a plurality of light spot combinations, where each of the plurality of light spot combinations includes the first number of light spots. The light spot distribution acquiring unit may be configured to acquire, as the light spot distribution information corresponding to each light spot combination, the distances between adjacent light spots in that combination and the angles between the line segments formed by the adjacent light spots. The first light spot set acquiring unit may be configured to acquire, from the plurality of light spot combinations, a light spot combination whose light spot distribution information simultaneously meets the preset distance distribution condition and the preset angle distribution condition, as the first light spot set.
In this manner, the light spot combination acquiring unit may be specifically configured to, if a plurality of light emitting units carried on the controller are arranged into a plurality of circles, group all light spots corresponding to any one of the plurality of circles on the first image, and obtain the plurality of light spot combinations.
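The grouping-and-filtering behaviour of module 520 can be sketched with brute-force combinations; the `(lo, hi)` range parameters and the first-hit return policy are illustrative assumptions:

```python
import itertools
import math

def first_light_spot_set(spots, first_number, dist_range, angle_range):
    """Group detected light spots into candidate combinations of size
    `first_number`; keep the first combination whose adjacent-spot
    distances and inter-segment angles all fall inside the preset ranges.
    Returns None when no combination satisfies both conditions."""
    for combo in itertools.combinations(spots, first_number):
        dists = [math.dist(a, b) for a, b in zip(combo, combo[1:])]
        headings = [math.atan2(b[1] - a[1], b[0] - a[0])
                    for a, b in zip(combo, combo[1:])]
        angles = [abs(h2 - h1) for h1, h2 in zip(headings, headings[1:])]
        if (all(dist_range[0] <= d <= dist_range[1] for d in dists)
                and all(angle_range[0] <= a <= angle_range[1] for a in angles)):
            return list(combo)
    return None
```

Restricting the grouping to spots from a single ring of light emitting units, as the embodiment suggests, would shrink the combination space considerably.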
In some embodiments, the third spot set acquisition module 550 may include a visible spot acquisition unit and a third spot set acquisition unit. The visible light spot acquiring unit may be configured to acquire, according to the orientation of the target three-dimensional model, the visible light spots among the preset light spots in the second image, where a visible light spot is a light spot included in an image obtained by capturing the controller from the direction opposite to the orientation. The third light spot set obtaining unit may be configured to obtain, from the second light spot set, a second number of light spots matching the visible light spot distribution in the second image, resulting in the third light spot set.
In this manner, taking the light spots in the first light spot set as first light spots and the light spots in the second light spot set as second light spots, the third light spot set acquisition unit is specifically configured to: acquire relative position information between each second light spot in the second light spot set and each first light spot in the first light spot set to obtain position distribution information corresponding to each second light spot, where the position distribution information includes the relative position information between the second light spot and all first light spots in the first light spot set; acquire the visible light spots corresponding to the first light spot set as first visible light spots; acquire the visible light spots other than the first visible light spots as second visible light spots; acquire preset position information between each second visible light spot and each first visible light spot to obtain preset distribution information corresponding to each second visible light spot, where the preset distribution information includes the preset position information between the second visible light spot and all first visible light spots; and acquire, from the second light spot set, the second light spots whose position distribution information matches the preset distribution information, thereby obtaining the third light spot set.
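The orientation-based visibility test above can be approximated with a back-facing check: an LED is treated as visible when its outward normal faces the camera. Representing each LED as a (projected spot, normal) pair is an assumption for illustration:

```python
def visible_preset_spots(leds, view_dir):
    """Crude visibility test for the preset light spots: keep an LED when
    the dot product of its outward normal with the model-to-camera
    direction is positive (i.e. it faces the camera).
    leds: list of (spot_2d, normal_3d); view_dir: model-to-camera vector."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return [spot for spot, normal in leds if dot(normal, view_dir) > 0.0]
```

A production system would also account for self-occlusion by the controller body, which a pure normal test cannot capture.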
In some embodiments, the positioning device 500 of the controller may further include a size judgment module. The size judging module may be configured to obtain, from the second light spot set in the first image, a second number of light spots that match the preset light spot distribution in the second image, and after obtaining a third light spot set, judge whether a size of each light spot in the third light spot set meets a preset size condition, and if a size of a target light spot does not meet the preset size condition, reject the target light spot from the third light spot set, where the target light spot is any light spot in the third light spot set. The positioning module 560 may be specifically configured to determine pose information of the controller in the real environment based on the first set of light points and the third set of light points after the target light points are removed.
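The size judgment module's filtering step reduces to a bounds check on blob size; representing each spot as a (center, pixel area) pair and the preset size condition as an area interval are illustrative assumptions:

```python
def reject_undersized(third_set, min_area, max_area):
    """Drop third-set light spots whose pixel area falls outside the preset
    size condition; a blob from an interference light source is often far
    smaller or larger than the image of a real light emitting unit.
    Spots are (center, area) pairs."""
    return [s for s in third_set if min_area <= s[1] <= max_area]
```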
In other embodiments, the third light spot set obtaining module 550 may be specifically configured to: obtain, from the second light spot set in the first image, one light spot matching the preset light spot distribution in the second image and add it to the third light spot set; and, if the number of light spots included in the third light spot set is smaller than the second number, obtain a three-dimensional model matching the first light spot set and the third light spot set as the target three-dimensional model, and repeatedly execute the steps from mapping the target three-dimensional model to a two-dimensional plane to obtain a second image, through obtaining, from the second light spot set in the first image, one light spot matching the preset light spot distribution in the second image and adding it to the third light spot set, until the number of light spots included in the third light spot set is equal to the second number.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In several embodiments provided by the present application, the coupling of the modules to each other may be electrical, mechanical, or other.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
In summary, in the solution provided by the embodiments of the present application, an image including a controller in a real environment is obtained as a first image, where the controller carries a plurality of light emitting units and the first image includes light spots corresponding to the light emitting units. A first number of light spots meeting a preset distribution condition are obtained from the first image to obtain a first light spot set. A three-dimensional model matching the first light spot set is determined from the three-dimensional models corresponding to the controller as a target three-dimensional model, and the target three-dimensional model is mapped to a two-dimensional plane to obtain a second image, where the second image includes preset light spots corresponding to all the light emitting units on the controller. A second number of light spots matching the preset light spot distribution in the second image are obtained from a second light spot set in the first image to obtain a third light spot set, where the second light spot set includes the light spots in the first image other than the first light spot set. Finally, pose information of the controller in the real environment is determined based on the first light spot set and the third light spot set. In this way, a sufficient number of light spots produced by the light emitting units of the controller can be accurately identified, so that the pose information of the controller determined based on these light spot sets is more accurate; that is, the accuracy of positioning and tracking the controller is improved.
A head mounted display device provided by the present application will be described with reference to the drawings.
Referring to fig. 11, fig. 11 shows a block diagram of a head-mounted display device 600 according to an embodiment of the present application. The positioning method of a controller according to the embodiments of the present application may be performed by the head-mounted display device 600, which may be any device capable of running application programs.
The head-mounted display device 600 in the embodiments of the present application may include one or more processors 601, a memory 602, and one or more application programs, where the one or more application programs may be stored in the memory 602 and configured to be executed by the one or more processors 601, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
The processor 601 may include one or more processing cores. The processor 601 connects the various parts of the head-mounted display device 600 using various interfaces and lines, and performs the various functions of the head-mounted display device 600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 602 and invoking data stored in the memory 602. Optionally, the processor 601 may be implemented in at least one of the hardware forms of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 601 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It will be appreciated that the modem may alternatively not be integrated into the processor 601 and may instead be implemented by a communication chip alone.
The memory 602 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). The memory 602 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 602 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the head-mounted display device 600 in use (such as the various correspondences described above), and so forth.
In the several embodiments provided by the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, or may be an indirect coupling or communication connection between devices or modules, and may be in electrical, mechanical, or other forms.
Referring to fig. 12, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable medium 700 has stored therein program code which may be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 700 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 700 has storage space for program code 710 that performs any of the method steps described above. The program code may be read from or written to one or more computer program products. The program code 710 may, for example, be compressed in a suitable form.
It should be noted that the above-mentioned embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the above-mentioned embodiments, it will be understood by those skilled in the art that the technical solutions described in the above-mentioned embodiments may still be modified, or some of their technical features may be equivalently replaced, and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.