CN115525139A - Method and device for acquiring gazing target in head-mounted display equipment
- Publication number: CN115525139A
- Application number: CN202110703043.6A
- Authority: CN (China)
- Prior art keywords: user, information, head-mounted display, right eye
- Legal status: Pending
Classifications
- G06F3/011: Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G02B27/0093: Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
- G02B27/017: Head-up displays; head mounted
- G06F3/167: Sound input; sound output; audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G06T7/64: Image analysis; analysis of geometric attributes of convexity or concavity
- G06T7/70: Image analysis; determining position or orientation of objects or cameras
Abstract
The disclosure relates to a method and a device for acquiring a gazing target in a head-mounted display device. The method comprises: determining the distance information from each of at least two target objects in the user's field of view to the user; determining the distance information from the user's gaze point to the user; and determining the gazing target among the at least two target objects based on the two kinds of distance information. The technical solution provided by the embodiments of the disclosure can determine the gazing target from among multiple target objects in the user's field of view, and is particularly suitable for the case where at least two target objects lie in the same direction from the user.
Description
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to a method and an apparatus for obtaining a gazing target in a head-mounted display device.
Background
With the development of intelligent computing technology, intelligent products continue to emerge, and after smartphones and tablet computers, Augmented Reality (AR) has the potential to become the next important general computing platform. An AR head-mounted display device is a wearable device, worn on the user's head, that realizes AR technology: virtual information is superimposed on the real world through computer technology, so that the real environment and virtual objects are presented in the same picture in real time and the two kinds of information complement each other. The picture is displayed in front of the user's eyes through devices such as helmets and glasses, enhancing the user's sense of reality.
For example, in a museum, an AR head-mounted display device automatically identifies the cultural relics in the user's field of view and displays introduction information for all of them. However, in the prior art, when there are two or more relics in the user's field of view, the introduction information for all of them is displayed together, so the user cannot tell the items of information apart and cannot obtain the introduction information for the relic of interest.
Disclosure of Invention
To solve, or at least partially solve, the above technical problem, the present disclosure provides a gaze target recognition method, apparatus, electronic device, and storage medium.
In a first aspect, the present disclosure provides a method of acquiring a gaze target in a head-mounted display device, comprising:
determining distance information from each of at least two target objects in a user's field of view to the user;
determining distance information from the user's gaze point to the user; and
determining a gazing target among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user.
In a second aspect, the present disclosure also provides an apparatus for acquiring a gaze target in a head-mounted display device, including:
a first distance determining module, configured to determine distance information from each of at least two target objects in a user's field of view to the user;
a second distance determining module, configured to determine distance information from the user's gaze point to the user; and
a recognition module, configured to determine a gazing target among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user.
In a third aspect, the present disclosure also provides an electronic device, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of acquiring a gaze target in a head-mounted display device as described above.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of acquiring a gaze target in a head-mounted display device as described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
the technical scheme provided by the embodiment of the disclosure determines the distance information from at least two target objects in the user field of view to the user respectively; determining distance information from a user sight gaze point to a user; the gazing target is determined in the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user sight gazing point to the user, and the gazing target can be determined from a plurality of target objects in the user sight field.
The technical scheme provided by the embodiment of the disclosure is particularly suitable for the condition that at least two objects in the field of view of the user are positioned in the same direction of the user. This is because it is assumed that there are two objects, object a and object B, respectively. The two objects are simultaneously located directly in front of the user and arranged in tandem. In this case, if the user gazes at the target object a, it can be determined that the user is gazing at the front of the user only by the existing gaze tracking technology, and it is impossible to further determine whether the user is gazing at the target object a or the target object B. Through the technical scheme provided by the disclosure, whether the user gazes at all is the target object A or the target object B can be uniquely determined, and the accuracy of gazing target determination can be improved.
According to the technical scheme provided by the embodiment of the disclosure, when the gazing target is determined, even if the gazing target is uniquely located in a certain direction of a user, compared with the scheme for determining the gazing target through the sight line direction of the user, the technical scheme provided by the disclosure determines the gazing target through matching of two distances, so that the calculation amount is less, the consumed time for determining the gazing target is less, the energy consumption is low, the requirement on the performance of head-mounted display equipment is low, the weight of the equipment is favorably reduced, and the identification speed is increased.
The technical scheme provided by the embodiment of the disclosure can accurately determine the most probable target object watched by the user no matter how many target objects are in the user field of view, and display the related information of the most probable target object, so as to ensure that the head-mounted display equipment always displays the related information of one object, thereby avoiding the adverse phenomenon that the user cannot distinguish each information due to the fact that the introduction information of a plurality of objects is mixed and displayed, and the introduction information of the concerned object cannot be obtained, and improving the user satisfaction.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. For those of ordinary skill in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for acquiring a gaze target in a head-mounted display device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of binocular ranging provided by an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for S120 according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram for determining distance information from a gaze point of a user to the user according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a method for S121 according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an eyeball according to an embodiment of the present disclosure;
fig. 7 is a flowchart of another method for S121 according to an embodiment of the present disclosure;
fig. 8 is a flowchart of another method for obtaining a gaze target in a head-mounted display device according to an embodiment of the disclosure;
fig. 9 is a block diagram of an AR head-mounted display device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an apparatus for acquiring a gazing target in a head-mounted display device in an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
Fig. 1 is a flowchart of a method for acquiring a gazing target in a head-mounted display device according to an embodiment of the present disclosure. The embodiment is applicable to identifying a gazing target from among at least two target objects in a user's field of view. The method may be performed by an apparatus for acquiring a gazing target in a head-mounted display device; the apparatus may be implemented in software and/or hardware and may be configured in a head-mounted display device such as an AR helmet or AR glasses.
As shown in fig. 1, the method may specifically include:
s110, determining distance information from at least two target objects in the field of view of the user to the user respectively.
In this application, the user refers to the wearer of the head mounted display device.
This step can be implemented in various ways. Illustratively, the head-mounted display device determines the distance information from the at least two target objects in the user's field of view to the user by binocular ranging or infrared ranging.
Further, if binocular ranging is used, the procedure may specifically be: the head-mounted display device acquires binocular image information of the at least two target objects with a dual-fisheye camera whose field of view is larger than that of the user; based on the binocular image information of the at least two target objects, the head-mounted display device then determines the distance information from the at least two target objects in the user's field of view to the user by binocular ranging.
Fig. 2 is a schematic diagram of binocular ranging provided in an embodiment of the present disclosure. Referring to fig. 2, two fisheye cameras with depth-information recognition capability are provided in the head-mounted display device. The line connecting the two fisheye cameras and the line connecting the user's left and right eyes lie on the same straight line; here, "on the same straight line" should be understood as collinear within an allowable error range.
Referring to fig. 2, points O_R and O_T represent the positions of the two fisheye cameras, and point P is the target object. The imaging points of P on the two cameras are P1 and P2 (the imaging plane of each camera is drawn in front of the lens after rotation), the focal length of both cameras is f, B is the distance between the two cameras, and Z is the distance from the target object P to the user. If the distance between P1 and P2 is dis, then

$$\mathrm{dis} = B - (X_R - X_T) \tag{1}$$

By the principle of similar triangles,

$$\frac{B - (X_R - X_T)}{B} = \frac{Z - f}{Z} \tag{2}$$

from which

$$Z = \frac{fB}{X_R - X_T} \tag{3}$$

For a given head-mounted display device, once the models of the two fisheye cameras are determined, the focal length f is a fixed value; the positions of the two cameras are also fixed, so the distance B between the two cameras is a fixed value. Thus, in performing this step, once the disparity X_R - X_T is determined, the distance Z from the target object P to the user can be obtained from equation (3).
S120, determining the distance information from the user's gaze point to the user.
Since the user's gaze point falls on the gazing target, determining the distance information from the user's gaze point to the user is essentially determining the distance information from the gazed target to the user.
S130, determining the gazing target among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user.
Optionally, the distance information from each target object to the user is compared one by one with the distance information from the user's gaze point to the user. If the distance from a given target object to the user is consistent with the distance from the user's gaze point to the user, that target object is the gazing target; otherwise, it is not.
It should be noted that, in practice, there are data acquisition errors, measurement errors, calculation errors, and the like. Therefore, when performing this step, if the absolute value of the difference between the distance from a target object to the user and the distance from the user's gaze point to the user is less than or equal to a preset value, that target object is the gazing target. The preset value is determined according to one or more of the data acquisition error, the measurement error, and the calculation error.
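A minimal sketch of this matching step is given below; the tolerance value and the object names are illustrative assumptions, not values from the patent:

```python
def select_gaze_target(object_distances: dict, gaze_distance: float,
                       tolerance: float = 0.2):
    """Return the target object whose distance to the user best matches the
    gaze-point distance, within the preset tolerance (meters); None if no
    object matches."""
    best_name, best_diff = None, float("inf")
    for name, dist in object_distances.items():
        diff = abs(dist - gaze_distance)
        if diff <= tolerance and diff < best_diff:
            best_name, best_diff = name, diff
    return best_name

# Two relics straight ahead of the user, one behind the other; the gaze-point
# distance resolves the ambiguity that gaze direction alone cannot.
target = select_gaze_target({"relic_A": 1.5, "relic_B": 3.0}, gaze_distance=1.6)
# target == "relic_A"
```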
In summary, the technical solution determines the distance information from each of at least two target objects in the user's field of view to the user, determines the distance information from the user's gaze point to the user, and determines the gazing target among the at least two target objects based on these two kinds of distance information, so that the gazing target can be determined from among multiple target objects in the user's field of view.
The technical solution is particularly suitable for the case where at least two target objects in the user's field of view lie in the same direction from the user. Suppose there are two target objects, A and B, both located directly in front of the user and arranged one behind the other. If the user gazes at target object A, existing gaze tracking can only determine that the user is looking straight ahead; it cannot further determine whether the user is gazing at A or at B. The technical solution provided by the disclosure can uniquely determine the user's gazing target and improve the accuracy of gazing-target determination.
In addition, even when the gazing target is the only object in a given direction from the user, the technical solution determines the gazing target by matching two distances rather than by the user's gaze direction. Compared with a direction-based scheme, this requires less computation, takes less time, consumes less energy, and places lower demands on the performance of the head-mounted display device, which helps reduce the device's weight and speeds up recognition.
In the above technical solution, S120 can be implemented in various specific ways. In one embodiment, fig. 3 is a flowchart of a method for S120 according to an embodiment of the present disclosure. Referring to fig. 3, the method includes:
s121, the head-mounted display device determines the sight line direction information of the left eye and the sight line direction information of the right eye of the user.
And S122, the head-mounted display equipment determines the position information of the gaze fixation point of the user based on the gaze direction information of the left eye and the gaze direction information of the right eye of the user.
S123, the head-mounted display device determines distance information from the user sight gaze point to the user based on the position information of the user sight gaze point.
Fig. 4 is a schematic diagram of determining the distance information from the user's gaze point to the user according to an embodiment of the present disclosure. Referring to fig. 4, suppose the gaze direction of the user's left eye is Q1 and that of the right eye is Q2. The position information of the user's gaze point N can be obtained from the gaze direction Q1 of the left eye and the gaze direction Q2 of the right eye, and the distance information M from the gaze point N to the user can then be determined from the position information of N.
This way of implementing S120 is computationally simple and easy to realize.
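In practice the two measured gaze rays rarely intersect exactly, so a common choice is to take N as the midpoint of the shortest segment between them. A sketch under that assumption (the eye positions serving as ray origins are inputs; nothing here is prescribed by the patent):

```python
import numpy as np

def gaze_point_and_distance(origin_l, dir_l, origin_r, dir_r):
    """Approximate the gaze point N as the midpoint of the shortest segment
    between the left- and right-eye gaze rays, and return (distance M, N)."""
    o_l, o_r = np.asarray(origin_l, float), np.asarray(origin_r, float)
    d_l = np.asarray(dir_l, float); d_l /= np.linalg.norm(d_l)
    d_r = np.asarray(dir_r, float); d_r /= np.linalg.norm(d_r)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b  # ~0 when the rays are parallel (gaze at infinity)
    if abs(denom) < 1e-9:
        return float("inf"), None
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    n = (o_l + t_l * d_l + o_r + t_r * d_r) / 2.0  # gaze point N
    m = float(np.linalg.norm(n - (o_l + o_r) / 2.0))  # distance M to the user
    return m, n
```

Here the "distance to the user" is measured from the midpoint between the two eyes; the patent leaves the exact reference point on the user unspecified.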
Further, in the above technical solution, there are various ways to implement S121. For example, the gaze direction information of the user's left and right eyes may be determined using one or more of the sclera-iris edge method, the dual Purkinje image method, and the pupil-corneal reflection method.
The following describes an implementation of S121 in detail, taking the pupil-corneal reflection method as an example. Fig. 5 is a flowchart of a method for S121 according to an embodiment of the present disclosure. Referring to fig. 5, the method includes:
and S1211, controlling the left and right light sources to respectively irradiate the left eye and the right eye of the user by the head-mounted display device.
The left light source and the right light source are fixedly arranged in the head-mounted display equipment, and when the head-mounted display equipment is worn by a user, the left light source is fixed in position relative to the left eye of the user, and the right light source is fixed in position relative to the right eye of the user.
And S1212, the head-mounted display device acquires the image information of the left eye and the image information of the right eye of the user, which are acquired by the image acquisition device.
The image acquisition devices comprise a first image acquisition device for collecting images of the user's left eye and a second image acquisition device for collecting images of the user's right eye. Both are fixedly mounted in the head-mounted display device. When the user wears the head-mounted display device, the first image acquisition device is fixed in position relative to the user's left eye, and the second is fixed in position relative to the user's right eye.
Optionally, the left and right light sources in S1211 may be infrared light sources, and the image acquisition devices in S1212 may be infrared cameras. Because the user's cornea reflects infrared light strongly, this arrangement ensures that the subsequent gaze-direction calculation is more accurate.
S1213, the head-mounted display device determines the user's left-eye corneal curvature center position information and left-eye pupil center position information based on the left-eye image information, and determines the user's right-eye corneal curvature center position information and right-eye pupil center position information based on the right-eye image information.
Optionally, when this step is executed, the head-mounted display device determines left-eye Purkinje spot position information based on the left-eye image information; determines the user's left-eye corneal curvature center position information based on the left-eye Purkinje spot position information and the position information of the left light source; and determines the left-eye pupil center position information based on the left-eye image information and eyeball model structure data. Similarly, the head-mounted display device determines right-eye Purkinje spot position information based on the right-eye image information; determines the user's right-eye corneal curvature center position information based on the right-eye Purkinje spot position information and the position information of the right light source; and determines the right-eye pupil center position information based on the right-eye image information and eyeball model structure data.
When the light source illuminates the cornea, a glint, i.e. a Purkinje image, is produced; the glint is formed by the corneal reflection (CR) of the incoming light on the outer surface of the cornea. Because the eyeball approximates a sphere, the position of the glint on the eyeball remains essentially unchanged as the eyeball rotates.
S1214, the head-mounted display device determines the sight line direction information of the left eye of the user based on the left eye corneal curvature center position information and the left eye pupil center position information of the user.
Studies have shown that when a user gazes at an object, the line of sight of the user's eyes changes with the position of the gazed object. In this process, the absolute position of the corneal reflection does not change with the rotation of the eyeball, but its position relative to the pupil does. For example, when the user's eye looks straight ahead, the reflection is below the pupil; when the eye looks to the right, the reflection is to the left of the pupil; when the eye looks to the left, the reflection is to the right of the pupil. Therefore, once the relative positions of the pupil and the corneal reflection are determined, the direction of the vector between them can be taken as the direction of the user's line of sight.
Therefore, alternatively, the direction pointing from the left-eye corneal curvature center to the left-eye pupil center may be taken as the gaze direction of the user's left eye. Illustratively, fig. 6 is a schematic structural diagram of an eyeball according to an embodiment of the present disclosure. In fig. 6, point O_1 is the pupil center and point O_2 is the corneal curvature center; the direction from the corneal curvature center O_2 to the pupil center O_1 is the eye's gaze direction.
S1215, the head mounted display apparatus determines the sight line direction information of the right eye of the user based on the user right eye corneal curvature center position information and the right eye pupil center position information.
Similarly, a direction from the right-eye corneal curvature center position to the right-eye pupil center position may be referred to as a line of sight direction of the user's right eye.
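A minimal sketch of this step, assuming both center positions are available as 3D coordinates in a common camera coordinate system (the coordinate convention is an assumption, not specified by the patent):

```python
import numpy as np

def optical_axis_direction(cornea_center, pupil_center):
    """Unit vector from the corneal curvature center O_2 to the pupil
    center O_1, taken as the (uncorrected) gaze direction of one eye."""
    v = np.asarray(pupil_center, float) - np.asarray(cornea_center, float)
    norm = np.linalg.norm(v)
    if norm == 0:
        raise ValueError("cornea center and pupil center coincide")
    return v / norm

# Per eye, with hypothetical variable names:
# gaze_dir_left  = optical_axis_direction(o2_left,  o1_left)
# gaze_dir_right = optical_axis_direction(o2_right, o1_right)
```

The same function serves both eyes; only the input center positions differ.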
The essence of this technical solution is that, according to the physiological characteristics of the human eye and the principle of visual imaging, image processing is applied to the collected images of the user's eyes to extract the eye feature parameters used for gaze estimation; with these feature parameters as reference points, a corresponding mapping model yields the coordinates of the gaze point, thereby realizing gaze tracking. The method has high precision, does not interfere with the user, and leaves the user's head free to rotate.
With continued reference to fig. 6, according to the physiology of the eye, the direction from the macular fovea P2 to the pupil center O_1 is the visual axis direction, i.e. the actual gaze direction of the user's eye, whereas the direction from the corneal curvature center O_2 to the pupil center O_1 is the optical axis direction. Obviously, the optical axis direction is not the actual gaze direction of the user's eye, although it is close to it. Therefore, the left-eye and right-eye gaze direction information finally obtained by the technical solution of fig. 6 differs slightly from the actual gaze direction of the user's eyes, and the difference may cause an error. To deal with this error, an appropriate error range may be set when performing S130, so that the gazing target is finally determined among the at least two target objects.
Alternatively, the optical axis direction may be corrected so that it closely approximates the actual gaze direction of the user's eyes. There are various methods for correcting the optical axis direction. For example, fig. 7 is a flowchart of another method for S121 according to an embodiment of the present disclosure. Referring to fig. 7, the method includes:
s1210, the head-mounted display device acquires left eye error compensation angle information and right eye error compensation angle information.
Referring to fig. 6, the error compensation angle is the angle θ between the optical axis and the visual axis in fig. 6.
There are various ways to implement this step. For example, one implementation includes the following steps:
first, the head-mounted display device outputs a calibration instruction to make the user observe the calibration object determined by the position information with both eyes.
Here, the "calibration object whose position information is determined" means that the relative positional relationship of the calibration object and the head-mounted display device for performing the present gaze target recognition method is fixed, and the position information of the calibration object is known.
The calibration instruction is an instruction for prompting the user to gaze the calibration object through both eyes, for example, the user can be prompted to gaze the calibration object through a mode of sending voice prompting information.
Second, the head-mounted display device controls the left and right light sources to respectively illuminate the user's left eye and right eye.
Third, with the user gazing at the calibration object with both eyes, the head-mounted display device acquires the image information of the user's left eye and right eye collected by the image acquisition devices.
Fourth, the head-mounted display device determines the user's left-eye corneal curvature center position information and left-eye pupil center position information based on the left-eye image information, and the user's right-eye corneal curvature center position information and right-eye pupil center position information based on the right-eye image information.
Fifth, the head-mounted display device determines the optical axis direction information of the user's left eye corresponding to the calibration object based on the left-eye corneal curvature center position information and the left-eye pupil center position information, and the optical axis direction information of the user's right eye corresponding to the calibration object based on the right-eye corneal curvature center position information and the right-eye pupil center position information.
Sixth, the head-mounted display device determines the visual axis direction information of the user's left eye and right eye corresponding to the calibration object based on the position information of the calibration object.
Finally, the head-mounted display device determines the left-eye error compensation angle information based on the optical axis direction information and the visual axis direction information of the user's left eye corresponding to the calibration object, and the right-eye error compensation angle information based on the optical axis direction information and the visual axis direction information of the user's right eye corresponding to the calibration object.
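As an illustration, the per-eye error compensation angle can be computed as the angle between the measured optical axis and the visual axis toward the known calibration object; a minimal sketch under that assumption:

```python
import numpy as np

def error_compensation_angle(optical_axis, visual_axis):
    """Angle theta (radians) between the measured optical axis of one eye and
    the visual axis toward a calibration object at a known position."""
    a = np.asarray(optical_axis, float); a /= np.linalg.norm(a)
    b = np.asarray(visual_axis, float); b /= np.linalg.norm(b)
    return float(np.arccos(np.clip(a @ b, -1.0, 1.0)))

# During calibration, the visual axis of, say, the left eye is the ray from
# that eye to the calibration object (hypothetical variable names):
# theta_left = error_compensation_angle(optical_axis_left, visual_axis_left)
```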
It should be noted that the above process of implementing S1210 can be regarded as a calibration process, in which the user gazes at a calibration object whose position is known. The calibration process can be understood as a parameter-configuration step performed when a user first uses the head-mounted display device. The following S1211-S1215, by contrast, do not belong to the calibration process; they are the process of identifying the gazing target in the user's field of view when the user actually uses the head-mounted display device.
S1211, the head-mounted display device controls the left and right light sources to respectively illuminate the user's left eye and right eye.
S1212, the head-mounted display device acquires the image information of the user's left eye and the image information of the user's right eye collected by the image acquisition devices.
S1213, the head-mounted display device determines the user's left-eye corneal curvature center position information and left-eye pupil center position information based on the left-eye image information, and determines the user's right-eye corneal curvature center position information and right-eye pupil center position information based on the right-eye image information.
S1214, the head-mounted display device determines the sight line direction information of the left eye of the user based on the left eye corneal curvature center position information and the left eye pupil center position information of the user.
S1215, the head mounted display apparatus determines the sight line direction information of the right eye of the user based on the user right eye corneal curvature center position information and the right eye pupil center position information.
S1216, the head-mounted display device corrects the gaze direction information of the user's left eye based on the left-eye error compensation angle information.
S1217, the head-mounted display device corrects the gaze direction information of the user's right eye based on the right-eye error compensation angle information.
In the above technical solution, the left-eye error compensation angle information is used to correct the gaze direction information of the user's left eye, and the right-eye error compensation angle information is used to correct that of the right eye. This eliminates the inherent physiological deviation between the visual axis and the optical axis of the user's eyes, yields the true gaze direction and gaze point position, and thus improves the accuracy of gazing-target identification.
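The patent specifies a compensation angle but not the correction procedure itself. One plausible realization is to store, at calibration time, the full rotation that maps the measured optical axis onto the known visual axis, and apply that same rotation to every subsequent measurement; the sketch below follows that assumption:

```python
import numpy as np

def rotation_between(a, b):
    """3x3 rotation matrix taking unit vector a onto unit vector b
    (Rodrigues' formula)."""
    a = np.asarray(a, float); a /= np.linalg.norm(a)
    b = np.asarray(b, float); b /= np.linalg.norm(b)
    v = np.cross(a, b)
    c, s = float(a @ b), float(np.linalg.norm(v))
    if s < 1e-12:  # axes parallel; for the small compensation angles here,
        return np.eye(3)  # no correction is needed
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / (s * s))

# Calibration (per eye): R_left maps the measured optical axis onto the
# known visual axis toward the calibration object (hypothetical names):
# R_left = rotation_between(optical_axis_calib, visual_axis_calib)
# Use (per frame): corrected_gaze_left = R_left @ measured_optical_axis
```

The corrected left- and right-eye directions then feed the gaze-point computation of S122.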
Fig. 8 is a flowchart of another method for acquiring a gaze target in a head-mounted display device according to an embodiment of the present disclosure. Fig. 8 is a specific example of fig. 1. Fig. 9 is a block diagram of an AR head-mounted display device according to an embodiment of the present disclosure. The AR head mounted display device may perform the method of acquiring a gaze target in a head mounted display device provided in fig. 8.
Referring to fig. 9, the head-mounted display device includes a Digital Signal Processing (DSP) module and a Central Processing Unit (CPU) connected to each other. The DSP module is connected to the sensors and preprocesses their raw data. The sensors connected to the DSP module include, but are not limited to, an Inertial Measurement Unit (IMU), an RGB camera, a dual-fisheye camera, and an infrared camera. In addition, the DSP module is connected to an infrared light source. The CPU is connected to the microphone, the speaker, the optical display module, and the battery; it drives the microphone, speaker, optical display module, and the like, and performs data processing, algorithmic comparison, and so on.
Referring to fig. 8 and 9, the head-mounted display device is worn on a user, and the method includes:
s210, the head-mounted display device acquires the image information of the user field of view and identifies the target object in the user field of view.
Illustratively, after the head-mounted display device is started, a target object search is automatically performed whenever a change in the user's action is detected, and all target objects in the user's field of view are acquired.
In this step, the change in the user's action is detected by the inertial measurement unit and/or the infrared camera. User action changes include, but are not limited to, moving, turning the head, or rotating the eyes. The detected action change triggers the camera to search for target objects.
Generally, the field of view of the binocular camera is larger than the field of view of the user's eyes. In practice, however, image recognition only needs to cover the range within which the user's eyes normally observe objects, which is usually smaller than both the limit field of view of the user's eyes and the dual-fisheye field of view. The user field of view is therefore set to 130° horizontally and 90° vertically (see the sketch after this paragraph); beyond this range, the user is considered to need to turn the head or move to adjust.
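A trivial sketch of this field-of-view gate, with the 130° x 90° extent from above (the angle convention, yaw/pitch relative to the head pose, is an assumption):

```python
def in_user_fov(yaw_deg: float, pitch_deg: float) -> bool:
    """True if a direction relative to the head pose falls inside the
    assumed 130-degree horizontal by 90-degree vertical user field of view."""
    return abs(yaw_deg) <= 130.0 / 2.0 and abs(pitch_deg) <= 90.0 / 2.0

# Objects outside this gate are ignored until the user turns or moves.
```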
Specifically, the RGB camera collects RGB information of all target objects in the user's field of view and assists in completing target object identification. For example, if the AR head-mounted display device is used to introduce museum-exhibited cultural relics to the user, object recognition here means identifying which specific relic an object is.
S220, the head-mounted display device judges whether the user's field of view contains only one target object; if yes, go to S230; if not, go to S240.
S230, the head-mounted display device is controlled to display the information associated with the target object as a virtual image.
Illustratively, if the AR head-mounted display device is used to introduce museum-exhibited cultural relics to the user, the "information associated with the target object" is the introduction information of that relic.
Optionally, before this step, the method further comprises: the head-mounted display device matches the image of the target object against the images in the database to obtain the information associated with the target object.
The database stores related information for a plurality of objects, such as an image of each object and its introduction information. Since there is only one target object in the user's field of view, the target object is first determined from the field-of-view image information, and its image is then matched against the images of all objects in the database. If the similarity between the target object image and the image of some object in the database is greater than a set threshold, the target object is determined to be that object; that object's introduction information is taken as the information associated with the target object and displayed as a virtual image.
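The patent does not specify the similarity measure. A sketch assuming cosine similarity over image feature vectors (the threshold, the feature representation, and the database layout are all illustrative assumptions):

```python
import numpy as np

def match_against_database(target_feature, database, threshold=0.85):
    """Return (name, introduction) of the database object most similar to the
    target image feature, or None if no similarity exceeds the threshold.
    `database` maps object name -> (feature_vector, introduction_text)."""
    t = np.asarray(target_feature, float)
    t /= np.linalg.norm(t)
    best, best_sim = None, threshold
    for name, (feat, intro) in database.items():
        f = np.asarray(feat, float)
        sim = float(t @ (f / np.linalg.norm(f)))  # cosine similarity
        if sim > best_sim:
            best, best_sim = (name, intro), sim
    return best
```

The same matching step is reused in S270 below, with the gazing target image in place of the single target object image.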
S240, determining the distance information from each of the at least two target objects in the user's field of view to the user.
Specifically, the dual-fisheye camera completes the acquisition of environmental depth information and assists the CPU in obtaining the distance information from each target object to the user.
S250, determining the distance information from the user's gaze point to the user.
Specifically, the infrared camera and the infrared light source complete the acquisition of cornea images, assisting the CPU in detecting the corneal center and the pupil center, calculating the gaze point, and thereby obtaining the distance information from the user's gaze point to the user.
S260, determining the gazing target among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user.
S270, the head-mounted display device is controlled to display only the information associated with the gazing target as a virtual image.
Illustratively, if the AR head-mounted display device is used to introduce museum-exhibited cultural relics to the user, the "information associated with the gazing target" is the introduction information of the gazing target.
Optionally, before this step, the method further comprises: the head-mounted display device matches the image of the gazing target against the images in the database to obtain the information associated with the gazing target.
The database stores related information for a plurality of objects, such as an image of each object and its introduction information. After the gazing target is determined, the gazing target image is first extracted from the user's field-of-view image information and then matched against the images of all objects in the database. If the similarity between the gazing target image and the image of some object in the database is greater than a set threshold, the gazing target is determined to be that object, and that object's introduction information is displayed as a virtual image as the information associated with the gazing target.
With the above technical solution, no matter how many target objects are in the user's field of view, the target object the user is most probably gazing at can be accurately determined and its related information displayed. This ensures that the head-mounted display device always displays the related information of a single object, avoids the situation where the introduction information of several objects is displayed together so that the user cannot tell the items apart or obtain the introduction information of the object of interest, and improves user satisfaction.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations. However, those skilled in the art will recognize that the present disclosure is not limited by the described order of actions, since some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present disclosure.
Fig. 10 is a schematic structural diagram of an apparatus for acquiring a gazing target in a head-mounted display device in an embodiment of the present disclosure. The device for acquiring the gazing target in the head-mounted display equipment provided by the embodiment of the disclosure can be configured in the head-mounted display equipment. Referring to fig. 10, the apparatus for acquiring a gazing target in a head-mounted display device specifically includes:
a first distance determining module 310, configured to determine distance information from at least two objects in a user's field of view to the user;
a second distance determining module 320, configured to determine distance information from the gaze point of the user to the user;
an identifying module 330, configured to determine a gazing target among the at least two targets based on distance information from the at least two targets to the user and distance information from the user gaze fixation point to the user.
Further, the identifying module 330 is further configured to control the head-mounted display device to acquire image information of the field of view of the user and identify the target object in the field of view of the user before determining distance information of at least two target objects in the field of view of the user to the user.
Further, if the number of target objects in the user's field of view is greater than or equal to 2, the first distance determining module 310 performs the step of determining the distance information from the at least two target objects in the user's field of view to the user.
The apparatus further comprises a display module, configured to control the head-mounted display device to display only the information associated with the gazing target as a virtual image after the gazing target is determined among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user.
Further, the identifying module 330 is further configured to control the head-mounted display device to match the image of the gazing target with an image in a database before the head-mounted display device displays only the information associated with the gazing target through a virtual image, so as to obtain the information associated with the gazing target.
Further, the first distance determining module 310 is configured to control the head-mounted display device to determine distance information from at least two target objects in the field of view of the user to the user by using a binocular ranging method.
Further, the first distance determining module 310 is configured to control the head-mounted display device to acquire binocular image information of at least two targets by using the two fisheye cameras; the field of view of the double-fisheye camera is larger than that of the user; and controlling the head-mounted display equipment to determine the distance information from the at least two target objects in the visual field of the user to the user by using a binocular ranging method based on the binocular image information of the at least two target objects.
Further, the second distance determining module 320 is configured to:
controlling the head-mounted display equipment to determine the sight line direction information of the left eye and the sight line direction information of the right eye of the user;
controlling the head-mounted display equipment to determine the position information of the gaze fixation point of the user based on the gaze direction information of the left eye and the gaze direction information of the right eye of the user;
and controlling the head-mounted display equipment to determine distance information between the user sight gaze point and the user based on the position information of the user sight gaze point.
Further, the second distance determining module 320 is configured to: and controlling the head-mounted display equipment to determine the sight line direction information of the left eye and the sight line direction information of the right eye of the user by using a pupil corneal reflection method.
Further, the second distance determining module 320 is configured to:
controlling the head-mounted display equipment to control the left light source and the right light source to respectively irradiate the left eye and the right eye of the user;
controlling the head-mounted display equipment to acquire image information of the left eye and image information of the right eye of the user, which are acquired by the image acquisition equipment;
controlling the head-mounted display equipment to determine left eye corneal curvature center position information and left eye pupil center position information of the user based on the image information of the left eye; determining the right cornea curvature center position information and the right pupil center position information of the right eye of the user based on the image information of the right eye;
controlling the head-mounted display equipment to determine the sight line direction information of the left eye of the user based on the left eye corneal curvature center position information and the left eye pupil center position information of the user;
and controlling the head-mounted display equipment to determine the sight direction information of the right eye of the user based on the corneal curvature center position information of the right eye of the user and the pupil center position information of the right eye.
Further, the second distance determining module 320 is configured to:
controlling the head-mounted display device to acquire left-eye error compensation angle information and right-eye error compensation angle information;
controlling the head-mounted display equipment to correct the sight direction information of the left eye of the user based on the left eye error compensation angle information;
controlling the head-mounted display equipment to correct the sight line direction information of the right eye of the user based on the right eye error compensation angle information;
and controlling the head-mounted display equipment to determine the position information of the gaze fixation point of the user based on the corrected sight line direction information of the left eye and the corrected sight line direction information of the right eye.
Further, the second distance determining module 320 is configured to:
controlling the head-mounted display equipment to output a calibration instruction so that the user can observe the calibration object determined by the position information through eyes;
controlling the head-mounted display equipment to control the left light source and the right light source to respectively irradiate the left eye and the right eye of the user;
under the condition that the user watches the calibration object through two eyes, controlling the head-mounted display equipment to acquire image information of the left eye and image information of the right eye of the user, which are acquired by the image acquisition equipment;
controlling the head-mounted display equipment to respectively determine left eye corneal curvature center position information and left eye pupil center position information of the user based on the image information of the left eye; respectively determining the corneal curvature center position information and the pupil center position information of the right eye of the user based on the image information of the right eye;
controlling the head-mounted display equipment to determine the optical axis direction information of the left eye of the user corresponding to the calibration object based on the left eye corneal curvature center position information and the left eye pupil center position information; determining optical axis direction information of the right eye of the user corresponding to the calibration object based on the corneal curvature center position information of the right eye and the pupil center position information of the right eye;
controlling the head-mounted display equipment to determine visual axis direction information of left eyes and visual axis direction information of right eyes of a user corresponding to the calibration object based on the position information of the calibration object;
controlling the head-mounted display equipment to determine left-eye error compensation angle information based on the optical axis direction information and the left-eye visual axis direction information of the left eye of the user corresponding to the calibration object; and determining right eye error compensation angle information based on the optical axis direction information of the right eye of the user corresponding to the calibration object and the visual axis direction information of the right eye.
The apparatus for obtaining a gazing target in a head-mounted display device according to the embodiments of the present disclosure may perform steps performed by the head-mounted display device in the method for obtaining a gazing target in a head-mounted display device according to the embodiments of the present disclosure, and has the performing steps and beneficial effects, which are not described herein again.
Fig. 11 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring now specifically to fig. 11, a schematic diagram of an electronic device 1000 suitable for use in implementing embodiments of the present disclosure is shown. Optionally, the electronic device is an AR head-mounted display device. Such as AR glasses or AR helmets. The electronic device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 11, the electronic device 1000 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage means 1008 into a Random Access Memory (RAM) 1003, to implement the method of acquiring a gazing target in a head-mounted display device according to the embodiments described in the present disclosure. The RAM 1003 also stores various programs and data necessary for the operation of the electronic device 1000. The processing means 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1007 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 1008 including, for example, magnetic tape, hard disk, and the like; and a communication device 1009. The communications apparatus 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange information. While fig. 11 illustrates an electronic device 1000 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowcharts, thereby implementing the method of acquiring a gazing target in a head-mounted display device as described above. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1009, installed from the storage device 1008, or installed from the ROM 1002. When executed by the processing device 1001, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer readable program code is carried. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium, other than a computer readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
determining distance information from each of at least two target objects within a user's field of view to the user;
determining distance information from the user's gaze point to the user;
and determining a gazing target from among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user, as illustrated by the sketch below.
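As an aid to reading, here is a minimal sketch of the selection step above: the gazing target is taken to be the object whose distance to the user best matches the distance from the user to the gaze point. The function name, the dictionary input, and the use of Python are illustrative assumptions; the disclosure does not prescribe a particular implementation.

```python
def select_gazing_target(target_distances, gaze_distance):
    """Return the target whose distance from the user is closest to the
    distance from the user to the line-of-sight gaze point.

    target_distances: dict mapping a target identifier to its distance (meters).
    gaze_distance: distance from the user to the gaze point (meters).
    """
    return min(target_distances, key=lambda t: abs(target_distances[t] - gaze_distance))

# Usage: two candidates at 1.2 m and 3.5 m; the gaze point resolves to ~3.3 m,
# so the farther object is selected as the gazing target.
target = select_gazing_target({"poster": 1.2, "screen": 3.5}, 3.3)  # -> "screen"
```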
Optionally, when the one or more programs are executed by the electronic device, the electronic device may further perform other steps described in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a/an" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a description of exemplary embodiments of the present disclosure, provided to enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (15)
1. A method for acquiring a gazing target in a head-mounted display device, comprising:
determining distance information from each of at least two target objects within a user's field of view to the user;
determining distance information from the user's gaze point to the user;
and determining a gazing target from among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user.
2. The method of claim 1, further comprising, before the determining of the distance information from the at least two target objects within the user's field of view to the user:
the head-mounted display device acquires image information of the user's field of view and identifies target objects within the user's field of view.
3. The method of claim 2, wherein the step of determining the distance information from the at least two target objects within the user's field of view to the user is performed if the number of target objects within the user's field of view is greater than or equal to 2;
after the determining of the gazing target from among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user, the method further comprises:
controlling the head-mounted display device to display, through a virtual image, only information associated with the gazing target.
4. The method of claim 3, further comprising, before the controlling of the head-mounted display device to display, through the virtual image, only the information associated with the gazing target:
the head-mounted display device matches an image of the gazing target against images in a database to obtain the information associated with the gazing target.
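The matching step in claim 4 is not tied to any particular algorithm. One common realization is nearest-neighbor matching of an image feature vector against pre-computed database features; the sketch below assumes such feature vectors already exist (how they are extracted, e.g., keypoints or a learned embedding, is outside the claim), and all names are illustrative assumptions.

```python
import numpy as np

def match_gazing_target(query_feature, database):
    """Nearest-neighbor match by cosine similarity.

    query_feature: 1-D feature vector computed from the gazing target's image.
    database: list of (label, feature_vector, info) entries.
    Returns the label and associated info of the best-matching entry.
    """
    q = np.asarray(query_feature, dtype=float)
    q = q / np.linalg.norm(q)

    def score(entry):
        f = np.asarray(entry[1], dtype=float)
        return float(q @ (f / np.linalg.norm(f)))

    label, _, info = max(database, key=score)
    return label, info
```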
5. The method of claim 1, wherein the determining of the distance information from the at least two target objects within the user's field of view to the user comprises:
the head-mounted display device determines the distance information from the at least two target objects within the user's field of view to the user by binocular ranging.
6. The method of claim 5, wherein the head-mounted display device determining the distance information from the at least two target objects within the user's field of view to the user by binocular ranging comprises:
the head-mounted display device acquires binocular image information of the at least two target objects using dual fisheye cameras, wherein the field of view of the dual fisheye cameras is larger than the field of view of the user;
the head-mounted display device determines the distance information from the at least two target objects within the user's field of view to the user by binocular ranging based on the binocular image information of the at least two target objects.
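Claims 5 and 6 leave the binocular ranging computation itself unspecified. A textbook formulation, once the two fisheye images have been rectified to a pinhole model, is triangulation from disparity, Z = f·B/d. The sketch below is a minimal illustration of that standard relation, not the patent's own formula; the parameter names and values are assumptions.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo disparity: Z = f * B / d.

    focal_px: focal length of the rectified cameras, in pixels.
    baseline_m: distance between the two camera centers, in meters.
    disparity_px: horizontal shift of the same target point between the
        left and right images, in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Usage: f = 700 px, 10 cm baseline, 20 px disparity -> 3.5 m to the target.
distance_m = stereo_depth(700.0, 0.10, 20.0)
```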
7. The method of claim 1, wherein the determining of the distance information from the user's gaze point to the user comprises:
the head-mounted display device determines gaze direction information of the user's left eye and gaze direction information of the user's right eye;
the head-mounted display device determines position information of the user's gaze point based on the gaze direction information of the left eye and the gaze direction information of the right eye;
the head-mounted display device determines the distance information from the user's gaze point to the user based on the position information of the user's gaze point.
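Claim 7 derives a 3-D gaze point from the two per-eye gaze directions. Because two measured rays rarely intersect exactly, a common choice is the midpoint of the shortest segment between them; the sketch below implements that standard construction under assumed ray origins and unit directions, and is not the patent's prescribed method.

```python
import numpy as np

def gaze_point(o_l, d_l, o_r, d_r):
    """Midpoint of the shortest segment between the left and right gaze rays.

    o_l, o_r: 3-D origins of the left/right rays (e.g., corneal centers).
    d_l, d_r: unit direction vectors of the left/right rays.
    Returns None when the rays are near-parallel (no stable gaze point).
    """
    o_l, d_l, o_r, d_r = (np.asarray(x, dtype=float) for x in (o_l, d_l, o_r, d_r))
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None
    s = (b * e - c * d) / denom   # parameter along the left ray
    t = (a * e - b * d) / denom   # parameter along the right ray
    return (o_l + s * d_l + o_r + t * d_r) / 2.0

# Usage: eyes 6.4 cm apart, both verging on a point roughly 2 m ahead.
p = gaze_point([-0.032, 0, 0], [0.016, 0, 0.9999],
               [0.032, 0, 0], [-0.016, 0, 0.9999])
```

The distance from the user to the gaze point (the last step of claim 7) is then simply the norm of the resulting point relative to a head-frame origin.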
8. The method of claim 7, wherein the head-mounted display device determining the gaze direction information of the user's left eye and the gaze direction information of the user's right eye comprises:
the head-mounted display device determines the gaze direction information of the user's left eye and right eye using the pupil-corneal reflection method.
9. The method of claim 8, wherein the head-mounted display device determining the gaze direction information of the user's left eye and right eye by the pupil-corneal reflection method comprises:
the head-mounted display device controls a left light source and a right light source to illuminate the user's left eye and right eye, respectively;
the head-mounted display device acquires image information of the user's left eye and image information of the user's right eye collected by an image collection device;
the head-mounted display device determines left eye corneal curvature center position information and left eye pupil center position information of the user based on the image information of the left eye, and determines right eye corneal curvature center position information and right eye pupil center position information of the user based on the image information of the right eye;
the head-mounted display device determines the gaze direction information of the user's left eye based on the left eye corneal curvature center position information and the left eye pupil center position information;
the head-mounted display device determines the gaze direction information of the user's right eye based on the right eye corneal curvature center position information and the right eye pupil center position information.
10. The method of claim 9, further comprising:
the head-mounted display device acquires left eye error compensation angle information and right eye error compensation angle information;
the head-mounted display device corrects the gaze direction information of the user's left eye based on the left eye error compensation angle information;
the head-mounted display device corrects the gaze direction information of the user's right eye based on the right eye error compensation angle information;
wherein the head-mounted display device determining the position information of the user's gaze point based on the gaze direction information of the left eye and the gaze direction information of the right eye comprises:
the head-mounted display device determines the position information of the user's gaze point based on the corrected gaze direction information of the left eye and the corrected gaze direction information of the right eye.
11. The method of claim 10, wherein the head-mounted display device acquiring the left eye error compensation angle information and the right eye error compensation angle information comprises:
the head-mounted display device outputs a calibration instruction so that the user observes, with both eyes, a calibration object determined by position information;
the head-mounted display device controls the left light source and the right light source to illuminate the user's left eye and right eye, respectively;
while the user gazes at the calibration object with both eyes, the head-mounted display device acquires image information of the user's left eye and image information of the user's right eye collected by the image collection device;
the head-mounted display device determines left eye corneal curvature center position information and left eye pupil center position information of the user based on the image information of the left eye, and determines right eye corneal curvature center position information and right eye pupil center position information of the user based on the image information of the right eye;
the head-mounted display device determines optical axis direction information of the user's left eye corresponding to the calibration object based on the left eye corneal curvature center position information and the left eye pupil center position information, and determines optical axis direction information of the user's right eye corresponding to the calibration object based on the right eye corneal curvature center position information and the right eye pupil center position information;
the head-mounted display device determines visual axis direction information of the user's left eye and visual axis direction information of the user's right eye corresponding to the calibration object based on the position information of the calibration object;
the head-mounted display device determines the left eye error compensation angle information based on the optical axis direction information and the visual axis direction information of the user's left eye corresponding to the calibration object, and determines the right eye error compensation angle information based on the optical axis direction information and the visual axis direction information of the user's right eye corresponding to the calibration object.
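One way to realize claims 10 and 11 numerically is to store, per eye, the calibration-time rotation that carries the measured optical axis onto the known visual axis, and to apply that rotation to every subsequent optical-axis measurement. The sketch below shows that idea under assumed unit vectors; the patent states only that the gaze direction is corrected by the error compensation angle, so the rotation representation and all values here are editorial assumptions.

```python
import numpy as np

def rotation_between(a, b):
    """Rodrigues construction: rotation matrix taking unit vector a onto unit vector b.
    (The degenerate antiparallel case is omitted for brevity.)"""
    a = np.asarray(a, dtype=float); a = a / np.linalg.norm(a)
    b = np.asarray(b, dtype=float); b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(a @ b)
    if np.isclose(c, 1.0):
        return np.eye(3)                     # axes already coincide
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Calibration (illustrative values): measured optical axis vs. known visual axis.
optical = np.array([0.0, 0.087, 0.996])      # about 5 degrees above straight ahead
visual = np.array([0.0, 0.0, 1.0])           # toward the calibration object
R_comp = rotation_between(optical, visual)   # stored once per eye

# Runtime: correct each new optical-axis measurement into an estimated gaze direction.
corrected = R_comp @ np.array([0.05, 0.087, 0.995])
```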
12. An apparatus for acquiring a gazing target in a head-mounted display device, comprising:
a first distance determination module configured to determine distance information from each of at least two target objects within a user's field of view to the user;
a second distance determination module configured to determine distance information from the user's gaze point to the user;
a recognition module configured to determine a gazing target from among the at least two target objects based on the distance information from the at least two target objects to the user and the distance information from the user's gaze point to the user.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
14. The electronic device of claim 13, wherein the electronic device is an AR head-mounted display device.
15. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1-11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110703043.6A CN115525139A (en) | 2021-06-24 | 2021-06-24 | Method and device for acquiring gazing target in head-mounted display equipment |
PCT/CN2022/099421 WO2022267992A1 (en) | 2021-06-24 | 2022-06-17 | Method and apparatus for acquiring target of fixation in head-mounted display device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110703043.6A CN115525139A (en) | 2021-06-24 | 2021-06-24 | Method and device for acquiring gazing target in head-mounted display equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115525139A (en) | 2022-12-27
Family
ID=84545125
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110703043.6A Pending CN115525139A (en) | 2021-06-24 | 2021-06-24 | Method and device for acquiring gazing target in head-mounted display equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115525139A (en) |
WO (1) | WO2022267992A1 (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104036169B (en) * | 2014-06-06 | 2017-10-10 | 北京智谷睿拓技术服务有限公司 | Biological authentication method and biological authentication apparatus |
CN105866949B (en) * | 2015-01-21 | 2018-08-17 | 成都理想境界科技有限公司 | The binocular AR helmets and depth of field adjusting method of the depth of field can be automatically adjusted |
WO2016115872A1 (en) * | 2015-01-21 | 2016-07-28 | 成都理想境界科技有限公司 | Binocular ar head-mounted display device and information display method thereof |
CN108227914B (en) * | 2016-12-12 | 2021-03-05 | 财团法人工业技术研究院 | Transparent display device, control method using the same, and controller thereof |
CN108592865A (en) * | 2018-04-28 | 2018-09-28 | 京东方科技集团股份有限公司 | Geometric measurement method and its device, AR equipment based on AR equipment |
CN109558012B (en) * | 2018-12-26 | 2022-05-13 | 北京七鑫易维信息技术有限公司 | Eyeball tracking method and device |
- 2021-06-24: CN application CN202110703043.6A filed as CN115525139A (en), status pending
- 2022-06-17: WO application PCT/CN2022/099421 filed as WO2022267992A1 (en)
Also Published As
Publication number | Publication date |
---|---|
WO2022267992A1 (en) | 2022-12-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||