CN110968182A - Positioning tracking method and device and wearable equipment thereof - Google Patents

Positioning tracking method and device and wearable equipment thereof

Info

Publication number
CN110968182A
Authority
CN
China
Prior art keywords
marker
position information
image display
image
relative spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811159998.4A
Other languages
Chinese (zh)
Inventor
胡永涛
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201811159998.4A
Publication of CN110968182A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a positioning and tracking method, a positioning and tracking device, and a wearable device. The positioning and tracking method is applied to a non-wearable image display device and includes the following steps: acquiring an image containing a marker, the marker being disposed on the wearable device; identifying the marker in the image, and determining relative spatial position information between the wearable device and the image display device according to the marker; and rendering a virtual object according to the relative spatial position information and displaying the virtual object in the image display device. In the above positioning and tracking method, the image display device can obtain the position information of the marker by acquiring an image containing the marker integrated on the wearable device, so as to track the wearable device.

Description

Positioning tracking method and device and wearable equipment thereof
Technical Field
The present disclosure relates to the field of image display, and in particular, to a positioning and tracking method and device and a wearable device.
Background
With the development of science and technology, machine intelligence and information intelligence are increasingly widespread, and technologies that recognize user images through image acquisition devices, such as machine vision or virtual vision, to achieve human-computer interaction are becoming ever more important. Augmented Reality (AR) technology constructs virtual objects that do not exist in the real environment by means of computer graphics and visualization, accurately fuses the virtual objects into the real environment through image recognition and positioning technology, merges the virtual objects with the real environment into a whole with the aid of a display device, and presents the result to the user for a realistic sensory experience. The first technical problem to be solved by augmented reality technology is how to accurately fuse a virtual object into the real world, that is, how to make the virtual object appear at the correct position in the real scene with the correct angular pose, so as to produce a strong sense of visual realism. In conventional technology, the rendering angle of the virtual object is usually fixed, and the display viewing angle of the virtual object can only be changed after the user manipulates the virtual object through a controller, which is inconvenient for the user.
Disclosure of Invention
Embodiments of the present application aim to provide a positioning and tracking method and device, and a wearable device thereof.
In one aspect, an embodiment of the present application provides a positioning and tracking method applied to a non-wearable image display device, the positioning and tracking method including: acquiring an image containing a marker, the marker being disposed on a wearable device; identifying the marker in the image, and determining relative spatial position information between the wearable device and the image display device according to the marker; and rendering a virtual object according to the relative spatial position information and displaying the virtual object in the image display device.
Wherein, in some embodiments, the wearable device is glasses, and the marker is disposed on a frame of the glasses; when the glasses are worn, determining the relative spatial position information between the glasses and the image display device according to the marker includes: determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the marker.
Wherein, in some embodiments, determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the markers comprises: determining the sub-markers comprised by the marker from the image of the marker; and positioning the eye region of the user wearing the glasses according to the sub-markers, and determining relative spatial position information between the eyes of the user wearing the glasses and the image display equipment.
Wherein, in some embodiments, prior to rendering the virtual object according to the relative spatial position information, the method further includes: acquiring an eye image of the user; and extracting eye features from the eye image, and determining the relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the eye features.
Wherein, in some embodiments, determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the eye features comprises: comparing eye features between adjacent frames of the eye image to obtain eye position changes of a user wearing the glasses; calculating the motion data of the eyes according to the change of the positions of the eyes; and determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the movement data of the eyes.
Wherein, in some embodiments, after determining the relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the eye features, the method further comprises: correcting the estimated position information according to the calibration position information to obtain relative spatial position information between the eyes of the user wearing the glasses and the image display equipment; the estimated position information is relative spatial position information determined according to the eye image, and the calibration position information is relative spatial position information determined according to the marker.
Wherein, in some embodiments, the method further comprises: acquiring an eye image acquired in real time through a first thread, and obtaining estimated position information according to the eye image; acquiring an image containing a marker through a second thread, and acquiring calibration position information according to the marker; and comparing the estimated position information with the calibration position information, and correcting the estimated position information according to the calibration position information when the estimated position information is inconsistent with the calibration position information.
In another aspect, the present application further provides a positioning and tracking device, including: an image acquisition module, configured to acquire an image containing a marker, the marker being disposed on the wearable device; a position relation determination module, configured to identify the marker in the image and determine relative spatial position information between the wearable device and the image display device according to the marker; and a display module, configured to render a virtual object according to the relative spatial position information and display the virtual object in the image display device.
In another aspect, the present application further provides a wearable device for assisting positioning and tracking, which includes a frame and a marker disposed on the frame, where the marker is identified by a terminal device and is used to determine the relative positional relationship between the eyes of a user and the terminal device.
In some embodiments, the frame includes a left frame and a right frame arranged side by side, the marker includes a first sub-marker and a second sub-marker, and the first sub-marker and the second sub-marker are respectively disposed on two sides of the left frame;
in some embodiments, the frame includes a left frame and a right frame arranged side by side, the marker further includes a third sub-marker and a fourth sub-marker, and the third sub-marker and the fourth sub-marker are respectively disposed on two sides of the right frame;
wherein, in some embodiments, the frame includes a left frame and a right frame arranged side by side, the wearable device further includes a nose bridge portion connected between the left frame and the right frame, and the marker further includes a fifth sub-marker disposed on the nose bridge portion.
Wherein, in some embodiments, the wearable device further comprises a filter layer, the filter layer covering the marker.
In the positioning and tracking method provided by the embodiments of the present application, the non-wearable image display device can acquire the position information of the marker by acquiring an image containing the marker integrated on the wearable device, so as to track the wearable device. The image display device can determine the relative spatial position relationship between the image display device and the wearable device according to the image containing the marker, and display the constructed virtual object at the corresponding viewing angle according to the relative spatial position relationship, so that the display viewing angle of the virtual object changes with the positional relationship between the wearable device and the image display device, which makes it convenient for the user to observe the virtual object from multiple angles and improves the interactivity between the user and the virtual content.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic view of an application scenario of a positioning and tracking method according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a wearable device according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a wearable device according to another embodiment of the present application;
Fig. 4 is a schematic structural diagram of a wearable device according to another embodiment of the present application;
Fig. 5 is a schematic flowchart of a positioning and tracking method according to an embodiment of the present application;
Fig. 6 is a functional block diagram of a positioning and tracking device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present. When a component is referred to as being "disposed on" another component, it can be directly on the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, an interactive system 10 for virtual content provided by an embodiment of the present application is shown, where the interactive system 10 for virtual content includes: an image display device 300 and a wearable device 400. In the present embodiment, the wearable device 400 is provided with a marker 450 (see fig. 2). In use, the marker 450 on the wearable device 400 may be within the field of view of the image display device 300 so that the image display device 300 may capture an image of the marker 450 and identify the marker 450.
In the embodiments of the present application, the image display device 300 may be a mobile device such as a mobile phone or a tablet, or a desktop virtual reality/augmented reality display device. When the image display device 300 is a desktop display device, it may be an integrated display device, or it may work with an external device; for example, the image display device 300 may be externally connected to an intelligent terminal such as a mobile phone, that is, the image display device 300 may be plugged into or connected to an external device (such as a mobile phone or a tablet computer), and the virtual object is displayed in the image display device 300.
In the embodiment of the present application, the image display device 300 is a desktop display device and includes an image capturing device 301 and a display device 303, which may together form the desktop display device. Specifically, the display device 303 includes a control center 3031 and a display 3033; the display 3033 may be a transflective lens, and the control center 3031 is configured to display a virtual object on the display 3033 so that a user can observe the virtual object on the display 3033. Since the user can observe the environment in front through the display 3033 while seeing the virtual object on it, the scene perceived by the user's eyes is a superimposed scene in which the virtual content and the environment in front are combined. The image capturing device 301 is electrically connected to the display device 303 and is further configured to acquire environmental information within its field of view.
The wearable device 400 is configured to be worn by a user, so that the image display device 300 can determine a display angle of the virtual object according to the position information of the wearable device 400. The marker 450 is integrated into the wearable device 400. The marker 450 may be within the field of view of the image capture device 301 of the image display device 300, i.e., the image capture device 301 may capture an image of the marker 450. The image of the marker 450 is stored in the image display apparatus 300, and is used for the image display apparatus 300 to determine the relative position information with the marker 450 according to the image of the marker 450, so as to render and display the virtual object.
The marker 450 may be a marker image including at least one sub-marker having a specific shape. Of course, the specific form of the marker 450 is not limited in the embodiments of the present application; it is only necessary that the marker 450 can be recognized by the image display device 300.
As shown in fig. 1, when the marker 450 on the wearable device 400 is located within the field of view of the image capturing device 301 of the image display device 300, the image capturing device 301 can capture an image of the marker 450. According to the at least one collected sub-marker distributed on the marker 450, information such as the relative positional relationship and the rotational relationship between the marker 450 and the image display device 300 can be determined, so as to render and display a virtual object, such as the building model 600 shown in fig. 1, that is, the virtual object corresponding to the marker 450. In this way, the user can observe the virtual object based on the marker, and observing it from different positions presents different viewing angles. As shown in fig. 1, the virtual object 600 presented by the image display device 300 is a simulated building model: a user wearing the wearable device 400 at viewing position A observes the virtual object 600 on the image display device 300 at a first viewing angle (e.g., a predetermined axonometric view), and after moving to position B the user observes the left side of the simulated building model relative to the user.
In view of the foregoing scenario, the execution subject of the positioning and tracking method provided in the embodiments of the present application may be the image display device described above. In this positioning and tracking method, the image display device can acquire an image containing the marker integrated on the wearable device to obtain the position information of the marker, so as to track the wearable device. The image display device can determine the relative spatial position relationship between the image display device and the wearable device according to the image containing the marker, and render the virtual object at the corresponding viewing angle according to the relative spatial position relationship, so that the display viewing angle of the virtual object changes with the positional relationship between the wearable device and the image display device, which makes it convenient for the user to observe the virtual object from multiple angles.
Referring to fig. 5, in an embodiment, the present application provides a positioning and tracking method applied to the image display device, the method including steps S101 to S105.
Step S101: an image containing a marker is acquired, wherein the marker is disposed on a wearable device.
Step S103: the method includes identifying markers in the image and determining relative spatial position information between the wearable device and the image display device according to the markers.
Step S105: rendering the virtual object according to the relative spatial position information and displaying the virtual object in the image display device.
In an embodiment, based on the above positioning and tracking method, the present application further provides a positioning and tracking method applied to the above image display device, the method including steps S101 to S105, described in detail below.
Step S101: an image containing a marker is acquired, wherein the marker is disposed on a wearable device.
Further, an image containing a marker is captured by an image capturing device of the image display device, wherein the marker may be integrated in the wearable device, for example, a pattern fixedly presented on the surface of the wearable device, or a pattern selectively presented on the surface of the wearable device (e.g., a pattern displayed after the wearable device is powered on).
Further, the wearable device may be glasses for wearing by a user. At least two markers can be arranged on the glasses and are respectively arranged on the left glass frame and the right glass frame of the glasses so as to be used for marking the positions of the eyes of a user. In some embodiments, at least three markers may be disposed on the glasses, and the at least three markers are disposed at the left frame, the right frame, and the bridge of the nose of the glasses, so as to identify a plane where the frame of the glasses substantially lies, facilitate fitting of a contour of the glasses, and obtain a spatial angle and a rotational posture of the glasses. It is understood that the one or more markers may be provided on the wearable device, and the wearable device may be other devices, such as but not limited to a hat with markers, a necklace with markers, a watch, a shirt, etc.
Step S103: the method includes identifying markers in the image and determining relative spatial position information between the wearable device and the image display device according to the markers.
In some embodiments, the wearable device may include at least one marker, and in this case, step S103 may include: the method comprises the steps of identifying at least one marker contained in a collected image, calculating the relative position and orientation relation between the at least one marker and the image display device, and determining the relative spatial position relation between the image display device and the wearable device.
Further, according to the display state, size and display angle of the marker in the image, the position information and orientation information of the marker relative to the image display device are calculated, so that the relative spatial position relationship between the wearable device and the image display device is determined. The image display device can directly take the position information and orientation information of the marker relative to the image display device as the relative spatial position relationship between the wearable device and the image display device.
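As an illustration of how the position and orientation of the marker relative to the image display device could be computed from a single image, the following sketch uses a perspective-n-point solver. The use of OpenCV, the feature-point layout, and the function names are assumptions for illustration; the patent does not prescribe a particular algorithm.

```python
# Minimal sketch (assumption: OpenCV-based PnP) of computing the marker's
# position and orientation relative to the camera of the image display device.
import numpy as np
import cv2

# Hypothetical 3D coordinates (metres) of four marker feature points in the
# marker's own coordinate frame; z = 0 because the marker is planar.
MARKER_POINTS_3D = np.array([
    [-0.02, -0.02, 0.0],
    [ 0.02, -0.02, 0.0],
    [ 0.02,  0.02, 0.0],
    [-0.02,  0.02, 0.0],
], dtype=np.float32)

def marker_pose(image_points_2d, camera_matrix, dist_coeffs):
    """image_points_2d: (4, 2) float32 pixel coordinates of the detected
    feature points, ordered to match MARKER_POINTS_3D.
    Returns (rotation 3x3, translation (3,)) of the marker relative to the
    camera, i.e. the relative spatial position information used later for
    rendering, or None if the solver fails."""
    ok, rvec, tvec = cv2.solvePnP(
        MARKER_POINTS_3D, image_points_2d, camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # axis-angle -> rotation matrix
    return rotation, tvec.reshape(3)
```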
In one embodiment, the wearable device may be glasses, and when the glasses are worn, step S103 may include: and determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the markers.
In some embodiments, the marker disposed on the glasses may include a plurality of sub-markers, each of which may be a pattern having a shape, and each sub-marker may include one or more feature points, where the shape of the feature points is not limited and may be a dot, a ring, a triangle, or another shape. The plurality of sub-markers can be respectively arranged at different positions on the frame of the glasses and can jointly form one marker. The image display device acquires an image containing the plurality of sub-markers, identifies the sub-markers, and obtains the feature information of each sub-marker and the arrangement position relationship between the sub-markers, thereby obtaining the relative spatial position relationship between the glasses and the image display device.
At this time, step S103 may include:
step S1031: determining the sub-markers comprised by the marker from the image of the marker;
step S1032: and positioning the eye region of the user wearing the glasses according to the sub-markers, and determining relative spatial position information between the eyes of the user wearing the glasses and the image display equipment.
Therefore, by extracting the sub-markers of the image of the marker and tracking the sub-markers to determine the eye region of the user, the relative spatial position information between the eyes of the user and the image display device can be determined more accurately. The sub-markers may be understood as images of markers or partial images of markers at specific positions, for example, the images or partial images of markers at the outer sides of the eyes of the user are regarded as sub-markers to facilitate identification of the eye region of the user; alternatively, the image or portion of the image of the marker that is between the eyes of the user is treated as a sub-marker to facilitate locating the eye region of the user.
Further, the multiple sub-markers set on the glasses worn by the user may be the same sub-marker or different sub-markers, and different sub-markers may have different feature information, which may include, but is not limited to, the shape, color, number of included feature points, and the like of the sub-markers. The image display device identifies a plurality of sub-markers arranged on the glasses, and can acquire the arrangement position relationship among the sub-markers, wherein the arrangement position relationship refers to the relative position, the arrangement sequence and the like among the sub-markers, so that the posture, the position and the like of the markers can be determined according to the arrangement position relationship among the sub-markers, the size and the like of the sub-markers, the relative spatial position relationship between the glasses and the image display device is obtained, or the relative spatial position information between the eyes of a user wearing the glasses and the image display device is obtained.
In one embodiment, in addition to the sub-markers of one marker being respectively disposed at different positions on the frame of the glasses, a complete marker including a plurality of sub-markers may be disposed on the frame of the glasses. For example, a marker may be disposed at a middle position of the glasses, or a plurality of markers may be respectively disposed at different positions on the frame of the glasses, so that the relative spatial position relationship between the plurality of markers and the image display device, or the relative spatial position information between the eyes of the user wearing the glasses and the image display device, can be obtained.
In one embodiment, relative spatial position information between the user's eyes and the image display device can be determined according to the user's eye images, so that the speed of tracking the user's eyes is increased, and the fluency is improved. At this time, step S103 may further include:
step S1035: acquiring an eye image of a user;
step S1036: the eye features of the eye image are extracted, and the relative spatial position information between the eyes of the user wearing the glasses and the image display device is determined according to the eye features.
Furthermore, the eye region of the user can be tracked in real time according to the eye images of the user, so that the motion data of the user's eyes can be calculated, the motion trend of the user's eyes can be predicted, and the efficiency of acquiring the relative spatial position information can be improved. At this time, step S1036 may include: acquiring eye images of the user wearing the glasses in real time; comparing eye features between adjacent frames of the eye images to obtain the change in the eye position of the user wearing the glasses; calculating the motion data of the eyes according to the change in the eye position; and determining the relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the motion data of the eyes.
Furthermore, the increment of the eye position of the user wearing the glasses is obtained by comparing the eye features between adjacent frames of the eye images, and the motion data of the eyes is estimated from these increments. For example, if the coordinates of the eye position have already been determined to be (X, Y, Z) at the current frame N, and comparing the same eye features between the eye images of frame N and frame N+1 yields an incremental change of (X1, Y1, Z1), then the coordinates of the eye position at frame N+1 can be determined to be (X+X1, Y+Y1, Z+Z1). By analogy, the eye position coordinates at frames N+1, N+2, ..., N+m can be calculated one by one, and the motion data of the eyes can be derived. The coordinates (X, Y, Z) at the current frame N may be determined in step S103, that is, from the relative spatial position information between the wearable device and the image display device determined according to the marker; this position is used as a reference, and the eye position coordinates at frames N+1, N+2, ..., N+m are then calculated on top of this reference by comparing the eye features between adjacent frames of the eye images. In this way, the data processing steps can be simplified, the speed of tracking the user's eyes can be increased, and the fluency can be improved.
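The incremental update described above can be sketched as follows; the feature representation and function names are illustrative assumptions, with the marker-derived position at frame N serving as the reference.

```python
# Sketch of accumulating frame-to-frame eye-position increments on top of a
# marker-derived reference position (assumed inputs: matched 3D eye features).
import numpy as np

def update_eye_position(prev_position, prev_features, curr_features):
    """prev_position: (x, y, z) at frame N.
    prev_features / curr_features: arrays of matched eye feature coordinates
    from frames N and N+1. Returns the estimated position at frame N+1."""
    increment = np.mean(np.asarray(curr_features) - np.asarray(prev_features),
                        axis=0)                       # (X1, Y1, Z1)
    return np.asarray(prev_position) + increment      # (X+X1, Y+Y1, Z+Z1)

def track_eye_positions(reference_position, feature_sequence):
    """Accumulate increments over frames N+1 .. N+m starting from the
    marker-derived reference position at frame N."""
    position = np.asarray(reference_position, dtype=float)
    positions = [position]
    for prev_feats, curr_feats in zip(feature_sequence, feature_sequence[1:]):
        position = update_eye_position(position, prev_feats, curr_feats)
        positions.append(position)
    return positions
```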
Step S1037: the relative spatial position information determined from the eye image (steps S1035 to S1036) is corrected according to the relative spatial position information determined from the marker, to obtain accurate relative spatial position information between the eyes of the user wearing the glasses and the image display device. Further, the estimated position information is corrected according to the calibration position information to acquire the relative spatial position information between the eyes of the user wearing the glasses and the image display device, where the estimated position information is the relative spatial position information determined according to the eye image, and the calibration position information is the relative spatial position information determined according to the marker.
Specifically, in some embodiments, step S1037 described above may be performed in a dual-thread form, and at this time, step S1037 may include: acquiring an eye image acquired in real time through a first thread, and obtaining estimated position information according to the eye image; acquiring an image containing a marker through a second thread, and acquiring calibration position information according to the marker; and comparing the estimated position information with the calibration position information, and correcting the estimated position information according to the calibration position information when the estimated position information is inconsistent with the calibration position information.
In one embodiment, the image display device may fuse the position information obtained by the two threads, so as to obtain the relative spatial position information between the eyes of the user and the image display device; the fusion may be performed in various manners, which are not limited herein. For example, the image display device may obtain the relative spatial position information for the latest frame (i.e., the estimated position information) through the first thread, obtain the relative spatial position information corresponding to the same frame (i.e., the calibration position information) through the second thread, and fuse the two, for example, by taking the average of the relative spatial position information obtained by the first thread and that obtained by the second thread, or by performing a weighted calculation with different weights, to obtain the final relative spatial position information.
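A minimal sketch of the two-thread correction and fusion described above is given below. The class, the locking scheme, and the weighting factor are assumptions for illustration; the patent leaves the fusion manner open.

```python
# Sketch of fusing estimated positions (first, eye-image thread) with
# calibration positions (second, marker thread). Weight value is assumed.
import threading
import numpy as np

class PositionFuser:
    def __init__(self, marker_weight=0.7):
        self._lock = threading.Lock()
        self._calibration = None          # latest marker-derived position
        self._marker_weight = marker_weight

    def submit_calibration(self, position):
        """Called by the second (marker) thread whenever a new
        calibration position becomes available."""
        with self._lock:
            self._calibration = np.asarray(position, dtype=float)

    def correct(self, estimated):
        """Called by the first (eye-image) thread for each frame: returns
        the estimated position corrected with the latest calibration
        position when the two are inconsistent."""
        estimated = np.asarray(estimated, dtype=float)
        with self._lock:
            calibration = self._calibration
        if calibration is None or np.allclose(estimated, calibration):
            return estimated
        w = self._marker_weight
        return w * calibration + (1.0 - w) * estimated
```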
In one embodiment, since the second thread obtains the relative spatial position information at a lower frame rate and a lower speed, there may not be relative spatial position information obtained directly with the same frame number as the latest frame, and the second thread may estimate the relative spatial position information according to the relative spatial position information obtained from the previous frame, so as to obtain the relative spatial position information with the same frame number as the latest frame.
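When the second thread has not yet produced a calibration position for the latest frame, it can extrapolate from earlier marker-derived results as described above. The constant-velocity assumption below is illustrative only.

```python
# Sketch of extrapolating the marker-derived (calibration) position to the
# latest frame, assuming roughly constant velocity between marker frames.
import numpy as np

def extrapolate_calibration(history, target_frame):
    """history: list of (frame_index, position) pairs from the marker
    thread, ordered by frame index. Returns an estimated calibration
    position for target_frame, or None if there is no history."""
    if not history:
        return None
    if len(history) == 1:
        return np.asarray(history[-1][1], dtype=float)
    (f0, p0), (f1, p1) = history[-2], history[-1]
    velocity = (np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)) \
        / max(f1 - f0, 1)
    return np.asarray(p1, dtype=float) + velocity * (target_frame - f1)
```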
Therefore, determining the relative spatial position information between the eyes of the user and the image display device by acquiring the image of the marker (recorded as the calibration position information) makes the relative spatial position information more accurate, while determining the relative spatial position information by acquiring the eye images of the user (recorded as the estimated position information) increases the speed of acquiring the relative spatial position information. Calibrating the estimated position information according to the calibration position information yields more accurate relative spatial position information, so that both speed and accuracy are taken into account, and the fluency of the positioning and tracking method is improved.
It is understood that, in some specific embodiments, the execution order of the above steps is not limited. For example, steps S1031 to S1032 may be executed first, and then steps S1035 to S1037; alternatively, steps S1031 to S1032 and steps S1035 to S1036 may be executed at the same time, and then step S1037; alternatively, steps S1035 to S1036 may be executed first, and then steps S1031 to S1032 and step S1037.
Further, the relative spatial position information between the eyes of the user and the image display device obtained in the above steps may include, but is not limited to: relative position information, relative orientation information, relative angle information, relative rotation information, and posture information.
Step S105: rendering the virtual object according to the relative spatial position information and displaying the virtual object in the image display device.
In some embodiments, after the relative spatial position information is obtained, model rendering data corresponding to the relative spatial position information may be obtained, and the virtual object may be rendered according to the model rendering data; the model rendering data may include rendering coordinates, color data, texture data, a rendering perspective, and the like. Specifically, after the image display device obtains the relative spatial position relationship with the marker, the rendering coordinates of the virtual object may be determined according to the relative spatial position relationship, and the virtual object may then be rendered and displayed according to the rendering coordinates, where the rendering coordinates may be used to represent the relative spatial position relationship between the virtual object and the image display device in the virtual space. The image display device may convert the relative spatial position relationship in the real space into relative coordinate data in the virtual space, and calculate the rendering coordinates of the virtual object in the virtual space according to the relative coordinate data, so that the virtual object can be displayed accurately.
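The conversion from the real-space relative pose to rendering coordinates in virtual space can be sketched as follows; the uniform scale factor and the 4x4 model-matrix convention are assumptions, since the patent does not fix a particular rendering pipeline.

```python
# Sketch of building a model matrix (rendering coordinates) for the virtual
# object from the real-space relative pose of the marker/wearable device.
import numpy as np

def rendering_coordinates(rotation, translation, scale=1.0):
    """rotation (3x3) and translation (3,) describe the wearable device
    relative to the image display device's camera. Returns a 4x4 model
    matrix placing the virtual object consistently with that pose."""
    model = np.eye(4)
    model[:3, :3] = np.asarray(rotation)
    model[:3, 3] = scale * np.asarray(translation)   # real -> virtual space
    return model
```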
In some embodiments, an angular relationship between the wearable device and the image display device is determined according to the relative spatial position information, and a display perspective of the virtual object is determined according to the angular relationship, then step S105 may include:
step S1051: acquiring a relative space angle between the wearable device and the image display device according to the relative space position information;
step S1052: and determining a display visual angle of the virtual object according to a relative space angle between the wearable device and the image display device and a preset corresponding rule, rendering the virtual object, and displaying the virtual object in the image display device.
The preset correspondence rule is a correspondence between relative spatial angles and display viewing angles of the virtual object. For example, when the virtual object is a stereoscopic virtual model, as shown in fig. 1, the virtual object 600 presented by the image display device 300 is a simulated building model. When a user wearing the wearable device 400 observes the virtual object 600 on the image display device 300 at position A, and the relative spatial angle between the wearable device 400 and the image display device 300 is a first angle, the display viewing angle of the virtual object is determined to be the first viewing angle according to the relative spatial angle and the preset correspondence rule, so that the user observes a view of the simulated building model at the first viewing angle (for example, a preset axonometric view). When the user moves to position B and the relative spatial angle between the wearable device 400 and the image display device 300 is a second angle, the display viewing angle of the virtual object is determined to be the second viewing angle according to the relative spatial angle and the preset correspondence rule, so that the user observes a structural view of the simulated building model relative to the left side of the user (for example, a preset northwest view).
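One possible form of the preset correspondence rule is a table of angle bands mapped to display viewing angles, as sketched below. The band boundaries and viewing-angle labels are illustrative assumptions, not values from the patent.

```python
# Sketch of a preset correspondence rule: horizontal relative spatial angle
# (degrees) -> display viewing angle of the virtual object. Bands are assumed.
PERSPECTIVE_RULE = [
    (  5.0,  60.0, "first viewing angle (e.g. preset axonometric view)"),
    ( 60.0, 120.0, "front view"),
    (120.0, 175.0, "second viewing angle (e.g. preset northwest view)"),
]

def display_perspective(relative_angle_deg, rule=PERSPECTIVE_RULE):
    """Return the display viewing angle for the given relative spatial
    angle, or None if the angle lies outside every band."""
    for low, high, perspective in rule:
        if low <= relative_angle_deg < high:
            return perspective
    return None
```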
Furthermore, when the display viewing angle of the virtual object is determined according to the relative spatial angle and the virtual object is rendered, the relative spatial angle can be calculated in real time and the virtual object rendered in real time, so that the user can observe the virtual object changing to the corresponding viewing angle while moving, which helps improve the viewing experience of the image display.
Further, in some embodiments, switching of the display viewing angle of the virtual object may be triggered directly by setting a threshold of the relative spatial angle. Specifically, for example, when the relative spatial angle between the wearable device and the image display device does not fall within the threshold range of the relative spatial angle (e.g., is below a lower threshold or above an upper threshold), the display viewing angle of the virtual object is determined to be the opposite viewing angle, so that the user can observe the view of the opposite side of the virtual object. The reason is that the display of the image display device usually has a limited viewable angle, such as 5 to 175 degrees; if the user views the display from outside the range of 5 to 175 degrees, it is difficult to see the image shown on the display. As a result, when viewing the virtual object, the user can at best observe only the left and right views of the virtual object relative to the user, while the back view of the virtual object relative to the user is difficult to observe. To address this defect, switching of the display viewing angle of the virtual object is triggered directly by setting a threshold of the relative spatial angle: when the relative spatial angle between the wearable device and the image display device is below the lower threshold or above the upper threshold, the display viewing angle of the virtual object is directly determined to be the back viewing angle relative to the user, and the virtual object is rendered accordingly, so that the virtual object is presented to the user at a viewing angle rotated by 180 degrees, which improves the user's viewing experience. Accordingly, in short, step S1052 of the above positioning and tracking method may include:
step S1053: and judging whether the relative space angle between the wearable device and the image display device falls into a preset threshold range of the relative space angle, if so, executing step S1054, and if not, executing step S1055.
Step S1054: determining the display visual angle of the virtual object as a front visual angle, determining the specific display visual angle of the virtual object according to the relative space angle between the wearable device and the image display device and a preset corresponding rule, rendering the virtual object, and displaying the virtual object in the image display device.
It should be understood that the front viewing angle is a viewing angle that can be observed within a maximum area in front of a display screen of the image display device by a user wearing the wearable device, that is, a viewing angle that can be observed by the user wearing the wearable device when the user moves within a visible range of the display screen. For a specific virtual object, if the virtual object is a three-dimensional virtual model (such as the simulation building model in fig. 1), the front view angle thereof can be understood as: a northeast viewing angle, a southeast viewing angle, a south viewing angle, a southwest viewing angle, a west viewing angle, and a northwest viewing angle. Accordingly, the virtual object may have a back view, which is a view of the virtual model other than the front view, and for a specific virtual object, if the virtual object is a stereoscopic virtual model (such as the simulated building model in fig. 1), the back view may be understood as a north view.
Step S1055: and determining the display visual angle of the virtual object as a back visual angle, rendering the virtual object, and displaying the virtual object in the image display equipment.
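Steps S1053 to S1055 can be sketched as a simple threshold check built on the correspondence-rule sketch above; the 5 to 175 degree range follows the example in the text, while everything else is an illustrative assumption.

```python
# Sketch of the threshold-based switch of steps S1053-S1055: inside the
# viewable range the front viewing angle follows the preset correspondence
# rule; outside it the back viewing angle (180-degree rotation) is used.
LOWER_THRESHOLD_DEG = 5.0
UPPER_THRESHOLD_DEG = 175.0

def choose_viewing_angle(relative_angle_deg):
    if LOWER_THRESHOLD_DEG <= relative_angle_deg <= UPPER_THRESHOLD_DEG:
        # Step S1054: front viewing angle, refined by the correspondence rule
        # (display_perspective is the sketch given after the rule description).
        return display_perspective(relative_angle_deg)
    # Step S1055: outside the range, present the back viewing angle.
    return "back view (virtual object rotated by 180 degrees)"
```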
It is understood that, in the above step S105, the angular relationship between the wearable device and the image display device is determined according to the relative spatial position information, and the display viewing angle of the virtual object is determined according to the angular relationship; the description above takes the change of the horizontal angle between the wearable device and the image display device as an example. It should be understood that, in other embodiments, when the angle between the wearable device and the image display device changes in the vertical direction, the display viewing angle of the virtual object is similarly determined according to the relative spatial angle between the wearable device and the image display device and the preset correspondence rule, and the virtual object is rendered accordingly, so that the user can conveniently observe a bottom view or a top view of the virtual object.
In the positioning and tracking method provided by the embodiment of the application, the image display device can acquire the position information of the marker by acquiring the image containing the marker integrated on the wearable device so as to track the wearable device. The image display device can determine the relative spatial position relationship between the image display device and the wearable device according to the image containing the marker, and displays the constructed virtual object at the corresponding visual angle according to the relative spatial position relationship, so that the display visual angle of the virtual object can be changed along with the position relationship between the wearable device and the image display device, and a user can conveniently observe the virtual object from multiple angles.
Referring to fig. 6, in one embodiment, the present application provides a positioning and tracking device 100 for virtual content, which is used to perform the above positioning and tracking method. The positioning and tracking device 100 includes an image acquisition module 101, a position relation determination module 103, and a display module 105. The image acquisition module 101 is configured to acquire an image of a marker, the position relation determination module 103 is configured to determine the relative spatial position relationship between the image display device and the wearable device according to the image containing the marker, and the display module 105 is configured to render and display a virtual object according to the relative spatial position relationship. It is to be appreciated that the modules described above can be program modules stored on a computer-readable storage medium. In an embodiment of the present application, the positioning and tracking device 100 is stored in a memory of the image display device 300 and configured to be executed by one or more processors of the image display device 300. The operation of each module is as follows:
the image acquisition module 101 is used for acquiring an image containing a marker, wherein the marker is disposed on the wearable device. Further, the image capturing module 101 captures an image containing the marker through an image capturing device of the image display apparatus.
The position relation determination module 103 is configured to identify the marker in the image, and determine the relative spatial position information between the wearable device and the image display device according to the marker. The position relation determination module 103 includes a first position information determination unit 1031, a second position information determination unit 1033, and a correction unit 1035.
The first position information determination unit 1031 is configured to determine relative spatial position information between the eyes of the user wearing the glasses and the image display apparatus according to the markers. Specifically, the first position information determination unit 1031 is configured to identify at least one marker included in the captured image, calculate a relative position and orientation relationship between the at least one marker and the image display device, and determine a relative spatial position relationship between the image display device and the wearable device. The first position information determination unit 1031 is further configured to determine sub-markers included in the markers according to the images of the markers; and positioning the eye region of the user wearing the glasses according to the sub-markers, and determining relative spatial position information between the eyes of the user wearing the glasses and the image display equipment.
The second position information determination unit 1033 is configured to determine relative spatial position information between the user's eyes and the image display device according to the user's eye image, so as to improve the speed of tracking the user's eyes and improve fluency. Specifically, the second position information determination unit 1033 is configured to extract an eye feature of the eye image from the eye image of the user, and determine relative spatial position information between the eyes of the user wearing the glasses and the image display device from the eye feature.
The correction unit 1035 is configured to correct the relative spatial position information determined by the second position information determination unit 1033 according to the relative spatial position information determined by the first position information determination unit 1031. Specifically, the correction unit 1035 is configured to: record the relative spatial position information determined according to the image of the eye region as estimated position information, and record the relative spatial position information determined according to the marker as calibration position information; and correct the estimated position information according to the calibration position information to acquire the relative spatial position information between the eyes of the user wearing the glasses and the image display device. Further, the correction unit 1035 is configured to compare the estimated position information with the calibration position information, and correct the estimated position information according to the calibration position information when the two are inconsistent.
The display module 105 is configured to render the virtual object according to the relative spatial position information and display the virtual object in the image display device. In some embodiments, the display module 105 is configured to determine the angular relationship between the wearable device and the image display device according to the relative spatial position information, and determine the display viewing angle of the virtual object according to the angular relationship. The display module 105 may include an angle determination unit 1051, a viewing angle determination unit 1053, a rendering unit 1055, and a display unit 1057.
In some embodiments, after the position relation determining module 103 obtains the relative spatial position information, the rendering unit 1055 is configured to obtain model rendering data corresponding to the relative spatial position information, and render the virtual object according to the model rendering data. Specifically, after the position relation determining module 103 obtains the relative spatial position relation with the marker, the rendering unit 1055 is configured to determine rendering coordinates of the virtual object according to the relative spatial position relation, and render and display the virtual object according to the rendering coordinates, where the rendering coordinates may be used to represent the relative spatial position relation between the virtual object in the virtual space and the image display device. The display unit 1057 is used to display the rendered virtual objects in the image display apparatus.
Further, when the rendering unit 1055 determines the display viewing angle of the virtual object according to the relative spatial angle and renders the virtual object, the relative spatial angle may be calculated in real time and the virtual object rendered in real time, and the display unit 1057 may display the rendered virtual object in real time, so that the user can observe the virtual object changing to the corresponding viewing angle while moving, which helps improve the viewing experience of the image display.
The angle determination unit 1051 is configured to obtain the relative spatial angle between the wearable device and the image display device according to the relative spatial position information.
The viewing angle determination unit 1053 is configured to determine the display viewing angle of the virtual object according to the relative spatial angle between the wearable device and the image display device and the preset correspondence rule. Further, in some embodiments, the viewing angle determination unit 1053 is further configured to trigger direct switching of the display viewing angle of the virtual object by setting a threshold of the relative spatial angle.
Referring again to fig. 2, in one embodiment, the present application provides a wearable device 400 for assisting in location tracking. The wearable device 400 of the present application is eyewear for wearing by a user and includes a frame 410 and a marker 450 disposed on the frame 410. The marker 450 is recognized by a terminal device (e.g., the image display device 300) and used to determine the relative position between the eyes of the user and the terminal device.
In the present embodiment, the frame 410 includes a left temple 411 and a right temple 413, and a frame 415 disposed between the left temple 411 and the right temple 413. In other embodiments, the wearable device 400 may be clip-on glasses; in that case the wearable device 400 does not include temples but includes a clip connected to the frame 415, and the wearable device 400 is clipped directly onto the user's prescription glasses through the clip, which is convenient for nearsighted users.
The frame 415 includes a left frame 4151, a right frame 4153, and a nose bridge 4155, which are arranged in parallel, the left frame 4151 is connected to the left temple 411, the right frame 4153 is connected to the right temple 413, and the nose bridge 4155 is connected between the left frame 4151 and the right frame 4153.
Marker 450 is disposed on the frame 415. In particular, the marker 450 is disposed on an outer side of the frame 415, which should be understood as the side of the frame 415 that faces away from the user's eyes when the wearable device 400 is worn. In the present embodiment, the marker 450 includes a plurality of markers, and the plurality of markers 450 can be divided into several groups according to their arrangement; for example, the plurality of markers 450 includes a first group 451, a second group 453, and a third group 455, where the first group 451 is disposed on the left frame 4151, the second group 453 is disposed on the right frame 4153, and the third group 455 is disposed on the nose bridge 4155. The third group 455 is different from the first group 451 and the second group 453, so that the terminal device (e.g., the image display device 300) can determine the spatial location of the wearable device 400 by identifying the three groups of markers 450. Alternatively, in other embodiments, the third group 455 is different from both the first group 451 and the second group 453.
Further, to facilitate fitting the contour of the wearable device 400, the markers 450 may be evenly distributed along the contour of the frame 415. In this case, the first group 451 may include one or more sub-markers, for example, a first sub-marker 4511 and a second sub-marker 4513, which are respectively disposed on two sides of the left frame 4151. Specifically, in the embodiments shown in fig. 2 and fig. 3, the first sub-marker 4511 and the second sub-marker 4513 are disposed on the side of the left frame 4151 away from the nose bridge portion 4155, with the first sub-marker 4511 at an upper end of the left frame 4151 and the second sub-marker 4513 at a lower end of the left frame 4151. The upper end described above should be understood as the end of the frame 415 near the user's eyebrows when the user wears the wearable device 400; correspondingly, the lower end described above should be understood as the end of the frame 415 away from the user's eyebrows when the user wears the wearable device 400.
Accordingly, the second group 453 may include one or more sub-markers, for example a third sub-marker 4531 and a fourth sub-marker 4533 respectively disposed on two sides of the right frame 4153. Specifically, in the embodiment shown in fig. 2 and 3, the third sub-marker 4531 and the fourth sub-marker 4533 are disposed on the side of the right frame 4153 away from the nose bridge 4155, with the third sub-marker 4531 at the upper end of the right frame 4153 and the fourth sub-marker 4533 at the lower end of the right frame 4153.
The third group 455 may include a fifth sub-marker 4551 disposed on the nose bridge 4155. The fifth sub-marker 4551, the first sub-marker 4511, the second sub-marker 4513, the third sub-marker 4531, and the fourth sub-marker 4533 may all be different from one another (as shown in fig. 2), may all be identical (as shown in fig. 3), or at least two of them may be identical (as shown in fig. 4).
Referring to fig. 4, in some embodiments, the first sub-marker 4511, the second sub-marker 4513, the third sub-marker 4531, and the fourth sub-marker 4533 are identical, while the fifth sub-marker 4551 is different from the other four, so that the five sub-markers collectively outline the wearable device 400. By setting the four peripheral sub-markers as identical markers and the roughly central sub-marker (the fifth sub-marker 4551) as a different marker, the image display device 300 can more easily recognize the outline of the wearable device 400 and locate its approximate center, which facilitates positioning and tracking of the wearable device 400. Each of the first sub-marker 4511, the second sub-marker 4513, the third sub-marker 4531, and the fourth sub-marker 4533 includes a background 457 and a feature point 459, the feature point 459 differing from the background so as to be easily recognized by the image display device 300. The fifth sub-marker 4551 includes a background 457 and two feature points 459. In the present embodiment, the feature point 459 is substantially circular. By providing a small number of feature points 459 on a sub-marker, the area of each feature point 459 can be relatively large (e.g., occupying 1/3 or more of the area of the sub-marker), making the sub-marker more readily recognizable by the image display device 300.
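Purely as an illustrative sketch and not the patented implementation, large circular feature points of this kind could be extracted from a camera frame with standard contour analysis (OpenCV is used here for convenience); the thresholding strategy, the min_area_ratio value, and the circularity cutoff are assumptions made for this example.

```python
import cv2
import numpy as np

def find_feature_points(gray_frame, roi_area=None, min_area_ratio=1.0 / 3.0):
    """Detect roughly circular feature points that stand out from the
    sub-marker background, as described above.

    gray_frame     -- single-channel camera image (8-bit)
    roi_area       -- area (in pixels) of the sub-marker region, if known
    min_area_ratio -- assumed minimum fraction of the sub-marker area that a
                      feature point occupies (illustrative value)
    """
    _, binary = cv2.threshold(gray_frame, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4.x signature: returns (contours, hierarchy)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if roi_area is not None and area < min_area_ratio * roi_area:
            continue  # too small to be a feature point of this marker design
        (x, y), radius = cv2.minEnclosingCircle(contour)
        circularity = area / (np.pi * radius * radius + 1e-6)
        if circularity > 0.7:  # keep only roughly circular blobs
            centers.append((float(x), float(y)))
    return centers
```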
In this embodiment, the marker 450 is a planar marker integrated into the frame 415 and may be a predetermined symbol or pattern. After being recognized by the image display device 300, the marker 450 is used to determine the relative spatial relationship between the wearable device 400 and the image display device 300. In this embodiment, the marker 450 is a solid structure provided on the frame 415; in other embodiments, the marker 450 may also be a predetermined symbol or graphic that is displayed when energized.
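For illustration only: once the image centers of the five sub-markers have been matched to their known positions on the frame 415, the relative spatial relationship could be recovered with a standard perspective-n-point solver. The 3D coordinates below are placeholders rather than the actual frame geometry, and the function name estimate_relative_pose is an assumption introduced for this sketch.

```python
import cv2
import numpy as np

# Placeholder 3D coordinates (meters) of the five sub-markers in the
# coordinate system of the frame 415; the real geometry is device-specific.
MARKER_POINTS_3D = np.array([
    [-0.07,  0.02, 0.0],   # first sub-marker (upper left)
    [-0.07, -0.02, 0.0],   # second sub-marker (lower left)
    [ 0.07,  0.02, 0.0],   # third sub-marker (upper right)
    [ 0.07, -0.02, 0.0],   # fourth sub-marker (lower right)
    [ 0.00,  0.00, 0.0],   # fifth sub-marker (nose bridge)
], dtype=np.float32)

def estimate_relative_pose(image_points_2d, camera_matrix, dist_coeffs):
    """Return the rotation matrix and translation vector of the wearable
    device relative to the camera of the image display device."""
    image_points_2d = np.asarray(image_points_2d, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(MARKER_POINTS_3D, image_points_2d,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)
    return rotation, tvec   # relative spatial position information
```

In such a sketch, the translation vector corresponds to the relative spatial position information used for rendering, and the rotation can likewise yield the relative spatial angle used by the view angle determining unit.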
It will be appreciated that the specific pattern of the marker 450 is not limited, as long as it can be captured by the image capture device 301 of the image display device 300. For example, the pattern of the marker 450 may be a geometric figure or a combination of geometric figures (e.g., a circle, triangle, rectangle, ellipse, wavy line, straight line, or curve), a predetermined pattern (e.g., an animal head, or a common schematic symbol such as a traffic sign), or any other pattern that the image capture device 301 can resolve to form the marker; it is not limited to what is described in this specification. It will also be appreciated that in other embodiments, the marker 450 may be a bar code, a two-dimensional code, or the like.
Further, in some embodiments, the wearable device 400 may also include a filter layer (not shown) laminated to the side of the marker 450 facing away from the frame 415.
The filter layer is used to filter out light other than the light emitted toward the marker 450 by the illumination device of the image display device 300, so that the marker 450 is not affected by ambient light when reflecting light and can therefore be identified more easily. In some embodiments, the filtering performance of the filter layer may be set according to actual needs. For example, when the marker 450 enters the field of view of the image capture device 301 to be recognized, the image capture device 301 usually uses an auxiliary light source to assist image capture in order to improve recognition efficiency. When an infrared light source is used for assistance, the filter layer filters out light other than infrared light (such as visible light and ultraviolet light), so that only infrared light can pass through the filter layer and reach the marker 450. When the auxiliary light source projects infrared light onto the marker 450, the filter layer blocks ambient light other than the infrared light, so that only the infrared light reaches the marker 450 and is reflected by the marker 450 to the near-infrared image capture device, thereby reducing the influence of ambient light on the identification process.
In one embodiment, the present application further provides a computer readable storage medium having stored therein program code that can be invoked by a processor to perform the method described in the method embodiments above. It should be noted that, in the embodiments provided in the present specification, the above-mentioned embodiments may be combined with each other, and features of different embodiments may also be combined with each other without conflict; the present application is not limited to the embodiments described above.
According to the positioning and tracking method and the wearable device described above, the virtual object is displayed according to the marker, so that the virtual object can be visually presented in front of the user and controlled in real time through the wearable device. This facilitates interaction between the user and the virtual object, makes the information carried by the virtual object easier to obtain, and improves the user experience.
In the description herein, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the various embodiments or examples and the features of different embodiments or examples described in this specification, provided that they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A positioning and tracking method, applied to a non-wearable image display device, the method comprising the following steps:
acquiring an image containing a marker, the marker being disposed on a wearable device;
identifying a marker in the image, and determining relative spatial position information between the wearable device and the image display device according to the marker; and
rendering a virtual object according to the relative spatial position information, and displaying the virtual object in the image display device.
2. The method of claim 1, wherein the wearable device is eyeglasses, the marker being disposed on a frame of the eyeglasses;
when the glasses are worn, the determining relative spatial position information between the glasses and the image display device according to the markers comprises:
determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the markers.
3. The method of claim 2, wherein the determining relative spatial position information between the eyes of the user wearing the eyewear and the image display device from the markers comprises:
determining the sub-markers comprised by the marker from the image of the marker;
and positioning the eye region of the user wearing the glasses according to the sub-markers, and determining relative spatial position information between the eyes of the user wearing the glasses and the image display device.
4. The method of claim 2, wherein prior to said rendering virtual objects according to said relative spatial position information, the method further comprises:
acquiring an eye image of the user;
and extracting eye features of the eye image, and determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the eye features.
5. The method of claim 4, wherein determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the eye features comprises:
comparing eye features between adjacent frames of the eye image to obtain eye position changes of a user wearing the glasses;
calculating motion data of the eyes according to the eye position change; and
and determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the motion data of the eyes.
6. The method of claim 4 or 5, further comprising, after said determining relative spatial position information between the eyes of the user wearing the glasses and the image display device according to the eye features:
correcting the estimated position information according to the calibration position information to obtain relative spatial position information between the eyes of the user wearing the glasses and the image display device;
the estimated position information is relative spatial position information determined according to the eye image, and the calibration position information is relative spatial position information determined according to the marker.
7. The method of claim 6, wherein the method further comprises:
acquiring, through a first thread, an eye image captured in real time, and obtaining the estimated position information according to the eye image;
acquiring an image containing the marker through a second thread, and obtaining the calibration position information according to the marker;
and comparing the estimated position information with the calibration position information, and correcting the estimated position information according to the calibration position information when the estimated position information is inconsistent with the calibration position information.
8. A positioning and tracking apparatus, comprising:
an image acquisition module, configured to acquire an image containing a marker, the marker being disposed on a wearable device;
a positional relation determining module, configured to identify the marker in the image and determine relative spatial position information between the wearable device and an image display device according to the marker; and
a display module, configured to render a virtual object according to the relative spatial position information and display the virtual object in the image display device.
9. A wearable device for assisting in positioning and tracking, comprising a frame, characterized by further comprising a marker disposed on the frame, wherein, after being recognized by a terminal device, the marker is used to determine the relative positional relationship between the eyes of a user and the terminal device.
10. The wearable device of claim 9, wherein the frame comprises a left frame and a right frame arranged side by side; the marker comprises a first sub-marker and a second sub-marker respectively disposed on two sides of the left frame;
the marker further comprises a third sub-marker and a fourth sub-marker respectively disposed on two sides of the right frame;
and the wearable device further comprises a nose bridge portion connected between the left frame and the right frame, and the marker further comprises a fifth sub-marker disposed on the nose bridge portion.
11. The wearable device of claim 9 or 10, further comprising a filter layer covering the marker.
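As a hedged illustration of the two-thread correction scheme of claims 6 and 7 (not the claimed implementation), the sketch below keeps an estimated position updated by a fast eye-image thread and corrects it whenever the marker-based calibration thread produces an inconsistent value; positions are simplified to scalars, and the tolerance value and class name PositionTracker are assumptions.

```python
import threading

class PositionTracker:
    """Sketch of the scheme in claims 6-7: a first thread updates an estimated
    position from eye images, while a second thread computes a calibration
    position from marker images and corrects the estimate when they disagree."""

    def __init__(self, tolerance=0.01):
        self.tolerance = tolerance          # allowed inconsistency before correcting
        self.estimated_position = None      # from eye images (first thread)
        self.calibration_position = None    # from marker images (second thread)
        self._lock = threading.Lock()

    def update_estimate(self, position_from_eye_image):
        with self._lock:
            self.estimated_position = position_from_eye_image

    def update_calibration(self, position_from_marker):
        with self._lock:
            self.calibration_position = position_from_marker
            if (self.estimated_position is not None and
                    abs(self.estimated_position - self.calibration_position)
                    > self.tolerance):
                # Inconsistent: correct the estimate with the calibration value.
                self.estimated_position = self.calibration_position

    def current_position(self):
        with self._lock:
            return self.estimated_position
```

In use, the real-time eye-image acquisition loop and the marker-image acquisition loop would each run in their own thread, calling update_estimate and update_calibration respectively, while the rendering step reads current_position.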
CN201811159998.4A 2018-09-30 2018-09-30 Positioning tracking method and device and wearable equipment thereof Pending CN110968182A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811159998.4A CN110968182A (en) 2018-09-30 2018-09-30 Positioning tracking method and device and wearable equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811159998.4A CN110968182A (en) 2018-09-30 2018-09-30 Positioning tracking method and device and wearable equipment thereof

Publications (1)

Publication Number Publication Date
CN110968182A true CN110968182A (en) 2020-04-07

Family

ID=70029116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811159998.4A Pending CN110968182A (en) 2018-09-30 2018-09-30 Positioning tracking method and device and wearable equipment thereof

Country Status (1)

Country Link
CN (1) CN110968182A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111522441A (en) * 2020-04-09 2020-08-11 北京奇艺世纪科技有限公司 Space positioning method and device, electronic equipment and storage medium
CN111522441B (en) * 2020-04-09 2023-07-21 北京奇艺世纪科技有限公司 Space positioning method, device, electronic equipment and storage medium
CN112292688A (en) * 2020-06-02 2021-01-29 焦旭 Motion detection method and apparatus, electronic device, and computer-readable storage medium
WO2021243572A1 (en) * 2020-06-02 2021-12-09 焦旭 Motion detection method and apparatus, electronic device and computer readable storage medium
CN112489082A (en) * 2020-12-03 2021-03-12 海宁奕斯伟集成电路设计有限公司 Position detection method, position detection device, electronic equipment and readable storage medium
CN112733840A (en) * 2020-12-28 2021-04-30 歌尔光学科技有限公司 Earphone positioning method, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
JP6095763B2 (en) Gesture registration device, gesture registration program, and gesture registration method
CN110968182A (en) Positioning tracking method and device and wearable equipment thereof
CN108603749B (en) Information processing apparatus, information processing method, and recording medium
JP6177872B2 (en) I / O device, I / O program, and I / O method
US9678349B2 (en) Transparent type near-eye display device
JP6333801B2 (en) Display control device, display control program, and display control method
CN110825234A (en) Projection type augmented reality tracking display method and system for industrial scene
JP6250024B2 (en) Calibration apparatus, calibration program, and calibration method
CN106959759A (en) A kind of data processing method and device
WO2014128751A1 (en) Head mount display apparatus, head mount display program, and head mount display method
CN208722146U (en) Wearable device for auxiliary positioning tracking
JP6250025B2 (en) I / O device, I / O program, and I / O method
JP6061334B2 (en) AR system using optical see-through HMD
CN111491159A (en) Augmented reality display system and method
JP3050808B2 (en) Positioning device
CN116797643A (en) Method for acquiring user fixation area in VR, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination