CN117237436A - Method and system for determining a relative pose relationship between at least two users - Google Patents

Method and system for determining a relative pose relationship between at least two users

Info

Publication number
CN117237436A
Authority
CN
China
Prior art keywords
display device
mounted display
user
head mounted
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311021703.8A
Other languages
Chinese (zh)
Inventor
李江亮
周硙
李科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yimu Technology Co ltd
Original Assignee
Beijing Yimu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yimu Technology Co ltd
Priority to CN202311021703.8A
Publication of CN117237436A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a method and an interaction system for determining a relative pose relationship between at least two users. The method comprises: marking, by a controller, in the field of view of a first user wearing a first head mounted display device, the spatial position of a second user wearing a second head mounted display device; marking, by the controller, in the field of view of the first user, the spatial position of the object, or portion of an object, at which the second user aims a sight presented on a display screen of the second head mounted display device; and determining a relative position and attitude relationship between the first user and the second user based on the spatial position of the second user and the spatial position of the object, or portion of an object, at which the sight is aimed.

Description

Method and system for determining a relative pose relationship between at least two users
Technical Field
The present application relates to the field of information interaction, and in particular, to a method and system for determining a relative pose relationship between at least two users.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art to the present disclosure.
In recent years, augmented reality (AR) and virtual reality (VR) technologies have gained increasing popularity. In AR/VR applications, the display and interaction devices currently in common use are handheld smart devices (e.g., cell phones) and head mounted display devices (e.g., AR/VR glasses, smart glasses, AR/VR helmets). Virtual content may be presented to the user on the display screen of the handset or of the head mounted display device, and the user may interact with that content. The virtual content may be, for example, an icon, a picture, text, an emoticon, a virtual person, a virtual three-dimensional object, a three-dimensional model, an animation, or a video.
In an AR/VR application, a user of a head mounted display device needs to determine his or her position and/or attitude information (collectively referred to hereinafter as "pose information"), and the application determines from this pose information which virtual content to present and how to present it. In some cases, two or more users may wish to experience an AR/VR application together, for example in multiplayer gaming, multiplayer social networking, multiplayer teaching, or doctor-patient communication, which requires binding multiple head mounted display devices, or their users, to one coordinate system. One conventional positioning and attitude-determination method is to construct a three-dimensional point cloud of the scene in advance and determine the pose of the head mounted display device in the scene based on that point cloud. Another is to deploy one or more positioning base stations or positioning markers in the scene in advance and determine the device's pose through them. However, these methods are complicated and costly, and they require advance preparation or deployment (building the point cloud, or installing the base stations or markers); they therefore lack flexibility and cannot be used anytime and anywhere.
Disclosure of Invention
One aspect of the invention relates to a method for determining a relative pose relationship between at least two users, wherein a first user wears a first head mounted display device and has a controller configured for it, and a second user wears a second head mounted display device capable of presenting a sight on one or more of its display screens, the sight being usable to aim at any position in space. The method comprises: marking, by the controller, in the field of view of the first user wearing the first head mounted display device, the spatial position of the second user wearing the second head mounted display device; marking, by the controller, in the field of view of the first user, the spatial position of the object, or portion of an object, at which the second user aims the sight on the display screen of the second head mounted display device; and determining a relative position and attitude relationship between the first user and the second user based on the spatial position of the second user and the spatial position of the object, or portion of an object, at which the sight is aimed.
One aspect of the application relates to a storage medium in which a computer program is stored which, when executed by a processor, can be used to implement the method according to the application.
One aspect of the application relates to an electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, is operable to carry out the method of the application.
One aspect of the application relates to an interactive system comprising a controller and a head mounted display device that can be used to implement the method of the application.
The scheme of the application can rapidly determine the relative pose relationship among multiple users wearing head mounted display devices without relying on a scene three-dimensional point cloud, positioning base stations, or positioning markers. It therefore has a very low implementation cost and, because it can be carried out anytime and anywhere, offers great convenience and flexibility.
Drawings
Embodiments of the application are further described below with reference to the accompanying drawings, in which:
FIG. 1 illustrates an interactive system implemented based on a handheld smart device (e.g., a cell phone) and a head mounted display device, according to one embodiment;
FIG. 2 illustrates an interactive system implemented based on a handheld smart device (e.g., a cell phone) and a head mounted display device, according to one embodiment;
FIG. 3 illustrates an interactive system implemented based on a handheld smart device (e.g., a cell phone) and a head mounted display device, according to one embodiment;
FIG. 4 illustrates a method for determining a relative pose relationship between at least two users according to one embodiment;
FIG. 5 illustrates a method for determining a relative pose relationship between at least two users, according to one embodiment;
FIG. 6 illustrates a method for determining a relative pose relationship between at least two users according to one embodiment;
FIG. 7 illustrates a method for determining a relative pose relationship between at least two users according to one embodiment; and
FIG. 8 illustrates an image of the actual effect, according to one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by the following examples with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
FIG. 1 illustrates an interactive system implemented based on a handheld smart device (e.g., a cell phone) and a head mounted display device, including the cell phone 102 and head mounted display device (e.g., AR/VR glasses) 103 of a user 101, according to one embodiment. The user 101 is located in a scene, holding the mobile phone 102 in one hand and wearing the head mounted display device 103 on the head, and can experience various augmented reality or virtual reality applications through the two devices. In this system, the handset 102 may act as a controller for the head mounted display device 103, implementing interactive functions in augmented reality or virtual reality applications, such as selecting virtual objects, manipulating virtual objects, and inputting information. The head mounted display device 103 has a display screen for presenting the various virtual objects of the application to the user 101.
The real-time position and/or attitude of the handset and the head mounted display device in space may be determined by measuring or tracking their position and/or attitude changes through sensors built into the handset 102 and the head mounted display device 103 (e.g., acceleration sensors, magnetic sensors, orientation sensors, gravity sensors, gyroscopes, cameras, etc.), using methods known in the art (e.g., inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.). In one embodiment, one or both of the handset 102 and the head mounted display device 103 are provided with six-degree-of-freedom (6DoF) position and attitude tracking capability.
In one embodiment, the virtual objects of an augmented reality or virtual reality application may also be presented on the handset 102 (e.g., on its display screen). In one embodiment, the user 101 may also interact with information in the application through the head mounted display device 103 itself, for example by voice, keys, a touch bar, gestures, or eye movement.
In this system, the mobile phone 102 is mainly used for interactive operation, and the head mounted display device 103 is mainly used for information presentation. This makes full use of the respective advantages of the two devices, which cooperate to provide convenient information display and interaction.
In some cases, two or more users may wish to experience an AR/VR application together, such as multiplayer games, multiplayer social interaction, multiplayer teaching, or doctor-patient communication, where each participating user wears a head mounted display device to enable interaction between users. For example, in a multiplayer AR/VR game, one user can observe virtual bullets fired by other users; in multi-person AR/VR social interaction, a user can observe virtual labels that other users have set for him or her, or messages and graffiti that other users have left at specific positions in space; in multi-person AR/VR teaching, students can simultaneously observe the operations a teacher performs on virtual three-dimensional objects in space; and in AR/VR doctor-patient communication, the patient can observe a three-dimensional tissue model presented by the doctor and the various operations the doctor performs on it.
FIG. 2 illustrates an interactive system implemented based on a handheld smart device (e.g., a cell phone) and head mounted display devices, including the cell phone 102 and head mounted display device 103 of a user 101 and the head mounted display device 105 of a user 104, according to one embodiment. The user 101 may be, for example, a doctor, and the user 104 a patient. The user 101 may place a three-dimensional model of the tissue of the user 104 (e.g., a three-dimensional model of the skull) into space and view it through the head mounted display device 103, while using the cell phone 102 to perform various operations on the model, such as moving, zooming, rotating, labeling, and pointing. The user 104 can observe the model at the corresponding position in space through his or her own head mounted display device and watch the doctor's operations on it in real time.
In order to achieve the above-mentioned multi-person AR/VR interaction, it is necessary to bind the head mounted display devices worn by the multiple users to the same spatial coordinate system, that is, to be able to determine the relative position and attitude relationship between the multiple head mounted display devices or their users. In the present application, position and attitude together may also be referred to as "pose". FIG. 3 illustrates an interactive system implemented based on a handheld smart device (e.g., a mobile phone) and head mounted display devices; portions of it are similar to FIG. 2 and are not described again here. A sight (an aiming mark, like a reticle) may be presented on one or more display screens of the head mounted display device 105 of the user 104; by moving or rotating the head, the user 104 may aim the sight at any object in space, at some portion of an object, or at any position. The sight may be of any shape, such as a cross, a dot, or a circle, and may be presented, for example, at the center of the display screen. In one embodiment, the sight may also be presented at another location on the display screen. The system shown in FIG. 3 also includes an object 108, which may be any type of object; it may be fixed in position or movable, such as a finger of the user 101.
To determine the relative position and attitude relationship between the head mounted display device 103 of the user 101 and the head mounted display device 105 of the user 104, the user 104 may be asked to aim the sight on the display screen of the head mounted display device 105 at one position in space, such as the tip of the object 108, by rotating the head. The user 101 may then operate the handset 102 to mark, in the field of view of the head mounted display device 103, the spatial position 106 of the user 104 or of the head mounted display device 105, and the spatial position 107 at which the sight is aimed. From these two spatial positions, the relative position and attitude relationship between the head mounted display device 103 and the head mounted display device 105 (i.e., between the user 101 and the user 104) can be determined, binding the user 104 or the head mounted display device 105 into a common coordinate system.
In FIG. 3, the marked spatial position 106 of the user 104 or of the head mounted display device 105 is shown as the spatial position of the right display screen of the head mounted display device 105. It will be appreciated, however, that it may also be the spatial position of any other component of the device, or any other spatial position from which the position of the user 104 or of the head mounted display device 105 can be derived or calculated, such as the position of the left display screen, the position of the left or right eye of the user 104, the position of the center between the left and right display screens, the center position of the head mounted display device 105, or the position of the head of the user 104.
In one embodiment, the sight is presented on one display screen of the head mounted display device 105, and the marked spatial position 106 is the position of that display screen, or the position of the user's eye corresponding to that display screen.
FIG. 4 illustrates a method, according to one embodiment, for determining a relative position and attitude relationship between at least two users by means of a handheld smart device (e.g., a cell phone), thereby determining the relative position and attitude relationship between the users' head mounted display devices. The method comprises the following steps:
Step 401: mark, by the handheld smart device, in the field of view of a first user wearing a first head mounted display device, the spatial position of a second user wearing a second head mounted display device.
Taking FIG. 3 as an example, the user 101 may use the mobile phone 102 (i.e., the handheld smart device) to mark, in the field of view of his head mounted display device 103 (i.e., the first head mounted display device), the spatial position of the head mounted display device 105 (i.e., the second head mounted display device) of the user 104. This spatial position represents the spatial position of the user 104 and is shown at 106 in FIG. 3.
The spatial position of the user 104 or of the head mounted display device 105 may be marked in the field of view of the head mounted display device 103 through the handset 102 in various possible ways. In one embodiment, the movement of a virtual object (e.g., a cursor or pointer) in the field of view of the head mounted display device 103 may be controlled by changes in the spatial position of the handset 102 until the virtual object moves to the spatial position of the head mounted display device 105, i.e., until the two spatial positions coincide. Note that a virtual object in the field of view of the head mounted display device 103 has specific three-dimensional coordinates in the spatial coordinate system of the head mounted display device 103; if those coordinates change, the virtual object's position in the field of view changes accordingly, and vice versa. A user wearing the head mounted display device 103 may thus use the cell phone 102 to move the virtual object in the field of view until it coincides with an object, or part of an object, in space. In one embodiment, when judging whether the virtual object coincides with the head mounted display device 105, the user 101 may move the body or head from side to side as appropriate to verify that the two coincide in three-dimensional space rather than merely along one viewing direction: if they coincide only along a single viewing direction, they may still lie at different depths relative to the observer and therefore not coincide in spatial position.
When the virtual object has moved to the spatial position of the head mounted display device 105, the user 101 may issue an instruction (e.g., by pressing a key of the cell phone 102, touching its screen, or by voice) to record the current position of the virtual object, thereby marking the spatial position of the head mounted display device 105. When this spatial position is marked, only its position information may be recorded, or a virtual object (e.g., a small sphere) may additionally be placed at the spatial position to present it visually to the user 101.
The recorded spatial position information of the user 104 or of the head mounted display device 105 may be position information relative to the head mounted display device 103 of the user 101, for example three-dimensional coordinates in the spatial coordinate system of the head mounted display device 103.
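To make the marking procedure concrete, the following is a minimal sketch of how a controller-driven cursor could implement this step; read_phone_pose, render_cursor, and confirm_pressed are hypothetical stand-ins for a phone SDK and an HMD renderer, not interfaces defined by this application.

```python
import numpy as np

GAIN = 2.0  # amplify hand motion so small phone movements sweep the scene

def mark_position(read_phone_pose, render_cursor, confirm_pressed):
    """Move a virtual cursor in the first HMD's frame by tracking the phone's
    displacement, and record the cursor's 3D position when the user confirms
    (key press, screen touch, voice, etc.)."""
    cursor = np.zeros(3)               # cursor position, first HMD's coordinates
    prev = read_phone_pose()           # tracked 3D position of the phone
    while not confirm_pressed():
        cur = read_phone_pose()
        cursor += GAIN * (cur - prev)  # relative control, like a 3D mouse
        prev = cur
        render_cursor(cursor)          # draw the cursor in the HMD's view
    return cursor                      # the marked spatial position
```

The same routine can be invoked twice: once for the spatial position 106 of the device, and once for the aimed-at spatial position 107 of step 402 below.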
Step 402: mark, by the handheld smart device, in the field of view of the first user wearing the first head mounted display device, the spatial position at which the second user aims the sight on the display screen of the second head mounted display device.
Taking FIG. 3 as an example, the user 101 may use the mobile phone 102 to mark, in the field of view of his head mounted display device 103, the spatial position at which the user 104 aims the sight on the display screen of the head mounted display device 105 (i.e., the spatial position of the object, or portion of an object, at which the sight is aimed). This spatial position is shown at 107 in FIG. 3.
The spatial position at which the sight of the head mounted display device 105 is aimed may be marked in the field of view of the head mounted display device 103 through the cell phone 102 in various possible ways. In one embodiment, the movement of a virtual object (e.g., a cursor or pointer) in the field of view of the head mounted display device 103 may be controlled by changes in the spatial position of the handset 102 until it moves to, i.e., coincides with, that aimed-at spatial position. In one embodiment, when judging whether the virtual object coincides with the aimed-at spatial position, the user 101 may move the body or head from side to side as appropriate to ensure that the two coincide in three-dimensional space, rather than coinciding only along one viewing direction while lying at different depths.
When the virtual object has moved to the spatial position at which the sight of the head mounted display device 105 is aimed, the user 101 may issue an instruction (e.g., by pressing a key of the mobile phone 102, touching its screen, or by voice) to record the current position of the virtual object, thereby marking the aimed-at spatial position. When marking this spatial position, only its position information may be recorded, or a virtual object (e.g., a small sphere) may additionally be placed there to present it visually to the user 101.
The recorded aimed-at spatial position information may be position information relative to the head mounted display device 103 of the user 101, for example three-dimensional coordinates of the aimed-at spatial position in the spatial coordinate system of the head mounted display device 103.
In one embodiment, the object, or portion of an object, to be aimed at with the sight on the display screen of the head mounted display device 105 may be agreed in advance by the user 101 and the user 104, or the user 101 may inform the user 104 of it, or the user 104 may inform the user 101 of it.
Step 403: determine, based on the spatial position of the second user wearing the second head mounted display device and the spatial position at which the sight is aimed, the relative position and attitude relationship between the first user wearing the first head mounted display device and the second user wearing the second head mounted display device.
After the spatial position 106 of the user 104 or of the head mounted display device 105 and the spatial position 107 at which the sight of the head mounted display device 105 is aimed have been marked, the relative position and attitude relationship between the user 101 and the user 104, which is also the relative position and attitude relationship between the head mounted display device 103 and the head mounted display device 105, can be determined from these two spatial positions.
Specifically, from the marked spatial position 106, the position of the user 104 or of the head mounted display device 105 relative to the head mounted display device 103 can be determined. From the marked spatial position 106 together with the aimed-at spatial position 107, the gaze direction or orientation of the user 104 or of the head mounted display device 105 can be determined. Since a user wearing a head mounted display device typically keeps both eyes on roughly the same horizontal line (i.e., the user does not normally rotate the head significantly about the line-of-sight axis), the attitude of the user 104 or of the head mounted display device 105 can be determined from this gaze direction or orientation. The relative position and attitude relationship between the user 101 wearing the head mounted display device 103 and the user 104 wearing the head mounted display device 105 can thus be determined.
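As a concrete illustration of step 403, the sketch below computes the second device's pose in the first device's coordinate frame from the two marked points under the no-roll assumption stated above; the axis convention (y up, z forward) is an assumption for illustration only.

```python
import numpy as np

def relative_pose(p_device, p_aim, up=np.array([0.0, 1.0, 0.0])):
    """Pose of the second HMD in the first HMD's frame, from the marked
    device position (106) and aimed-at position (107), assuming the wearer
    does not roll the head about the line of sight."""
    forward = p_aim - p_device
    forward /= np.linalg.norm(forward)   # gaze / facing direction
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)       # degenerate if the gaze is vertical
    head_up = np.cross(forward, right)
    # Columns are the second device's right/up/forward axes expressed in
    # the first device's coordinate system.
    R = np.column_stack((right, head_up, forward))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p_device    # 4x4 homogeneous relative pose
    return T
```

Note the degenerate case in the sketch: if the second user aims straight up or down, the no-roll assumption no longer fixes the heading, which is one motivation for the gravity-based refinement described later.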
In one embodiment, when marking the spatial position of the second user or the second head mounted display device, or the spatial position at which the second user aims the sight on the display screen of the second head mounted display device, the mobile phone 102 or another handheld smart device need not be used; any other form of controller capable of marking a spatial position may be used instead. In one embodiment, the controller may be based on user speech, marking the spatial position by voice. In one embodiment, the controller may be based on user gestures, marking the spatial position by gesture. In one embodiment, the controller may be based on the user's gaze direction, marking the spatial position by where the user looks. The controller may be connected to the first head mounted display device by wire or wirelessly, or may be integrated with, or be part of, the first head mounted display device.
FIG. 5 illustrates a method for determining a relative pose relationship between at least two users according to one embodiment, the method comprising (steps similar to those of FIG. 4 are not described again):
Step 501: mark, by the handheld smart device, in the field of view of a first user wearing a first head mounted display device, the spatial position of a second user wearing a second head mounted display device.
Step 502: mark, by the handheld smart device, in the field of view of the first user, the spatial position at which the second user aims the sight on the display screen of the second head mounted display device.
Step 503: determine, based on the spatial position of the second user and the spatial position at which the sight is aimed, the relative position and attitude relationship between the first user and the second user.
Step 504: track position changes and/or attitude changes of the first head mounted display device and the second head mounted display device to enable interaction between the first user and the second user.
After the relative position and attitude relationship between the user 101 wearing the head mounted display device 103 and the user 104 wearing the head mounted display device 105 (i.e., between the head mounted display device 103 and the head mounted display device 105) has been determined, each device may measure or track its own position changes and/or attitude changes through its sensors (e.g., acceleration sensors, magnetic sensors, orientation sensors, gravity sensors, gyroscopes, cameras, etc.), using methods known in the art (e.g., inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.). The real-time relative position and/or attitude between the head mounted display device 103 and the head mounted display device 105 can then be determined, enabling interaction between the first user and the second user.
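A sketch of this tracking step, assuming each device reports 4x4 homogeneous poses in its own arbitrary tracking frame; only the pose changes since the marking moment matter, so the two tracking frames never need to be aligned directly.

```python
import numpy as np

def current_relative_pose(T_cal, A0, At, B0, Bt):
    """Real-time pose of the second HMD in the first HMD's current frame.

    T_cal  : 4x4 relative pose determined at marking time (B in A's frame)
    A0, At : first device's self-tracked poses, at marking time and now
    B0, Bt : second device's self-tracked poses, at marking time and now
    """
    # (A now <- A at marking) @ (B in A at marking) @ (B at marking <- B now)
    return np.linalg.inv(At) @ A0 @ T_cal @ np.linalg.inv(B0) @ Bt
```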
It should be noted that, depending on the application scenario or requirements, in some embodiments six-degree-of-freedom (6DoF) tracking of the position and attitude of the first and/or second head mounted display device is not required; only position information, or only attitude information, may be tracked. For example, in one embodiment, considering that many current mid- and low-end head mounted display devices have only 3DoF attitude tracking capability, a head mounted display device may track only its attitude changes and ignore its position changes. Although position changes are then not taken into account, for augmented reality applications in which the user typically remains in place or within a small area (i.e., the user usually only needs to turn the head or body), this has no significant adverse effect on the user experience.
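For the 3DoF-only devices just mentioned, a tracked rotation can simply be embedded in a pose whose translation stays fixed and fed to the same composition; a sketch under the same 4x4 convention as above.

```python
import numpy as np

def pose_3dof(R):
    """Wrap a 3DoF-tracked rotation matrix as a 4x4 pose with the position
    held constant, so position changes are ignored as described above."""
    T = np.eye(4)
    T[:3, :3] = R
    return T  # usable as At / Bt in current_relative_pose()
```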
In one embodiment, the position changes and/or attitude changes of the handheld smart device, the first head mounted display device, and the second head mounted display device may all be tracked to enable interaction among them. For example, in multi-person AR/VR teaching, a teacher may operate on virtual three-dimensional objects in the field of view of his head mounted display device through a handheld smart device, and students may watch the objects and the teacher's operations synchronously and in real time through their own head mounted display devices; in AR/VR doctor-patient communication, a doctor may operate on a three-dimensional tissue model in the field of view of his head mounted display device through a handheld smart device, and the patient can see the model and the doctor's operations through his or her own head mounted display device.
In one embodiment, after a period of use, the relative position and attitude between the first head mounted display device and the second head mounted display device may be re-determined in the manner described above, to eliminate errors that may have accumulated.
Through the above method, multiple users can wear their own head mounted display devices and experience an augmented reality or virtual reality application together. Meanwhile, a user may use the mobile phone to perform the various operations the application requires, such as clicking an operation menu or button, placing, deleting, aiming at, clicking, moving, or rotating a virtual object, drawing a virtual work, or inputting information.
In one embodiment, the spatial position of the second head mounted display device may be marked first, and the spatial position at which its sight is aimed may be marked afterwards. When the aimed-at spatial position is marked, the sight of the second head mounted display device need not still be aimed at that position. For example, at a first moment, while the sight of the second head mounted display device is aimed at a certain spatial position, the spatial position of the second head mounted display device may be marked; the second head mounted display device may then change its position or attitude, and at a second moment the spatial position that was aimed at earlier (i.e., at the first moment) is marked. This approach requires that the first and second head mounted display devices be able to track or record their position and attitude (pose) changes from the first moment to the second moment. Then, from the marked spatial position of the second head mounted display device and the aimed-at spatial position, the relative position and attitude relationship between the first and second head mounted display devices at the first moment can be determined; and from the pose-change information of the two devices between the first and second moments, their relative position and attitude relationship at the second moment can be determined. After the second moment, the position and/or attitude changes of the first and second head mounted display devices may continue to be tracked to enable interaction between them.
In one embodiment, the spatial position at which the sight of the second head mounted display device is aimed may be marked first, and the spatial position of the second head mounted display device may be marked afterwards. Again, when the aimed-at spatial position is marked, the sight need not be aimed at it at that moment. For example, the spatial position to be aimed at may be marked in advance at a first moment, and then, at a second moment when the sight of the second head mounted display device is actually aimed at that spatial position, the spatial position of the second head mounted display device may be marked. From the marked spatial position of the second head mounted display device and the aimed-at spatial position, the relative position and attitude relationship between the first and second head mounted display devices at the second moment can then be determined. After the second moment, the position and/or attitude changes of the two devices may continue to be tracked to enable interaction between them.
In one embodiment, to reduce or eliminate errors in determining the attitude of the head mounted display device 105, gravity direction information detected by a sensor in the head mounted display device 105 may additionally be considered. From the gravity direction information it can be determined whether the head mounted display device 105 is tilted relative to the direction of gravity, and its attitude can be corrected according to the amount of tilt.
FIG. 6 illustrates a method for determining a relative pose relationship between at least two users according to one embodiment, the method comprising (steps similar to those of FIG. 4 are not described again):
Step 601: mark, by the handheld smart device, in the field of view of a first user wearing a first head mounted display device, the spatial position of a second user wearing a second head mounted display device.
Step 602: mark, by the handheld smart device, in the field of view of the first user, the spatial position at which the second user aims the sight on the display screen of the second head mounted display device.
Step 603: obtain gravity direction information detected by a sensor in the second head mounted display device.
The head mounted display device 105 may contain a device capable of detecting the direction of gravity, such as a gravity sensor. From the detected gravity direction information, it can be determined whether, in which direction, and by how much the head mounted display device 105 is currently tilted.
Step 604: determine, based on the spatial position of the second user wearing the second head mounted display device, the spatial position at which the sight is aimed, and the gravity direction information, the relative position and attitude relationship between the first user wearing the first head mounted display device and the second user wearing the second head mounted display device.
With the gravity direction information of the head mounted display device 105 available, the attitude of the user 104 or of the head mounted display device 105 can be determined more accurately from the spatial position of the user 104 or of the head mounted display device 105, the spatial position at which its sight is aimed, and the gravity direction information, even when the user 104 makes an irregular movement (e.g., rotates the head by a large amount about the line-of-sight axis).
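One way to combine the two marked points with the gravity measurement is the classic TRIAD construction: the gaze direction fixes where the device points, and the measured gravity vector fixes the roll about it. A sketch, assuming the device reports gravity in its own body frame and that the first device's coordinate frame is gravity-aligned (y up); the body-frame axis conventions are assumptions.

```python
import numpy as np

def _triad(primary, secondary):
    """Orthonormal basis from a primary direction and a secondary hint."""
    t1 = primary / np.linalg.norm(primary)
    t2 = np.cross(primary, secondary)
    t2 /= np.linalg.norm(t2)
    return np.column_stack((t1, t2, np.cross(t1, t2)))

def attitude_with_gravity(p_device, p_aim, g_body,
                          forward_body=np.array([0.0, 0.0, 1.0]),
                          down_world=np.array([0.0, -1.0, 0.0])):
    """Attitude of the second HMD from the aimed-at point plus the gravity
    vector measured in its body frame, so roll about the line of sight is
    recovered instead of assumed to be zero."""
    forward = p_aim - p_device
    forward /= np.linalg.norm(forward)
    # Align the body frame's (forward, gravity) pair with the same pair
    # observed in the first device's frame; R maps body axes to that frame.
    return _triad(forward, down_world) @ _triad(forward_body, g_body).T
```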
In one embodiment, a common spatial coordinate system may be established: one of the first and second head mounted display devices determines its own position and pose in the common spatial coordinate system, and the position and pose of the other device in that system is then determined from this pose and the relative position and pose between the two devices. The common spatial coordinate system is a coordinate system that the first and second head mounted display devices can use jointly; it may be, for example, a site coordinate system (e.g., a coordinate system established for a room, building, or campus) or the world coordinate system. The position and pose of the first or second head mounted display device in a given spatial coordinate system may be determined using any of various positioning and attitude-determination techniques (e.g., by means of image recognition, a scene three-dimensional point cloud, visual markers, optical communication devices or optical labels, optical signals, wireless signals, satellite signals, etc.).
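Binding to the common coordinate system is then a single transform composition, sketched below with the same 4x4 convention used earlier.

```python
import numpy as np

def pose_in_common_frame(T_common_A, T_A_B):
    """Pose of the second HMD in the common coordinate system, from the
    first HMD's pose in that system and the calibrated relative pose
    between the two devices."""
    return T_common_A @ T_A_B  # composition of 4x4 homogeneous transforms
```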
FIG. 7 illustrates a method for determining a relative pose relationship between at least two users according to one embodiment, the method comprising (steps similar to those of FIG. 4 are not described again):
Step 701: determine the position and pose information of one of the first head mounted display device and the second head mounted display device in a certain spatial coordinate system.
Various positioning and attitude-determination techniques may be used to determine the position and pose of the head mounted display device in a given spatial coordinate system.
In one embodiment, the position and pose of the head mounted display device in the spatial coordinate system may be determined based on a visual marker in space. A visual marker is a marker that can be recognized by an electronic device, and it can take a variety of forms. In some embodiments, the visual marker may be used to convey information that a smart device can obtain. For example, the visual marker may be an optical communication device capable of emitting encoded light information, or it may be a graphic carrying encoded information, such as a two-dimensional code (e.g., a QR code or applet code) or a bar code. The head mounted display device may capture an image containing the visual marker through an on-board image acquisition device; by analyzing the imaging of the marker in that image, it can identify the information conveyed by the marker and determine its own position and attitude relative to the marker, thereby determining its position and pose in the spatial coordinate system.
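As one concrete possibility for the visual-marker option, the sketch below uses the ArUco detector and PnP solver from OpenCV (4.7+ API); the marker dictionary, marker size, and camera intrinsics are illustrative assumptions, not values from this application.

```python
import cv2
import numpy as np

MARKER_SIZE = 0.10  # side length of the printed marker in meters (assumed)
# Marker corners in the marker's own coordinate system (z = 0 plane),
# ordered to match the detector's corner ordering.
OBJ_PTS = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * MARKER_SIZE / 2

def camera_pose_from_marker(image, K, dist):
    """Pose of the HMD camera in the marker's coordinate system, via
    corner detection followed by perspective-n-point."""
    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(image)
    if ids is None:
        return None                    # no marker visible in this image
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners[0][0], K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)         # marker-to-camera rotation
    # Invert to express the camera (HMD) in the marker's frame.
    return R.T, (-R.T @ tvec).ravel()
```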
Step 702: mark, by the handheld smart device, the spatial position of the second head mounted display device in the field of view of the first head mounted display device.
Step 703: mark, by the handheld smart device, in the field of view of the first head mounted display device, the spatial position at which the sight of the second head mounted display device is aimed.
Step 704: determine, based on the spatial position of the second head mounted display device and the spatial position at which the sight is aimed, the relative position and attitude relationship between the first head mounted display device and the second head mounted display device.
Step 705: determine the position and pose information of the other of the first and second head mounted display devices in the spatial coordinate system, based on the position and pose information of the one device determined in step 701 and the relative position and attitude relationship between the two devices.
Step 706: track position changes and/or attitude changes of the first head mounted display device and the second head mounted display device to enable interaction between the first user and the second user.
FIG. 8 illustrates an image of the actual effect of an embodiment of the present application in a doctor-patient communication scenario. In this figure, a first user (a doctor) and a second user (a patient) each wear a head mounted display device, and the two devices are bound to the same coordinate system. The doctor places a three-dimensional model of the patient's skull at a position in space through the head mounted display device and performs various operations on it, such as moving, zooming, rotating, labeling, and pointing, through a handheld smart device (e.g., a cell phone). The patient can observe the skull model at the corresponding position in space through his or her own head mounted display device and watch the doctor's operations in real time.
The above description takes a mobile phone as an example, but it should be understood that the present application is not limited to mobile phones and can be applied to other handheld smart devices, such as smart handles and controllers.
The head mounted display device of the present application may be AR/VR glasses, smart glasses, an AR/VR helmet, or any other glasses or helmet that can be used to present information to a user. The head mounted display device in the present application also includes glasses formed by attaching a component or an insert to ordinary optical glasses, for example glasses formed by attaching a display device to ordinary optical glasses.
In one embodiment of the application, the application may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., a hard disk, an optical disc, or flash memory) and, when executed by a processor, can be used to carry out the method of the application.
In another embodiment of the application, the application may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory, in which a computer program is stored which, when being executed by the processor, can be used to carry out the method of the application.
Reference herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment" means that a particular feature, structure, or property described in connection with that embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment" in various places throughout this document do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or properties may be combined in any suitable manner in one or more embodiments. A particular feature, structure, or property described or illustrated in one embodiment may therefore be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, provided the combination is logical and operable. Expressions such as "according to A," "based on A," "by A," or "using A" are meant to be non-exclusive; that is, "according to A" may cover "according to A only" as well as "according to A and B," unless "according to A only" is specifically stated. In the present application, some exemplary operation steps are described in a certain order for clarity of explanation, but it will be understood by those skilled in the art that not all of these steps are essential; some may be omitted or replaced by other steps. Nor must the steps be performed sequentially in the order shown; some may be performed in a different order, or concurrently, as required, provided the new order of execution is logical and operable.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. While the invention has been described in terms of several embodiments, it is not limited to the embodiments described herein, but encompasses various changes and modifications that may be made without departing from its scope.

Claims (13)

1. A method for determining a relative pose relationship between at least two users, wherein a first user wears a first head mounted display device and has a controller configured for the first head mounted display device, and a second user wears a second head mounted display device capable of presenting a sight on one or more of its display screens, the sight being usable to aim at any object, or part of an object, in space, the method comprising:
marking, by the controller, in the field of view of the first user wearing the first head mounted display device, the spatial position of the second user wearing the second head mounted display device;
marking, by the controller, in the field of view of the first user wearing the first head mounted display device, the spatial position of the object, or part of an object, at which the second user aims the sight on a display screen of the second head mounted display device; and
determining, based on the spatial position of the second user wearing the second head mounted display device and the spatial position of the object, or part of an object, at which the sight is aimed, the position and attitude relationship of the second user wearing the second head mounted display device relative to the first user wearing the first head mounted display device.
2. The method of claim 1, wherein:
marking the spatial position of the second user wearing the second head mounted display device comprises recording position information of that spatial position relative to the first user or to the first head mounted display device worn by the first user; and
marking the spatial position of the object, or part of an object, at which the second user aims the sight on the display screen of the second head mounted display device comprises recording position information of that spatial position relative to the first user or to the first head mounted display device worn by the first user.
3. The method of claim 1, further comprising:
acquiring gravity direction information detected by a sensor in the second head mounted display device;
wherein determining the position and attitude relationship of the second user wearing the second head mounted display device relative to the first user wearing the first head mounted display device comprises: determining the position and attitude relationship of the second user relative to the first user based on the spatial position of the second user wearing the second head mounted display device, the spatial position of the object, or part of an object, at which the sight is aimed, and the gravity direction information.
4. The method of claim 1, further comprising:
tracking position changes and/or attitude changes of the first head mounted display device and the second head mounted display device to enable interaction between the first user and the second user.
5. The method of claim 1, wherein the controller is a handheld smart device, and wherein:
movement of a virtual object in the field of view of the first user is controlled through changes in the spatial position of the handheld smart device, so as to mark, in the field of view of the first user, the spatial position of the second user wearing the second head mounted display device; and/or
movement of the virtual object in the field of view of the first user is controlled through changes in the spatial position of the handheld smart device, so as to mark, in the field of view of the first user, the spatial position of the object, or part of an object, at which the second user aims the sight on the display screen of the second head mounted display device.
6. The method of claim 1, wherein:
the spatial position of the second user wearing the second head mounted display device is marked first, and the spatial position of the object, or part of an object, at which the second user aims the sight on the display screen of the second head mounted display device is marked afterwards; or
the spatial position of the object, or part of an object, at which the second user aims the sight on the display screen of the second head mounted display device is marked first, and the spatial position of the second user wearing the second head mounted display device is marked afterwards.
7. The method of claim 1, wherein the controller comprises:
a handheld smart device;
a controller based on user speech;
a controller based on the user's gaze direction; or
a controller based on user gestures.
8. The method of claim 1, wherein the sight is presented on one display screen of the second head mounted display device, and the marked spatial position of the second user wearing the second head mounted display device is the position of the display screen presenting the sight, or the position of the user's eye corresponding to that display screen.
9. The method of claim 1, wherein the spatial position of the second user wearing the second head mounted display device comprises:
the position of the left or right eye of the second user wearing the second head mounted display device;
the position of the left or right display screen of the second head mounted display device;
the position of the center between the left and right display screens of the second head mounted display device;
the position of the head of the second user wearing the second head mounted display device; or
the center position of the second head mounted display device.
10. The method of claim 1, wherein the object, or part of an object, at which the second user aims the sight on the display screen of the second head mounted display device is determined by:
agreement in advance between the first user and the second user;
the first user informing the second user; or
the second user informing the first user.
11. A storage medium having stored therein a computer program which, when executed by a processor, is operable to carry out the method of any one of claims 1-10.
12. An electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, is operable to carry out the method of any of claims 1-10.
13. An interactive system comprising a controller and a head mounted display device, the controller and head mounted display device being operable to implement the method of any one of claims 1-10.
CN202311021703.8A 2023-08-14 2023-08-14 Method and system for determining a relative pose relationship between at least two users Pending CN117237436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311021703.8A CN117237436A (en) 2023-08-14 2023-08-14 Method and system for determining a relative pose relationship between at least two users


Publications (1)

Publication Number Publication Date
CN117237436A (en) 2023-12-15

Family

ID=89085222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311021703.8A Pending CN117237436A (en) 2023-08-14 2023-08-14 Method and system for determining a relative pose relationship between at least two users

Country Status (1)

Country Link
CN (1) CN117237436A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination