CN117270675A - User virtual image loading method based on mixed reality remote collaborative environment - Google Patents

User virtual image loading method based on mixed reality remote collaborative environment

Info

Publication number
CN117270675A
Authority
CN
China
Prior art keywords
space
virtual
virtual object
heterogeneous
target point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210676274.7A
Other languages
Chinese (zh)
Inventor
沈旭昆
卢亚光
胡勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN202210676274.7A
Publication of CN117270675A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 - Indexing scheme relating to G06F 3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose a user avatar loading method in a mixed reality-based remote collaborative environment. One embodiment of the method comprises the following steps: mapping the user position coordinates of a target user into a virtual space to obtain target point space coordinates; determining the space coordinates of each virtual object to obtain a virtual object space coordinate set; determining a heterogeneous space; mapping each virtual object into the heterogeneous space to obtain the coordinates of each virtual object in the heterogeneous space as a virtual object heterogeneous space coordinate set; generating target point heterogeneous space position information according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space, and the virtual object heterogeneous space coordinate set; and loading an avatar model corresponding to the target user in the heterogeneous space according to the target point heterogeneous space position information. This embodiment improves the convenience of user interaction in the heterogeneous space.

Description

User virtual image loading method based on mixed reality remote collaborative environment
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a user virtual image loading method in a remote collaborative environment based on mixed reality.
Background
Mixed reality technology ties the physical world to the digital world by superimposing computer-generated virtual information onto the real scene. Common mixed reality devices include smartphones, smart glasses, mixed reality head-mounted displays, and the like. A user may accomplish remote collaborative tasks by using these devices to interact with other users in a heterogeneous space (a virtual space in which the user interacts with other users). Currently, users often need to load user avatars (e.g., avatar models) into the heterogeneous space before interacting with other users there. For example, a user's position in the heterogeneous space may be defined based on CollaboVR, and a user avatar loaded accordingly.
However, when loading a user avatar in the above manner, there are often the following technical problems:
First, defining the position of the user in the heterogeneous space based on CollaboVR cannot load more than two user avatars into the heterogeneous space at the same time, which makes user interaction in the heterogeneous space inconvenient.
Second, the orientation of the user avatar loaded in the manner described above changes relative to the orientation of the user itself, resulting in less efficient interactions of the individual users in heterogeneous space.
Third, the user's position change in the real scene is not considered, further resulting in inconvenience of the user's interaction in heterogeneous space.
Disclosure of Invention
This section is intended to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a method, apparatus, electronic device, and computer-readable medium for user avatar loading in a mixed reality-based remote collaborative environment to address one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a user avatar loading method in a mixed reality-based remote collaborative environment, the method comprising: mapping the user position coordinates of the target user into a virtual space to obtain mapped coordinates serving as target point space coordinates; determining virtual object space coordinates of each virtual object included in the virtual space to obtain a virtual object space coordinate set; determining a heterogeneous space corresponding to the virtual space, wherein the heterogeneous space is a virtual space corresponding to each user to be interacted; mapping each virtual object to the heterogeneous space to obtain each coordinate of each virtual object in the heterogeneous space as a virtual object heterogeneous space coordinate set; generating target point heterogeneous space position information according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space and the virtual object heterogeneous space coordinate set, wherein the target point heterogeneous space position information represents the position of the target user in the heterogeneous space; and loading an avatar model corresponding to the target user in the heterogeneous space according to the heterogeneous space position information of the target point.
In a second aspect, some embodiments of the present disclosure provide a user avatar loading device in a mixed reality-based remote collaborative environment, the device comprising: the first mapping unit is configured to map the user position coordinates of the target user into the virtual space, and obtain mapped coordinates as target point space coordinates; a first determining unit configured to determine virtual object space coordinates of each virtual object included in the virtual space, and obtain a virtual object space coordinate set; a second determining unit configured to determine a heterogeneous space corresponding to the virtual space, where the heterogeneous space is a virtual space corresponding to each user to be interacted with; a second mapping unit configured to map the respective virtual objects to the heterogeneous space, and obtain respective coordinates of the respective virtual objects in the heterogeneous space as a virtual object heterogeneous space coordinate set; a generation unit configured to generate target point heterogeneous space position information according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space, and the virtual object heterogeneous space coordinate set, wherein the target point heterogeneous space position information characterizes a position of the target user in the heterogeneous space; and a loading unit configured to load an avatar model corresponding to the target user in the heterogeneous space according to the target point heterogeneous space position information.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: by the method for loading the user virtual image in the mixed reality-based remote collaborative environment, which is disclosed by the embodiment of the invention, the convenience of interaction of the user in the heterogeneous space is improved. Specifically, the inconvenience of user interaction in heterogeneous space is caused by: based on the CollaboVR, the position of the user in the heterogeneous space is defined, and it is impossible to load more than two user avatars into the heterogeneous space simultaneously. Based on this, in some embodiments of the present disclosure, a user position coordinate of a target user is mapped into a virtual space to obtain a mapped coordinate as a target point space coordinate. Thereby, target point space coordinates characterizing the position of the user (target point) in the virtual space can be obtained. Then, virtual object space coordinates of each virtual object included in the virtual space are determined, and a virtual object space coordinate set is obtained. Thereby, virtual object space coordinates characterizing the position of the virtual object in the virtual space can be obtained. Next, a heterogeneous space corresponding to the virtual space is determined. Therefore, a virtual space for each user to be interacted to jointly perform remote collaboration tasks through the mixed reality technology can be obtained. And then mapping each virtual object to the heterogeneous space to obtain each coordinate of each virtual object in the heterogeneous space as a virtual object heterogeneous space coordinate set. Thus, a set of virtual object heterogeneous space coordinates characterizing the position of each virtual object in heterogeneous space can be obtained. 
Then, target point heterogeneous space position information is generated according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space, and the virtual object heterogeneous space coordinate set. Thus, target point heterogeneous space position information representing the position of the target user in the heterogeneous space can be obtained. Finally, the avatar model corresponding to the target user is loaded in the heterogeneous space according to the target point heterogeneous space position information. Thus, the avatar model of the target user can be displayed in the heterogeneous space. Therefore, through the user avatar loading method based on the mixed reality remote collaborative environment in some embodiments of the present disclosure, the user avatar of any target user can be loaded into the heterogeneous space, so that interaction of multiple users in the heterogeneous space can be achieved, and the convenience of user interaction in the heterogeneous space is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a method of user avatar loading in a mixed reality-based remote collaborative environment in accordance with the present disclosure;
fig. 2 is a schematic structural diagram of some embodiments of a user avatar loading device in a mixed reality-based remote collaborative environment according to the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" or "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that these should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a method for user avatar loading in a mixed reality-based remote collaborative environment in accordance with the present disclosure. The method for loading the user virtual image in the remote collaborative environment based on the mixed reality comprises the following steps:
And step 101, mapping the user position coordinates of the target user into a virtual space, and obtaining mapped coordinates as target point space coordinates.
In some embodiments, an execution subject (e.g., a computing device) of a user avatar loading method in a mixed reality remote collaborative environment may map user position coordinates of a target user into a virtual space, resulting in mapped coordinates as target point space coordinates. The target user may be a user performing a remote collaboration task through a Mixed Reality technology (MR). The remote collaboration task may be a task performed by a remote collaboration technique. For example, the remote collaboration task may be a medical surgical task. The remote collaboration task may also be a teleconference. The user position coordinates may be coordinates representing the actual position of the user. The user position coordinates may include a user position abscissa and a user position ordinate. For example, the user position coordinates may be the abscissa and ordinate of the user position in the national geodetic coordinate system. The virtual space may be a virtual space constructed by a mixed reality technique. Here, the virtual space corresponds to the target user. In practice, the execution body may map the user position coordinates of the target user to the virtual space according to a predefined mapping relationship, so as to obtain mapped coordinates as the target point space coordinates. Thereby, target point space coordinates characterizing the position of the user (target point) in the virtual space can be obtained.
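By way of illustration only, the mapping in step 101 might look like the following Python sketch. The affine form and the `scale`/`offset` parameters are assumptions for the example; the disclosure itself only specifies a predefined mapping relationship.

```python
# Illustrative sketch of step 101: mapping real-world user position
# coordinates into the virtual space. The affine form and the scale/offset
# values are assumptions; the disclosure only specifies a predefined
# mapping relationship.

def map_to_virtual_space(user_xy, scale=1.0, offset=(0.0, 0.0)):
    """Map user position coordinates (x, y) to target point space coordinates."""
    x, y = user_xy
    ox, oy = offset
    return (scale * x + ox, scale * y + oy)

# Example: a user at geodetic-style coordinates (116.3, 39.9).
target_point = map_to_virtual_space((116.3, 39.9), scale=2.0, offset=(1.0, -1.0))
```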
The computing device may be hardware or software. When the computing device is hardware, the computing device may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices listed above. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein. It should be appreciated that there may be any number of computing devices as desired for an implementation.
And 102, determining the virtual object space coordinates of each virtual object included in the virtual space to obtain a virtual object space coordinate set.
In some embodiments, the execution body may determine virtual object space coordinates of each virtual object included in the virtual space, to obtain a virtual object space coordinate set. The virtual object may be an object in the virtual space. Here, whether or not the virtual object is a real object is not limited. For example, the virtual object may be a virtual object obtained by mapping an object in a real environment to a virtual space. The virtual object may be a virtual object defined in a virtual space. The virtual object space coordinate may be an abscissa and an ordinate of the virtual object in the virtual space. In practice, the execution body may determine the virtual object space coordinates of each virtual object included in the virtual space by various methods, to obtain a virtual object space coordinate set. As an example, for each virtual object, the execution body may map the position of the virtual object in the real environment to the virtual space, resulting in virtual object space coordinates. For example, the position of the virtual object in the real environment may be the abscissa and ordinate of the virtual object in the national geodetic coordinate system in the real environment. As yet another example, for each virtual object, the coordinates of the virtual object may be defined when the virtual object is defined in the virtual space, resulting in virtual object space coordinates. Thereby, virtual object space coordinates characterizing the position of the virtual object in the virtual space can be obtained.
Step 103, determining a heterogeneous space corresponding to the virtual space.
In some embodiments, the execution body may determine a heterogeneous space corresponding to the virtual space. The heterogeneous space may be a virtual space corresponding to each user to be interacted with. The users to be interacted can be all users which commonly perform remote collaboration tasks through a mixed reality technology. The user to be interacted with may include the target user. In practice, heterogeneous spaces corresponding to the users to be interacted can be determined through mixed reality technology. Therefore, a virtual space for each user to be interacted to jointly perform remote collaboration tasks through the mixed reality technology can be obtained.
Optionally, before performing step 103, for each virtual object included in the virtual space, the following generation operation may be performed:
first, the execution body may generate target point subspace coordinates from virtual object space coordinates of the virtual object and the target point space coordinates. In practice, the target point subspace coordinates may be generated by:
p_k = (x - x_k, y - y_k)
where p_k represents the target point subspace coordinates corresponding to the k-th virtual object, x and y represent the abscissa and ordinate included in the user position coordinates, and x_k and y_k represent the abscissa and ordinate in the virtual object space coordinates of the k-th virtual object.
Then, virtual object potential energy may be determined according to the potential energy constant of the virtual object and the distance between the virtual object space coordinates of the virtual object and the target point space coordinates. In practice, virtual object potential energy may be determined according to the following equation:
E_k = ζ_k / d
where E_k represents the virtual object potential energy of the k-th virtual object, ζ_k represents the gravitational potential energy constant of the k-th virtual object, and d represents the distance between the virtual object space coordinates of the virtual object and the target point space coordinates. The gravitational potential energy constant may be a product of the gravitational constant, the mass of the k-th virtual object, and the weight of the target user. The mass of the k-th virtual object and the weight of the target user may be obtained from an associated terminal through a wired or wireless connection.
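A minimal Python sketch of this generation operation, assuming the Euclidean distance for d and the gravitational-style form E_k = ζ_k / d implied by the surrounding text (the published equation image is not reproduced in this record):

```python
import math

def target_point_subspace(user_xy, obj_xy):
    # p_k = (x - x_k, y - y_k)
    return (user_xy[0] - obj_xy[0], user_xy[1] - obj_xy[1])

def virtual_object_potential(zeta_k, user_xy, obj_xy):
    # Assumed form E_k = zeta_k / d, with d the Euclidean distance between
    # the target point and the k-th virtual object.
    d = math.dist(user_xy, obj_xy)
    return zeta_k / d
```

For instance, with ζ_k = 10.0, a target point at (3, 4), and a virtual object at the origin, the distance is 5 and the resulting potential energy is 2.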
And 104, mapping each virtual object to the heterogeneous space to obtain each coordinate of each virtual object in the heterogeneous space as a virtual object heterogeneous space coordinate set.
In some embodiments, the execution body may map the respective virtual objects to the heterogeneous space, to obtain respective coordinates of the respective virtual objects in the heterogeneous space as a virtual object heterogeneous space coordinate set. In practice, the execution subject may map the respective virtual objects to the heterogeneous space according to the predefined mapping relationship. Thus, a set of virtual object heterogeneous space coordinates characterizing the position of each virtual object in heterogeneous space can be obtained.
Step 105, generating heterogeneous space position information of the target point according to the space coordinates of the target point, the space coordinate set of the virtual object, the number of virtual objects included in the virtual space and the heterogeneous space coordinate set of the virtual object.
In some embodiments, the execution body may generate the target point heterogeneous spatial position information according to the target point spatial coordinates, the virtual object spatial coordinate set, the number of virtual objects included in the virtual space, and the virtual object heterogeneous spatial coordinate set. Wherein the target point heterogeneous space position information characterizes a position of the target user in the heterogeneous space. In practice, the above-described target point heterogeneous spatial position information may be generated in various ways. Thus, target point heterogeneous space position information representing the heterogeneous space position of the target user can be obtained.
In some optional implementations of some embodiments, for each virtual object included in the virtual space described above, the following determination operations may be performed:
first, the virtual object rotation angle may be determined according to the virtual object space coordinates and the virtual object heterogeneous space coordinates of the virtual object. In practice, the virtual object rotation angle can be determined by using an atan2 function according to the virtual object space coordinates and the virtual object heterogeneous space coordinates of the virtual object.
Then, the virtual object rotation inverse matrix may be determined according to the above virtual object rotation angle. First, a virtual object rotation matrix may be determined according to the following equation:
C_k = [cos θ_k, -sin θ_k; sin θ_k, cos θ_k]
where C_k represents the virtual object rotation matrix corresponding to the k-th virtual object, and θ_k represents the virtual object rotation angle corresponding to the k-th virtual object.
And a second step, determining the inverse matrix of the virtual object rotation matrix as the virtual object rotation inverse matrix. Thus, data support can be provided for generating target point heterogeneous spatial location information.
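These two steps can be sketched as follows; the standard 2-D rotation matrix is assumed, and the inverse of a rotation matrix is simply a rotation by the opposite angle:

```python
import math

def rotation_matrix(theta):
    # Standard 2-D rotation matrix C_k for angle theta_k.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def rotation_inverse(theta):
    # For a rotation matrix, the inverse equals the transpose,
    # i.e. a rotation by -theta.
    return rotation_matrix(-theta)

# The rotation angle itself can be obtained with atan2, as described above,
# from the virtual object space and heterogeneous space coordinates.
theta_k = math.atan2(1.0, 0.0) - math.atan2(0.0, 1.0)  # 90 degrees
```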
In some optional implementations of some embodiments, the executing body may generate the target point heterogeneous spatial position information according to the number of virtual objects included in the virtual space, the generated potential energy of each virtual object, the generated subspace coordinates of each target point, the determined rotational inverse matrix of each virtual object, and the virtual object heterogeneous space coordinate set. In practice, first, the execution subject may generate the target point heterogeneous spatial coordinates by:
p' = Σ_{k=1..m} (E_k / E_p) (o'_k + C_k^{-1} p_k)
where p' represents the target point heterogeneous spatial coordinates, m represents the number of virtual objects in the virtual space, E_k represents the virtual object potential energy of the k-th virtual object, E_p represents the total potential energy of the target point space, p_k represents the target point subspace coordinates corresponding to the k-th virtual object, o'_k represents the virtual object heterogeneous space coordinates of the k-th virtual object, C_k represents the virtual object rotation matrix of the k-th virtual object, and C_k^{-1} represents its inverse, i.e., the virtual object rotation inverse matrix. E_p may be obtained by summing the virtual object potential energies of the respective virtual objects included in the virtual space.
Then, the above-described target point heterogeneous spatial coordinates may be determined as the target point heterogeneous spatial position information. Thus, target point heterogeneous space position information characterizing the position of the target user in the heterogeneous space can be obtained.
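A sketch of one plausible reading of this computation, assuming a potential-energy-weighted combination over the m virtual objects (the exact equation appears only as an image in the published record, so the weighting form here is an assumption):

```python
def heterogeneous_coords(potentials, subspace_pts, hetero_obj_pts, inv_rots):
    # Assumed form: p' = sum_k (E_k / E_p) * (o'_k + C_k^{-1} p_k),
    # a potential-energy-weighted combination over the m virtual objects.
    e_p = sum(potentials)  # total potential energy E_p
    px = py = 0.0
    for e_k, p_k, o_k, r in zip(potentials, subspace_pts, hetero_obj_pts, inv_rots):
        # Apply the 2x2 rotation inverse matrix to the subspace coordinates.
        rx = r[0][0] * p_k[0] + r[0][1] * p_k[1]
        ry = r[1][0] * p_k[0] + r[1][1] * p_k[1]
        w = e_k / e_p
        px += w * (o_k[0] + rx)
        py += w * (o_k[1] + ry)
    return (px, py)
```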
Alternatively, first, for each virtual object, the execution body may acquire the target point direction vector from the virtual object. The target point direction vector may represent a direction of the target user relative to the virtual object. For example, the target point direction vector may characterize the target user's direction relative to the virtual object as 45 degrees southeast. In practice, for each virtual object, the execution body may acquire, by means of wired connection or wireless connection, a direction of the target user with respect to the virtual object from the acceleration sensor, and convert the direction into a direction vector, to obtain a target point direction vector. For example, the direction may be encoded as a vector by means of one-hot encoding, and the target point direction vector may be obtained. Then, a target point orientation vector may be generated according to the number of virtual objects included in the virtual space, the total potential energy of the target point space, the generated potential energy of each virtual object, and the acquired direction vector of each target point. In practice, the target point orientation vector may be generated by:
v' = Σ_{k=1..m} (E_k / E_p) v_k
where v' represents the target point orientation vector and v_k represents the target point direction vector corresponding to the k-th virtual object.
Finally, the target point heterogeneous space position information may be generated according to the number of virtual objects included in the virtual space, the generated potential energy of each virtual object, the total potential energy of the target point space, the generated subspace coordinates of each target point, the heterogeneous space coordinate set of the virtual objects, the determined rotational inverse matrix of each virtual object, and the target point orientation vector. In practice, in the first step, the heterogeneous space coordinates of the target point may be generated according to the number of virtual objects included in the virtual space, the generated potential energy of each virtual object, the total potential energy of the space of the target point, the generated subspace coordinates of each target point, the heterogeneous space coordinate set of the virtual object, and the determined rotation inverse matrix of each virtual object. In the second step, the target point heterogeneous spatial coordinates and the target point orientation vector may be determined as target point heterogeneous spatial position information. Thus, target point heterogeneous spatial position information characterizing the position and orientation of the target user in heterogeneous space can be obtained.
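Continuing the sketch, the orientation vector can be formed as a potential-energy-weighted average over the per-object direction vectors (again an assumed form, chosen to be consistent with the inputs listed above):

```python
def orientation_vector(potentials, direction_vectors):
    # Assumed form: v' = sum_k (E_k / E_p) * v_k, a potential-energy-weighted
    # average of the per-object target point direction vectors.
    e_p = sum(potentials)
    vx = sum(e / e_p * v[0] for e, v in zip(potentials, direction_vectors))
    vy = sum(e / e_p * v[1] for e, v in zip(potentials, direction_vectors))
    return (vx, vy)
```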
The foregoing is an invention point of the embodiments of the present disclosure, and solves the second technical problem mentioned in the background: "the orientation of the user avatar loaded in the manner described above changes relative to the orientation of the user itself, resulting in lower efficiency of interaction between users in heterogeneous space." If this factor is solved or alleviated, the effect of improving the efficiency of user interaction in the heterogeneous space can be achieved. To achieve this effect, the present disclosure generates target point heterogeneous space coordinates and a target point orientation vector, and determines them together as the target point heterogeneous space position information. The target point orientation vector is generated from the acquired target point direction vectors and the virtual object potential energies, so that every virtual object is taken into account. Therefore, when the avatar model of the target user is loaded in the heterogeneous space according to the target point heterogeneous space position information, the orientation of the avatar model can be made to match the orientation characterized by the target point orientation vector.
Therefore, because the orientation of each user's avatar (virtual image model) is determined under the constraint of the virtual objects, the offset between the orientation of each user's avatar and the orientation of the user itself is substantially the same for all users. The orientation relationship between users during interaction is thereby preserved as much as possible, which improves the interaction efficiency of the users in heterogeneous space.
And step 106, loading the virtual image model corresponding to the target user in the heterogeneous space according to the heterogeneous space position information of the target point.
In some embodiments, the execution body may load an avatar model corresponding to the target user in the heterogeneous space according to the target point heterogeneous space position information. Wherein, the avatar model may be a model representing an avatar of the target user. For example, the avatar model may be a two-dimensional character image. In practice, the execution body may place the avatar model corresponding to the target user at the target point heterogeneous space coordinates characterized by the target point heterogeneous space position information in the heterogeneous space. Thus, display of the avatar model of the target user in the heterogeneous space can be realized.
Alternatively, first, the execution body may determine the distance between the target point space coordinates and each virtual object space coordinate to obtain a distance set. Then, in response to there being a distance in the distance set that is smaller than a preset distance threshold, too-close distance prompt information may be played. The preset distance threshold may be a distance threshold set in advance. The too-close distance prompt information may be information for prompting the user that the distance to a virtual object is too small. For example, the too-close distance prompt information may be: "Please note that you are too close to the object!". Thus, the user can be prompted when too close to a virtual object, so that the user moves slightly away from it.
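The optional proximity check described above is straightforward to sketch. In this hypothetical Python fragment, the threshold value and the function names are illustrative, not taken from the patent:

```python
import math

DISTANCE_THRESHOLD = 0.5  # preset distance threshold (assumed value, in metres)

def too_close(target_point, object_coords, threshold=DISTANCE_THRESHOLD):
    """Return True if any virtual object is closer than the threshold.

    target_point: target point space coordinates, e.g. (x, y)
    object_coords: iterable of virtual object space coordinates
    """
    distances = [math.dist(target_point, p) for p in object_coords]  # distance set
    return any(d < threshold for d in distances)

# A caller would play the prompt when the check fires, e.g.:
# if too_close(user_pos, objects):
#     play_prompt("Please note that you are too close to the object!")
```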
Alternatively, the execution body may generate the target point global coordinates according to the number of virtual objects included in the virtual space, the generated potential energy of each virtual object, and the generated subspace coordinates of each target point. In practice, the target point global coordinates may be generated in various manners from these quantities. In this way, target point global coordinates, which characterize the position of the target user in the virtual space as constrained by the respective virtual objects, can be obtained.
Alternatively, first, the execution body may generate the total potential energy of the target point space according to the number of virtual objects included in the virtual space and the generated potential energy of each virtual object. In practice, the total potential energy E_p of the target point space may be generated by the following formula:
Then, the global coordinates of the target point may be generated according to the number of virtual objects included in the virtual space, the total potential energy of the target point space, the generated potential energy of each virtual object, and the generated subspace coordinates of each target point. In practice, the target point global coordinate p may be generated by:
Thereby, target point global coordinates, which characterize the position of the target user in the virtual space as constrained by the respective virtual objects, can be further obtained.
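Although the formulas themselves are omitted from this text, claims 6-7 indicate that the target point global coordinates are derived from the per-object potential energies and target point subspace coordinates. One natural reading, sketched here purely as an assumption, is a potential-energy-weighted average of the target point subspace coordinates:

```python
def target_point_global_coords(potentials, subspace_coords):
    """Assumed form: p = sum_i (E_i / E_p) * q_i, where E_p = sum_i E_i.

    potentials: potential energy E_i of each virtual object
    subspace_coords: target point subspace coordinate q_i for each object
    (the number of virtual objects enters implicitly through the sums)
    """
    e_total = sum(potentials)  # total potential energy of the target point space
    dims = len(subspace_coords[0])
    return [sum((e / e_total) * q[k] for e, q in zip(potentials, subspace_coords))
            for k in range(dims)]
```

For example, with potentials 1 and 3 and subspace coordinates (0, 0) and (4, 4), the weighted average yields (3, 3), pulled toward the object with higher potential energy.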
Alternatively, first, the execution body may acquire user position change information in response to an update of the user position coordinates of the target user. Wherein, the user position change information may include at least one user position change coordinate, and may be information indicating that the position of the user has changed. The user position change coordinates may be the coordinates traversed as the user's position changes. In practice, in response to the update of the user position coordinates of the target user, the user's position may be located by a positioning device at intervals of a preset duration, and at least one user position change coordinate may be obtained as the user position change information. The positioning device may be any device capable of locating the user's position. For example, the positioning device may be a GPS locator. Second, target point heterogeneous position change information may be generated from the user position change information. Wherein, the target point heterogeneous position change information may include at least one target point heterogeneous position change coordinate. In practice, each user position change coordinate in the user position change information may be mapped to the virtual space to obtain at least one target point heterogeneous position change coordinate as the target point heterogeneous position change information. Finally, the avatar model may be controlled to move according to the target point heterogeneous position change coordinates included in the target point heterogeneous position change information. In practice, the avatar model may be controlled to traverse each target point heterogeneous position change coordinate in turn, so as to implement the movement of the avatar model.
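The position-tracking flow above (sample the user's position, map each sampled coordinate into the heterogeneous space, then drive the avatar through the mapped waypoints) can be sketched as follows. The mapping is shown as a hypothetical uniform scale plus translation; a real system would use its own calibrated real-to-virtual transform, and an engine would animate the move rather than teleporting between waypoints:

```python
def map_to_heterogeneous(coord, scale=1.0, offset=(0.0, 0.0)):
    """Hypothetical real-to-heterogeneous mapping (uniform scale + translation)."""
    return (coord[0] * scale + offset[0], coord[1] * scale + offset[1])

class Avatar:
    """Minimal stand-in for the avatar model loaded in the heterogeneous space."""
    def __init__(self, position):
        self.position = position

    def move_through(self, waypoints):
        # Traverse each heterogeneous position change coordinate in turn;
        # an engine would interpolate/animate between them.
        for p in waypoints:
            self.position = p

def update_avatar(avatar, user_change_coords):
    """Map user position change coordinates and move the avatar through them."""
    waypoints = [map_to_heterogeneous(c) for c in user_change_coords]
    avatar.move_through(waypoints)
    return waypoints
```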
The foregoing serves as an invention point of the embodiments of the present disclosure, and solves the third technical problem mentioned in the background art, namely that the position change of the user in the real scene is not considered, which makes user interaction in heterogeneous space inconvenient. The factor causing this inconvenience is as follows: the position change of the user in the real scene is not considered. If this factor is solved, the effect of improving the convenience of user interaction in heterogeneous space can be achieved. To achieve this effect, the present disclosure controls the avatar model to move according to the target point heterogeneous position change information. Therefore, when the position of the user changes, the avatar model in the virtual space moves synchronously, which improves the convenience of interaction of the user in the heterogeneous space.
The above embodiments of the present disclosure have the following advantageous effects: by the user avatar loading method in the mixed reality-based remote collaborative environment of some embodiments of the present disclosure, the convenience of user interaction in heterogeneous space is improved. Specifically, the inconvenience of user interaction in heterogeneous space arises because, in CollaboVR-based approaches, the position of the user in the heterogeneous space is fixed, and it is impossible to load more than two user avatars into the heterogeneous space simultaneously. Based on this, in some embodiments of the present disclosure, the user position coordinates of a target user are first mapped into a virtual space to obtain mapped coordinates as target point space coordinates. Thereby, target point space coordinates characterizing the position of the user (the target point) in the virtual space can be obtained. Then, the virtual object space coordinates of each virtual object included in the virtual space are determined to obtain a virtual object space coordinate set. Thereby, virtual object space coordinates characterizing the position of each virtual object in the virtual space can be obtained. Next, a heterogeneous space corresponding to the virtual space is determined. Thereby, a virtual space in which the users to be interacted jointly perform remote collaboration tasks through mixed reality technology can be obtained. Then, each virtual object is mapped to the heterogeneous space to obtain the coordinates of each virtual object in the heterogeneous space as a virtual object heterogeneous space coordinate set. Thus, a virtual object heterogeneous space coordinate set characterizing the position of each virtual object in the heterogeneous space can be obtained.
Next, target point heterogeneous space position information is generated according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space, and the virtual object heterogeneous space coordinate set. Thus, target point heterogeneous space position information representing the position of the target user in the heterogeneous space can be obtained. Finally, the avatar model corresponding to the target user is loaded in the heterogeneous space according to the target point heterogeneous space position information. Thus, display of the avatar model of the target user in the heterogeneous space can be realized. Therefore, through the user avatar loading method based on the mixed reality remote collaborative environment in some embodiments of the present disclosure, the user avatar of any target user can be loaded into the heterogeneous space, so that interaction of multiple users in the heterogeneous space can be achieved, improving the convenience of user interaction in the heterogeneous space.
With continued reference to fig. 2, as an implementation of the method illustrated in the above figures, the present disclosure provides some embodiments of a user avatar loading device in a mixed reality-based remote collaborative environment. These device embodiments correspond to the method embodiments illustrated in fig. 1, and the device may be applied in a variety of electronic devices.
As shown in fig. 2, the user avatar loading device 200 in the mixed reality-based remote collaborative environment of some embodiments includes: a first mapping unit 201, a first determination unit 202, a second determination unit 203, a second mapping unit 204, a generation unit 205, and a loading unit 206. Wherein the first mapping unit 201 is configured to map the user position coordinates of the target user into the virtual space, and obtain the mapped coordinates as target point space coordinates; the first determining unit 202 is configured to determine virtual object space coordinates of each virtual object included in the above virtual space, resulting in a set of virtual object space coordinates; the second determining unit 203 is configured to determine a heterogeneous space corresponding to the virtual space, where the heterogeneous space is a virtual space corresponding to each user to be interacted with; the second mapping unit 204 is configured to map the respective virtual objects to the heterogeneous space, so as to obtain respective coordinates of the respective virtual objects in the heterogeneous space as a virtual object heterogeneous space coordinate set; the generating unit 205 is configured to generate target point heterogeneous space position information according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space, and the virtual object heterogeneous space coordinate set, wherein the target point heterogeneous space position information characterizes a position of the target user in the heterogeneous space; the loading unit 206 is configured to load an avatar model corresponding to the target user in the heterogeneous space according to the target point heterogeneous space position information.
It will be appreciated that the elements described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting benefits described above for the method are equally applicable to the apparatus 200 and the units contained therein, and are not described in detail herein.
Referring now to FIG. 3, a schematic diagram of an electronic device (e.g., computing device) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: mapping the user position coordinates of the target user into a virtual space to obtain mapped coordinates serving as target point space coordinates; for the virtual object space coordinates of each virtual object included in the virtual space, determining a virtual object space coordinate with the shortest distance from the target point space coordinate in the virtual object space coordinates as a target virtual object space coordinate, wherein the target virtual object space coordinate represents the position of the target virtual object in the virtual space; determining a heterogeneous space corresponding to the virtual space, wherein the heterogeneous space is a virtual space corresponding to each user to be interacted; mapping the target virtual object to the heterogeneous space to obtain the coordinate of the target virtual object in the heterogeneous space as the target virtual object heterogeneous space coordinate; generating target point heterogeneous space position information according to the target point space coordinates, the target virtual object space coordinates, the number of virtual objects included in the virtual space and the target virtual object heterogeneous space coordinates, wherein the target point heterogeneous space position information represents the position of the target user in the heterogeneous space; and loading an avatar model corresponding to the target user in the heterogeneous space according to the heterogeneous space position information of the target point.
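This alternative embodiment replaces the full virtual object set with the single virtual object whose space coordinate is closest to the target point. The selection step can be sketched as follows (the function name is illustrative, not from the patent):

```python
import math

def nearest_object_coord(target_point, object_coords):
    """Return the virtual object space coordinate with the shortest distance
    to the target point space coordinate (the target virtual object)."""
    return min(object_coords, key=lambda p: math.dist(target_point, p))
```

The selected coordinate is then mapped to the heterogeneous space and used, together with the target point space coordinates and the number of virtual objects, to generate the target point heterogeneous space position information as described above.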
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a first mapping unit, a first determination unit, a second determination unit, a second mapping unit, a generation unit, and a loading unit. The names of these units do not constitute a limitation on the units themselves in some cases; for example, the first mapping unit may also be described as "a unit that maps the user position coordinates of the target user into the virtual space, resulting in the mapped coordinates as target point space coordinates".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with (but not limited to) the features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method for loading a user virtual image based on a mixed reality remote collaborative environment comprises the following steps:
mapping the user position coordinates of the target user into a virtual space to obtain mapped coordinates serving as target point space coordinates;
determining virtual object space coordinates of each virtual object included in the virtual space to obtain a virtual object space coordinate set;
determining a heterogeneous space corresponding to the virtual space, wherein the heterogeneous space is a virtual space corresponding to each user to be interacted;
mapping each virtual object to the heterogeneous space to obtain each coordinate of each virtual object in the heterogeneous space as a virtual object heterogeneous space coordinate set;
generating target point heterogeneous space position information according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space and the virtual object heterogeneous space coordinate set, wherein the target point heterogeneous space position information represents the position of the target user in the heterogeneous space;
and loading an avatar model corresponding to the target user in the heterogeneous space according to the heterogeneous space position information of the target point.
2. The method of claim 1, wherein the method further comprises:
determining the distance between the space coordinate of the target point and each virtual object space coordinate in the space coordinates of each virtual object to obtain a distance set;
and playing too-close distance prompt information in response to there being a distance in the distance set that is smaller than a preset distance threshold.
3. The method of claim 1, wherein prior to the determining the heterogeneous space to which the virtual space corresponds, the method further comprises:
for each virtual object included within the virtual space, performing the generating operation of:
generating target point subspace coordinates according to the virtual object space coordinates of the virtual object and the target point space coordinates;
and generating virtual object potential energy according to the potential energy constant of the virtual object and the distance between the virtual object space coordinates of the virtual object and the target point space coordinates.
4. The method of claim 3, wherein the generating target point heterogeneous spatial location information comprises:
for each virtual object included within the virtual space, performing the following determination operations:
determining a virtual object rotation angle according to the virtual object space coordinates of the virtual object and the virtual object heterogeneous space coordinates;
And determining a virtual object rotation inverse matrix according to the virtual object rotation angle.
5. The method of claim 4, wherein the generating target point heterogeneous spatial location information further comprises:
and generating target point heterogeneous space position information according to the number of virtual objects included in the virtual space, the generated potential energy of each virtual object, the generated subspace coordinates of each target point, the determined rotation inverse matrix of each virtual object and the virtual object heterogeneous space coordinate set.
6. The method of claim 5, wherein the method further comprises:
and generating global coordinates of the target point according to the number of virtual objects included in the virtual space, the generated potential energy of each virtual object and the generated subspace coordinates of each target point.
7. The method of claim 6, wherein the generating the target point global coordinates from the number of virtual objects included in the virtual space, the generated respective virtual object potential energies, and the generated respective target point subspace coordinates comprises:
generating total potential energy of a target point space according to the number of virtual objects included in the virtual space and the generated potential energy of each virtual object;
And generating target point global coordinates according to the number of virtual objects included in the virtual space, the total potential energy of the target point space, the generated potential energy of each virtual object and the generated subspace coordinates of each target point.
8. A user avatar loading device in a mixed reality-based remote collaborative environment, comprising:
the first mapping unit is configured to map the user position coordinates of the target user into the virtual space, and obtain mapped coordinates as target point space coordinates;
a first determining unit configured to determine virtual object space coordinates of each virtual object included in the virtual space, resulting in a virtual object space coordinate set;
the second determining unit is configured to determine a heterogeneous space corresponding to the virtual space, wherein the heterogeneous space is a virtual space corresponding to each user to be interacted;
a second mapping unit configured to map the respective virtual objects to the heterogeneous space, and obtain respective coordinates of the respective virtual objects in the heterogeneous space as a virtual object heterogeneous space coordinate set;
a generation unit configured to generate target point heterogeneous space position information according to the target point space coordinates, the virtual object space coordinate set, the number of virtual objects included in the virtual space, and the virtual object heterogeneous space coordinate set, wherein the target point heterogeneous space position information characterizes a position of the target user in the heterogeneous space;
And the loading unit is configured to load the avatar model corresponding to the target user in the heterogeneous space according to the heterogeneous space position information of the target point.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-7.
CN202210676274.7A 2022-06-15 2022-06-15 User virtual image loading method based on mixed reality remote collaborative environment Pending CN117270675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210676274.7A CN117270675A (en) 2022-06-15 2022-06-15 User virtual image loading method based on mixed reality remote collaborative environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210676274.7A CN117270675A (en) 2022-06-15 2022-06-15 User virtual image loading method based on mixed reality remote collaborative environment

Publications (1)

Publication Number Publication Date
CN117270675A true CN117270675A (en) 2023-12-22

Family

ID=89199600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210676274.7A Pending CN117270675A (en) 2022-06-15 2022-06-15 User virtual image loading method based on mixed reality remote collaborative environment

Country Status (1)

Country Link
CN (1) CN117270675A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination