CN115317907A - Multi-user virtual interaction method and device in AR application and AR equipment - Google Patents
- Publication number
- CN115317907A CN115317907A CN202210927365.3A CN202210927365A CN115317907A CN 115317907 A CN115317907 A CN 115317907A CN 202210927365 A CN202210927365 A CN 202210927365A CN 115317907 A CN115317907 A CN 115317907A
- Authority
- CN
- China
- Prior art keywords: virtual, user, application scene, users, virtual object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiments of this application disclose a multi-user virtual interaction method and device in an Augmented Reality (AR) application, and an AR device. In these embodiments, the 3D virtual parts corresponding to different users are determined from the structured information of the users' designated parts at the same moment; each user's operation behavior in the AR application scene is identified through that user's 3D virtual part; the variation of the 3D virtual object caused by the users' operation behaviors in the AR application scene is determined and returned to the AR devices of the different users; and each AR device draws and displays the 3D virtual object according to the received variation. As a result, interactions such as editing of the same 3D virtual object in the AR application scene, which previously had to happen in sequence, can be performed by multiple users at the same time.
Description
Technical Field
The present application relates to the field of Augmented Reality technologies, and in particular, to a method and an apparatus for multi-user virtual interaction in Augmented Reality (AR) application, and an AR device.
Background
Augmented reality (AR) is a technology that integrates real-world information with virtual-world content. Building on computing and related technologies, AR simulates entity information that would otherwise be difficult to experience within the spatial range of the real world, superimposes the virtual content onto the real world, and makes it perceptible to the human senses, thereby producing a sensory experience beyond reality. Once the real environment and the virtual objects are superimposed, they coexist in the same picture and the same space at the same time.
However, in current AR application scenes, interactions such as editing performed by multiple users on the same 3D virtual object must happen in sequence: the users can only interact one after another and cannot operate the same 3D virtual object at the same time.
Disclosure of Invention
The embodiment of the application discloses a multi-user virtual interaction method and device in Augmented Reality (AR) application and AR equipment, so that multiple users can interact with the same 3D virtual object simultaneously in an AR application scene.
The embodiment of the application provides a multi-user virtual interaction method in Augmented Reality (AR) application, which comprises the following steps:
acquiring structural information of specified parts of different users at the same time in the same AR application scene;
driving a 3D virtual part template corresponding to the specified part to deform according to the acquired structural information of the specified part of each user to obtain a 3D virtual part corresponding to the user; the 3D virtual part is used for matching the current posture of the designated part of the user;
identifying the operation behaviors of the users in the AR application scene according to the 3D virtual parts corresponding to the users, and determining the variable quantity of the 3D virtual object which is changed based on the operation behaviors of the users in the AR application scene according to the operation behaviors of the users in the AR application scene;
and returning the variable quantity to AR equipment corresponding to different users, and drawing and displaying the 3D virtual object by the AR equipment according to the received variable quantity.
The embodiments of this application provide a multi-user virtual interaction apparatus in an Augmented Reality (AR) application, applied to a cloud server. The apparatus comprises:
the obtaining unit is used for obtaining the structural information of the specified parts of different users at the same time under the same AR application scene;
the processing unit is used for driving the 3D virtual part template corresponding to the specified part to deform according to the acquired structural information of the specified part of each user to obtain the 3D virtual part corresponding to the user; the 3D virtual part is used for matching the current posture of the designated part of the user;
the determining unit is used for identifying the operation behaviors of the users in the AR application scene according to the 3D virtual parts corresponding to the users, and determining the variable quantity of the 3D virtual object which changes based on the operation behaviors of the users in the AR application scene according to the operation behaviors of the users in the AR application scene;
and the sending unit is used for returning the variable quantity to AR equipment corresponding to different users, and the AR equipment draws and displays the 3D virtual object according to the received variable quantity.
An embodiment of the present application provides an AR device, which includes: an AR scene generator, an interaction device, a processor, and a display;
the AR scene generator is used for generating an AR application scene;
the interaction device is used for interacting with an external device to obtain the structural information of the designated parts of different users at the same time in the same AR application scene;
the processor is used for driving the 3D virtual part template corresponding to the designated part to deform according to the acquired structural information of the designated part of each user, so as to obtain the 3D virtual part corresponding to the user; the 3D virtual part is used for matching the current posture of the designated part of the user; and (c) a second step of,
identifying the operation behaviors of the users in the AR application scene according to the 3D virtual parts corresponding to the users, and determining the variable quantity of the 3D virtual object which changes based on the operation behaviors of the users in the AR application scene according to the operation behaviors of the users in the AR application scene; and returning the variable quantity to AR equipment corresponding to different users;
and the display is used for drawing and displaying the 3D virtual object according to the received variable quantity.
According to the above technical solution, the cloud server determines the 3D virtual part corresponding to each user from the structured information of the different users' designated parts at the same moment, identifies each user's operation behavior in the AR application scene through that user's 3D virtual part, determines the variation of the 3D virtual object caused by those operation behaviors, and returns the variation to the AR devices of the different users; each AR device then draws and displays the 3D virtual object according to the received variation. In this way, interactions such as editing of the same 3D virtual object in the AR application scene, which previously had to happen in sequence, can be performed by multiple users at the same time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with this specification and, together with the description, serve to explain the principles of the specification.
FIG. 1 is a flow chart of a method provided by an embodiment of the present application;
fig. 2 is a schematic diagram of interaction between an AR device and a cloud server according to an embodiment of the present application;
fig. 3a to 3c are schematic diagrams of a 3D virtual hand provided in the present application;
FIG. 4 is a schematic diagram of an operation gesture provided in an embodiment of the present application;
FIG. 5 is a block diagram of an apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To let multiple users perform virtual interaction with the same 3D virtual object in an AR application scene at the same time, this embodiment departs from the existing approach in which the terminal (an AR device, for example) identifies the user's designated part in the AR application scene: the identification of the designated part and related operations are moved to a cloud server and completed through cooperation between the cloud server and the AR devices.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a method provided in an embodiment of the present application. As an embodiment, the method is applied to a cloud server or an AR device having a function similar to a server, and the embodiment is not particularly limited. As shown in fig. 1, the process may include the following steps:
As one embodiment, the structured information of the designated part may be coordinate information of the joint points on the left hand and/or the right hand. Optionally, the coordinate information here is 3D coordinate information or 2D coordinate information.
As another embodiment, the structured information of the designated part may instead be the length, thickness, and joint angle of each knuckle on the left hand and/or the right hand; this embodiment is not particularly limited in this respect.
In this embodiment, the structured information of the designated parts at the same moment in the same AR application scene is reported by the AR devices corresponding to the different users, as described by example below. Taking AR glasses as the AR device, Fig. 2 illustrates different AR glasses reporting to a cloud server (labeled cloud server 20 in Fig. 2) the structured information of the designated parts of different users at the same moment in the same AR application scene.
In the present embodiment, the 3D virtual location template is constructed in advance, and the present embodiment does not specify how to construct the 3D virtual location template.
In this embodiment, when the structured information of the designated part is 2D coordinate information of the joint points on the left and/or right hand, the 2D coordinate information may first be converted into 3D coordinate information before step 102 is executed. How the 2D coordinates are converted into 3D coordinates is similar to existing coordinate conversion and is not particularly limited in this embodiment.
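Where depth is available (for example from a depth camera), one common way to perform such a 2D-to-3D conversion is pinhole back-projection. The sketch below is an illustrative assumption, since the patent leaves the conversion method open; the helper name and the intrinsic parameters `fx, fy, cx, cy` are hypothetical:

```python
import numpy as np

def backproject_2d_to_3d(uv, depth, fx, fy, cx, cy):
    """Back-project a 2D pixel joint coordinate plus depth into 3D
    camera-space coordinates using a pinhole camera model (one possible
    conversion; the patent does not prescribe one)."""
    u, v = uv
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example: a joint at pixel (320, 240), 0.5 m deep, with the principal
# point at (320, 240) maps to (0, 0, 0.5) in camera space.
p = backproject_2d_to_3d((320, 240), 0.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```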
For example, based on step 103, variations of the 3D basketball such as its speed, acceleration, movement direction, and position can be calculated in real time from the collisions between the virtual 3D basketball and the users' 3D virtual parts. In the AR game scene, users striking the basketball at different speeds and at different contact positions then trigger different motion states of the ball, keeping the interactive experience close to a real one.
As further examples, several people may perform a task together or jointly build something such as a ceramic product, or play a multi-person shooting game; these multi-person activity scenes are handled similarly to the basketball AR game scene above, and the specific application scene is not particularly limited in this embodiment.
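As a rough illustration of how such a variation could arise from a collision, the toy sketch below reflects the ball's velocity off a contact normal with a restitution coefficient. It is a simplified stand-in for what a full physics engine would compute (mass, friction, and spin are ignored), and every name and constant is an assumption:

```python
import numpy as np

def ball_variation_after_hit(ball_vel, hand_vel, contact_normal, restitution=0.8):
    """Toy collision response: reflect the relative velocity along the
    contact normal, damped by a restitution coefficient."""
    n = contact_normal / np.linalg.norm(contact_normal)
    rel = ball_vel - hand_vel
    vn = np.dot(rel, n)
    if vn >= 0:  # already separating: no impulse
        return ball_vel
    # Remove the approaching normal component, add the reflected one.
    new_rel = rel - (1.0 + restitution) * vn * n
    return hand_vel + new_rel

# A falling ball struck by an upward-moving hand bounces upward.
v = ball_variation_after_hit(np.array([0.0, -2.0, 0.0]),   # ball velocity
                             np.array([0.0,  1.0, 0.0]),   # hand velocity
                             np.array([0.0,  1.0, 0.0]))   # contact normal
```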
And step 104, returning the variable quantity to AR equipment corresponding to different users, and drawing and displaying the 3D virtual object by the AR equipment according to the received variable quantity.
When the AR device receives the variation, it redraws the 3D virtual object based on the variation and displays it. Still taking the AR game scene in which different users play basketball as an example: if the variation is an updated basketball position, the AR device draws the 3D basketball at that updated position.
Thus, the flow shown in fig. 1 is completed.
As can be seen from the process shown in Fig. 1, in this embodiment the cloud server determines the 3D virtual part corresponding to each user from the structured information of the different users' designated parts at the same moment, identifies each user's operation behavior in the AR application scene through that user's 3D virtual part, determines the variation of the 3D virtual object caused by those operation behaviors, and returns the variation to the AR devices of the different users; each AR device then draws and displays the 3D virtual object according to the received variation. In this way, interactions such as editing of the same 3D virtual object in the AR application scene, which previously had to happen in sequence, can be performed by multiple users at the same time.
The method provided by the embodiment of the present application is described below by taking the specified part structured information as the hand joint point information as an example:
for the AR device side:
the AR equipment executes the following operations to realize that different AR equipment sends hand joint point information of different users in the same AR application scene at the same time to the cloud server:
image acquisition:
The collected images may be, for example, color images, grayscale images, infrared images, or depth images; their format and number are not particularly limited in this embodiment.
Hand detection:
A hand region (a left-hand region and/or a right-hand region) is detected in the acquired image, and the pixel coordinates of the hand's center point within that region are obtained (3D coordinates if the input image is a depth map). If only a left-hand region or only a right-hand region is detected, the center-point coordinates of that one region are obtained; if both a left-hand region and a right-hand region are detected, the center-point coordinates of each region are obtained separately.
3D joint point detection:
The detected hand region is cropped from the acquired image, and joint points are detected within the cropped region to obtain hand joint point coordinate information (for example, 3D joint point coordinates). For a left-hand region this yields left-hand joint point coordinates; for a right-hand region, right-hand joint point coordinates.
And then, the AR equipment sends the hand joint point information to the cloud server. The hand joint point information at least includes the hand joint point coordinate information, such as left hand joint point coordinate information (e.g., 3D joint point coordinate information) and/or right hand joint point coordinate information (e.g., 3D joint point coordinate information).
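A hypothetical wire format for this report might look as follows. The patent only requires the joint coordinates (plus, optionally, time information), so the field names and the JSON encoding below are illustrative assumptions:

```python
import json

def build_hand_joint_message(user_id, left_joints, right_joints, capture_time):
    """Assumed message an AR device could send the cloud server: per-user
    hand joint coordinates, with the image-capture time as time info."""
    return json.dumps({
        "user_id": user_id,
        "time": capture_time,
        "left_hand": left_joints,    # e.g. 21 [x, y, z] triples, or None
        "right_hand": right_joints,  # same layout for the right hand
    })

# One user whose left hand was detected (21 placeholder joints), right hand not.
msg = build_hand_joint_message("user-1", [[0.0, 0.0, 0.5]] * 21, None, 1000.0)
```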
As an embodiment, the above-mentioned hand joint point information may further include time information. The time information here may be the time at which the image was acquired as described above. And if the AR device is an AR device corresponding to a user who initiates or constructs the AR application scene, the hand joint point information further includes 3D coordinate information of a 3D virtual object in the AR application scene.
Through the above operations, different AR devices send the cloud server the hand joint point information of different users at the same moment in the same AR application scene.
For the cloud server side:
the cloud server receives the hand joint point information sent by the AR device, and then executes the following operations:
hand three-dimensional reconstruction:
In this embodiment, a basic hand skeleton can be constructed from the hand joint point coordinate information (for example, the 3D coordinates of 21 joints on the left or right hand, as shown in Fig. 3a), as shown in Fig. 3b. The skeleton then drives the 3D virtual hand template to deform, producing a 3D virtual hand in the current hand posture, as shown in Fig. 3c.
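The skeleton-construction step can be sketched as follows, assuming the common 21-joint hand layout (a wrist plus four joints per finger). The parent table and function are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

# Assumed 21-joint layout: joint 0 is the wrist (root); each finger is a
# chain of 4 joints. PARENT[c] is the parent joint of joint c.
PARENT = [-1, 0, 1, 2, 3,    # thumb
          0, 5, 6, 7,        # index
          0, 9, 10, 11,      # middle
          0, 13, 14, 15,     # ring
          0, 17, 18, 19]     # little

def build_skeleton(joints):
    """Turn 21 joint positions into a bone list of (parent, child,
    parent->child vector) tuples -- the 'basic hand skeleton' of Fig. 3b.
    The bone vectors are what would drive the template deformation."""
    joints = np.asarray(joints, dtype=float)
    return [(p, c, joints[c] - joints[p])
            for c, p in enumerate(PARENT) if p >= 0]

# 21 placeholder joints spread along the x-axis.
joints = [[0.0, 0.0, 0.0]] + [[0.01 * i, 0.0, 0.0] for i in range(1, 21)]
bones = build_skeleton(joints)
```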
And (3) natural interaction:
Interaction with the same 3D virtual object is realized through the simulation capabilities of a 3D game engine, such as collision, elasticity, and friction, based on the physical attributes between the multiple 3D virtual hands and the 3D virtual object, such as physical collisions, movement speeds, and the friction of the contact surfaces.
For example, still in the AR game scene where different users play basketball, the 3D game engine calculates in real time the speed, acceleration, movement direction, and position of the 3D basketball from the collisions between the virtual 3D basketball and the 3D virtual hands reconstructed from the users' real hands. Different striking speeds and contact positions trigger different motion states of the ball, keeping the interactive experience close to a real one.
As another embodiment, when implementing interaction with the same 3D virtual object based on physical attributes such as the collisions, movement speeds, and contact-surface friction between the multiple 3D virtual hands and the object, the operation attributes of each user's operation behavior in the AR application scene may also be taken into account. Operation attributes, such as the force and direction with which an operation is performed, make the calculated variation of the 3D virtual object more accurate, for example the changes in speed, acceleration, movement direction, and position of a basketball when a user strikes it, and keep the interactive experience close to a real one.
Optionally, after the natural interaction, the position and speed of the 3D virtual object may have changed, and the variation (including but not limited to position and speed) is sent to the AR devices in the AR application scene. Any AR device that receives the variation redraws the 3D virtual object accordingly and displays it. For example, if the variation is the updated position of the 3D virtual object, the AR device draws the object at that updated position. If the variation is the object's speed, the local rendering frame rate may be higher than the rate at which the cloud's natural-interaction simulation updates, so before the next state update arrives the AR device can locally compute a new position from the speed and acceleration and render the object there. In the end, interactions such as editing of the same 3D virtual object in the AR application scene, which previously had to happen in sequence, can be performed by multiple users at the same time.
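The local prediction between cloud updates can be sketched as a constant-acceleration extrapolation; this is a minimal illustration of the idea described above, with all names assumed:

```python
def extrapolate_position(pos, vel, acc, dt):
    """Predict the 3D object's position dt seconds after the last cloud
    update, from the last received velocity and acceleration
    (constant-acceleration model)."""
    return [p + v * dt + 0.5 * a * dt * dt
            for p, v, a in zip(pos, vel, acc)]

# Ball at (0, 1, 0) moving +x at 2 m/s under gravity, one 16 ms frame ahead.
new_pos = extrapolate_position([0.0, 1.0, 0.0],
                               [2.0, 0.0, 0.0],
                               [0.0, -9.8, 0.0],
                               0.016)
```

The AR device would render the object at `new_pos` each frame until the next authoritative variation arrives from the cloud, which then overwrites the locally predicted state.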
It should be noted that, in this embodiment, the cloud server may further identify an action corresponding to the specified portion according to the obtained structural information of the specified portion of each user; responding to the action in the AR application scenario;
as an embodiment, when the action indicates a menu click in the AR application scene, such as the click shown in fig. 4, the menu click is responded to provide a clicked menu in the AR application scene.
As another embodiment, when the action indicates a grab operation under the AR application scene and the joint point of the designated part is located inside the 3D virtual object, the grabbed 3D virtual object is provided in response to the grab operation.
By recognizing the actions of the designated parts of different users, the cloud server side likewise allows multiple users to operate the same 3D virtual object at the same time, even for interactions such as editing that previously had to happen in sequence.
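The grab condition described above (a grab gesture whose designated joint point lies inside the 3D virtual object) can be sketched as follows, modeling the object, purely as an assumption, by a bounding sphere:

```python
def is_grab(gesture, joint, obj_center, obj_radius):
    """Grab rule from the description: the recognized gesture must be a
    grab AND the designated joint point must lie inside the 3D virtual
    object (approximated here by a bounding sphere)."""
    dist2 = sum((j - c) ** 2 for j, c in zip(joint, obj_center))
    return gesture == "grab" and dist2 <= obj_radius ** 2

# Joint 0.1 m from the center of a 0.15 m-radius object: inside, so grabbed.
grabbed = is_grab("grab", (0.1, 0.0, 0.0), (0.0, 0.0, 0.0), 0.15)
```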
The method provided by the embodiment of the present application is described above, and the apparatus provided by the embodiment of the present application is described below:
referring to fig. 5, fig. 5 is a structural diagram of an apparatus provided in an embodiment of the present application. The device is applied to a cloud server or AR equipment with a function similar to the server. As shown in fig. 5, the apparatus may include:
the obtaining unit is used for obtaining the structural information of the specified parts of different users at the same time under the same AR application scene;
the processing unit is used for driving the 3D virtual part template corresponding to the specified part to deform according to the acquired structural information of the specified part of each user to obtain the 3D virtual part corresponding to the user; the 3D virtual part is used for matching the current posture of the appointed part of the user;
the determining unit is used for identifying the operation behaviors of the users in the AR application scene according to the 3D virtual parts corresponding to the users, and determining the variable quantity of the 3D virtual object which changes based on the operation behaviors of the users in the AR application scene according to the operation behaviors of the users in the AR application scene;
and the sending unit is used for returning the variable quantity to the AR equipment corresponding to different users, and the AR equipment draws and displays the 3D virtual object according to the received variable quantity.
As an embodiment, the obtaining of the specified part structured information of different users at the same time in the same AR application scene includes:
receiving the structured information of each user's designated part at a first moment, as reported by the AR device corresponding to that user in the AR application scene; the structured information reported by the AR devices of different users is obtained from the images those devices collected at the first moment.
As an embodiment, the driving, according to the obtained structural information of the designated part of each user, a 3D virtual part template corresponding to the designated part to deform, and obtaining the 3D virtual part corresponding to the user includes:
for the obtained structured information of each user's designated part, constructing a part skeleton from that information;
searching a 3D virtual part template matched with the specified part in the constructed 3D virtual part template;
and deforming the searched 3D virtual part template according to the part skeleton to obtain a 3D virtual part corresponding to the user.
As an embodiment, the determining, according to the operation behaviors of the users in the AR application scene, a variation of the 3D virtual object that varies based on the operation behaviors of the users in the AR application scene includes:
simulating, by a deployed 3D game engine, operating behaviors of each user in the AR application scene using the configured physical attributes to determine a variation that causes a change in the 3D virtual object; or simulating the operation behavior of each user in the AR application scene by using the configured physical attributes through the deployed 3D game engine according to the obtained operation attributes of the operation behavior executed by each user in the AR application scene so as to determine the variation causing the change of the 3D virtual object;
wherein the variation includes at least one of: speed, acceleration, direction of motion, position; the physical attributes include at least: impact, friction of the contact surfaces, elasticity.
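As a minimal illustration of how a deployed engine could produce every variation listed above (speed, acceleration, movement direction, position) in one simulation step, the sketch below applies a force over one timestep; the explicit-Euler integrator and all names are assumptions, not the patent's implementation:

```python
import math

def simulate_step(state, force, mass, dt):
    """One explicit-Euler physics step that yields each listed variation:
    acceleration from force, velocity and speed, motion direction, and
    updated position."""
    ax, ay, az = (f / mass for f in force)
    vx = state["vel"][0] + ax * dt
    vy = state["vel"][1] + ay * dt
    vz = state["vel"][2] + az * dt
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    direction = [v / speed for v in (vx, vy, vz)] if speed > 0 else [0.0, 0.0, 0.0]
    position = [p + v * dt for p, v in zip(state["pos"], (vx, vy, vz))]
    return {"pos": position, "vel": [vx, vy, vz],
            "acc": [ax, ay, az], "speed": speed, "dir": direction}

# A 2 kg object at rest pushed with 10 N along +x for 0.1 s.
out = simulate_step({"pos": [0.0, 0.0, 0.0], "vel": [0.0, 0.0, 0.0]},
                    force=(10.0, 0.0, 0.0), mass=2.0, dt=0.1)
```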
According to one embodiment, the processing unit further identifies an action corresponding to the designated part according to the obtained structured information of the designated part of each user; responding to the action in the AR application scenario;
when the action indicates the menu click under the AR application scene, responding to the menu click to provide a clicked menu under the AR application scene; when the action indicates a grabbing operation in the AR application scene and the joint point of the specified part is located inside the 3D virtual object, providing the grabbed 3D virtual object in response to the grabbing operation.
As an embodiment, the designated site structured information includes at least: coordinate information of left-hand and/or right-hand upper-relevant nodes;
the coordinate information is 3D coordinate information or 2D coordinate information;
when the coordinate information is 2D coordinate information, before driving the 3D virtual location template corresponding to the specified location to deform according to the obtained structural information of the specified location of each user, further comprising: and converting the obtained 2D coordinate information into 3D coordinate information.
Thus, the description of the structure of the device shown in fig. 5 is completed.
Correspondingly, the application also provides a hardware structure of the device shown in fig. 5. Referring to fig. 6, the hardware structure may include an AR scene generator, an interactive device, a processor, and a display.
In this embodiment, the AR scene generator is configured to generate an AR application scene, for example, finally generate a corresponding AR application scene through modeling, managing, and drawing of the AR application scene.
The interaction device, such as a handle, a grip sensor, a stylus, a remote controller, a touch screen, or keys, realizes the input and output of sensory signals and environment-control operation signals. Applied to this embodiment, it can interact with external devices to obtain the structured information of the designated parts of different users at the same moment in the same AR application scene.
The processor is used for driving the 3D virtual part template corresponding to the designated part to deform according to the acquired structural information of the designated part of each user to obtain the 3D virtual part corresponding to the user; the 3D virtual part is used for matching the current posture of the designated part of the user; and (c) a second step of,
identifying the operation behaviors of the users in the AR application scene according to the 3D virtual parts corresponding to the users, and determining the variable quantity of the 3D virtual object which is changed based on the operation behaviors of the users in the AR application scene according to the operation behaviors of the users in the AR application scene; and returning the variable quantity to AR equipment corresponding to different users;
The display, such as a see-through head-mounted display, draws and displays the 3D virtual object according to the received variation. In this embodiment, the display is primarily responsible for presenting the fused virtual-and-real signal.
In this embodiment, the AR device further includes a tracker, such as a gaze tracking device, for acquiring the variation of the user's gaze so that the interaction device obtains signals in real time.
As an embodiment, the above-mentioned interaction device further obtains an operation attribute of each user performing an operation behavior in the AR application scene. Here, the operation attribute may be a strength, a direction, and the like of executing the operation behavior, and the embodiment is not particularly limited.
When the operation attribute of each user executing the operation behavior in the AR application scene is further obtained through the above interaction device, as an embodiment, the processor may utilize the configured physical attribute through the deployed 3D game engine, and simulate the operation behavior of each user in the AR application scene according to the operation attribute to determine the variation causing the change of the 3D virtual object. The variation includes at least one of: speed, acceleration, direction of motion, position; the physical attributes include at least: impact, friction of the contact surfaces, elasticity.
As an embodiment, the processor driving, according to the acquired structural information of each user's designated part, the 3D virtual part template corresponding to the designated part to deform, and obtaining the 3D virtual part corresponding to the user, includes:
for the acquired structural information of each user's designated part, constructing a part skeleton according to that structural information;
searching the constructed 3D virtual part templates for a template matching the designated part;
and deforming the found 3D virtual part template according to the part skeleton to obtain the 3D virtual part corresponding to the user.
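The three steps above — build a part skeleton, look up a matching template, deform it — might be sketched as follows. The template contents, joint names, and the crude per-joint rigid binding are hypothetical stand-ins for a real skinned hand model; none of them come from the patent itself.

```python
TEMPLATES = {
    # Rest-pose joint positions of a hypothetical one-finger "hand" template.
    "finger": {"root": (0.0, 0.0, 0.0), "mid": (0.0, 3.0, 0.0), "tip": (0.0, 5.0, 0.0)},
}

def build_skeleton(structural_info):
    """Step 1: a part skeleton as joint name -> observed 3D position."""
    return dict(structural_info)

def match_template(skeleton, templates=TEMPLATES):
    """Step 2: pick the template whose joint set matches the skeleton's."""
    for name, tpl in templates.items():
        if set(tpl) == set(skeleton):
            return name, tpl
    raise KeyError("no template matches the observed joints")

def deform(template, skeleton, vertices):
    """Step 3: shift each mesh vertex by the displacement of its bound joint
    (rigid binding standing in for real skinning weights)."""
    out = []
    for joint, (x, y, z) in vertices:
        dx = skeleton[joint][0] - template[joint][0]
        dy = skeleton[joint][1] - template[joint][1]
        dz = skeleton[joint][2] - template[joint][2]
        out.append((joint, (x + dx, y + dy, z + dz)))
    return out
```

In practice the deformation would use linear-blend skinning with per-vertex weights; the sketch only shows how the observed skeleton drives the rest-pose template into the user's current posture.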
As an embodiment, the interaction device obtaining the structural information of the designated parts of different users at the same moment in the same AR application scene includes:
receiving the structural information of each user's designated part at a first moment, as reported by the AR device corresponding to that user in the AR application scene; the structural information reported by the AR devices of the different users is obtained from the images those AR devices acquired at the first moment.
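This collection step can be pictured as a small server-side buffer that groups incoming reports by capture moment, so that one frame holds every user's designated-part data for that timestamp. The class and method names here are hypothetical; the patent does not prescribe any particular data structure.

```python
from collections import defaultdict

class FrameCollector:
    """Groups the structural-information reports that different users' AR
    devices captured at the same moment into a single per-timestamp frame."""

    def __init__(self):
        self._frames = defaultdict(dict)  # timestamp -> {user_id: joints}

    def report(self, timestamp, user_id, joints):
        """Record the structural info one device extracted from the image
        it acquired at `timestamp`."""
        self._frames[timestamp][user_id] = joints

    def frame_at(self, timestamp):
        """All users' structural information for one moment."""
        return dict(self._frames[timestamp])
```

With such a buffer, the processor can pull the complete "first moment" frame and drive every user's template deformation from a consistent snapshot.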
According to one embodiment, the processor further identifies, from the acquired structural information of each user's designated part, the action corresponding to the designated part, and responds to that action in the AR application scene;
wherein, when the action indicates a menu click in the AR application scene, the menu click is responded to by providing the clicked menu in the AR application scene; and when the action indicates a grabbing operation in the AR application scene and a joint point of the designated part is located inside the 3D virtual object, the grabbing operation is responded to by providing the grabbed 3D virtual object.
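The two action responses can be sketched as a small dispatcher. The axis-aligned bounding-box test is an assumed simplification of the "joint point inside the 3D virtual object" check, and the return strings are placeholders for whatever the scene actually does.

```python
def point_in_box(p, lo, hi):
    """True when point p lies inside the axis-aligned box [lo, hi]."""
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

def respond_to_action(action, joint_point, obj_lo, obj_hi):
    """Dispatch a recognized hand action: a menu click always provides the
    clicked menu, while a grab succeeds only when the tracked joint point
    lies inside the 3D virtual object (approximated by its bounding box)."""
    if action == "menu_click":
        return "menu_provided"
    if action == "grab" and point_in_box(joint_point, obj_lo, obj_hi):
        return "object_grabbed"
    return "no_response"
```

A production system would test the joint against the object's actual collision mesh rather than a bounding box, but the containment condition is the same.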
As an embodiment, the structural information of the designated part includes at least: coordinate information of the joint points on the left hand and/or the right hand;
the coordinate information is 3D coordinate information or 2D coordinate information;
and when the coordinate information is 2D coordinate information, before driving the 3D virtual part template corresponding to the designated part to deform according to the acquired structural information of each user's designated part, the method further includes: converting the obtained 2D coordinate information into 3D coordinate information.
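One common way to perform this 2D-to-3D conversion — assumed here, since the patent leaves the conversion unspecified — is pinhole back-projection of each joint pixel using a known depth and the AR device camera's calibrated intrinsics (fx, fy, cx, cy):

```python
def lift_2d_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D joint pixel (u, v) with known depth into 3D camera
    coordinates using the pinhole model: x = (u - cx) * z / fx, etc."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

The depth could come from a depth sensor or from a learned hand-depth estimate; either way, once every joint is lifted to 3D, the template-driving step proceeds exactly as in the 3D-coordinate case.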
Thus, the description of the structure of the AR apparatus shown in fig. 6 is completed.
Based on the same inventive concept as the above method, an embodiment of the present application further provides a machine-readable storage medium storing computer instructions which, when executed by a processor, implement the method disclosed in the above examples of the present application.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. A multi-user virtual interaction method in an augmented reality (AR) application, characterized by comprising:
acquiring the structural information of the designated parts of different users at the same moment in the same AR application scene;
driving, according to the acquired structural information of each user's designated part, a 3D virtual part template corresponding to the designated part to deform, to obtain the 3D virtual part corresponding to the user, wherein the 3D virtual part matches the current posture of the user's designated part;
identifying the operation behavior of each user in the AR application scene according to the user's corresponding 3D virtual part, and determining, according to the operation behaviors of the users in the AR application scene, the variation of the 3D virtual object caused by those operation behaviors;
and returning the variation to the AR devices corresponding to the different users, the AR devices rendering and displaying the 3D virtual object according to the received variation.
2. The method of claim 1, wherein acquiring the structural information of the designated parts of different users at the same moment in the same AR application scene comprises:
receiving the structural information of each user's designated part at a first moment, as reported by the AR device corresponding to that user in the AR application scene, wherein the structural information reported by the AR devices of the different users is obtained from the images those AR devices acquired at the first moment.
3. The method of claim 1, wherein driving, according to the acquired structural information of each user's designated part, a 3D virtual part template corresponding to the designated part to deform, and obtaining the 3D virtual part corresponding to the user, comprises:
for the acquired structural information of each user's designated part, constructing a part skeleton according to that structural information;
searching the constructed 3D virtual part templates for a template matching the designated part;
and deforming the found 3D virtual part template according to the part skeleton to obtain the 3D virtual part corresponding to the user.
4. The method of claim 1, wherein determining, according to the operation behaviors of the users in the AR application scene, the variation of the 3D virtual object caused by those operation behaviors comprises:
simulating, through a deployed 3D game engine and using configured physical attributes, the operation behavior of each user in the AR application scene, to determine the variation caused to the 3D virtual object; or, according to acquired operation attributes with which each user performs an operation behavior in the AR application scene, simulating, through the deployed 3D game engine and using the configured physical attributes, the operation behavior of each user in the AR application scene, to determine the variation caused to the 3D virtual object;
wherein the variation includes at least one of: speed, acceleration, direction of motion, and position; and the physical attributes include at least: collision, friction of the contact surfaces, and elasticity.
5. The method of claim 1, further comprising:
identifying, from the acquired structural information of each user's designated part, the action corresponding to the designated part;
and responding to the action in the AR application scene;
wherein, when the action indicates a menu click in the AR application scene, the menu click is responded to by providing the clicked menu in the AR application scene; and when the action indicates a grabbing operation in the AR application scene and a joint point of the designated part is located inside the 3D virtual object, the grabbing operation is responded to by providing the grabbed 3D virtual object.
6. The method of any one of claims 1 to 5, wherein the structural information of the designated part includes at least: coordinate information of the joint points on the left hand and/or the right hand;
the coordinate information is 3D coordinate information or 2D coordinate information;
and when the coordinate information is 2D coordinate information, before driving the 3D virtual part template corresponding to the designated part to deform according to the acquired structural information of each user's designated part, the method further comprises: converting the obtained 2D coordinate information into 3D coordinate information.
7. A multi-person virtual interaction device in an Augmented Reality (AR) application, the device comprising:
an obtaining unit, configured to acquire the structural information of the designated parts of different users at the same moment in the same AR application scene;
a processing unit, configured to drive, according to the acquired structural information of each user's designated part, the 3D virtual part template corresponding to the designated part to deform, to obtain the 3D virtual part corresponding to the user, wherein the 3D virtual part matches the current posture of the user's designated part;
a determining unit, configured to identify the operation behavior of each user in the AR application scene according to the user's corresponding 3D virtual part, and to determine, according to the operation behaviors of the users in the AR application scene, the variation of the 3D virtual object caused by those operation behaviors;
and a sending unit, configured to return the variation to the AR devices corresponding to the different users, the AR devices rendering and displaying the 3D virtual object according to the received variation.
8. An AR device, the AR device comprising: an AR scene generator, an interactive device, a processor, and a display;
the AR scene generator is used for generating an AR application scene;
the interaction device is configured to interact with an external device to obtain the structural information of the designated parts of different users at the same moment in the same AR application scene;
the processor is configured to drive, according to the acquired structural information of each user's designated part, the 3D virtual part template corresponding to the designated part to deform, to obtain the 3D virtual part corresponding to the user, wherein the 3D virtual part matches the current posture of the user's designated part; and,
to identify the operation behavior of each user in the AR application scene according to the user's corresponding 3D virtual part, determine, according to the operation behaviors of the users in the AR application scene, the variation of the 3D virtual object caused by those operation behaviors, and return the variation to the AR devices corresponding to the different users;
and the display is configured to render and display the 3D virtual object according to the received variation.
9. The AR device of claim 8, wherein the processor driving, according to the acquired structural information of each user's designated part, the 3D virtual part template corresponding to the designated part to deform, and obtaining the 3D virtual part corresponding to the user, comprises:
for the acquired structural information of each user's designated part, constructing a part skeleton according to that structural information;
searching the constructed 3D virtual part templates for a template matching the designated part;
and deforming the found 3D virtual part template according to the part skeleton to obtain the 3D virtual part corresponding to the user.
10. The AR device of claim 8,
the processor determining, according to the operation behaviors of the users in the AR application scene, the variation of the 3D virtual object caused by those operation behaviors includes:
simulating, through a deployed 3D game engine and using configured physical attributes, the operation behavior of each user in the AR application scene, to determine the variation caused to the 3D virtual object; or, further obtaining, through the interaction device, the operation attributes with which each user performs an operation behavior in the AR application scene, and simulating, through the deployed 3D game engine and using the configured physical attributes, the operation behavior of each user in the AR application scene according to those operation attributes, to determine the variation caused to the 3D virtual object;
wherein the variation includes at least one of: speed, acceleration, direction of motion, and position; and the physical attributes include at least: collision, friction of the contact surfaces, and elasticity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210927365.3A CN115317907A (en) | 2022-08-03 | 2022-08-03 | Multi-user virtual interaction method and device in AR application and AR equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115317907A true CN115317907A (en) | 2022-11-11 |
Family
ID=83922695
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||