CN111408137A - Scene interaction method, device, equipment and medium based on augmented reality


Info

Publication number
CN111408137A
CN111408137A
Authority
CN
China
Prior art keywords
target user
augmented reality
virtual object
target
reality space
Prior art date
Legal status
Pending
Application number
CN202010128561.5A
Other languages
Chinese (zh)
Inventor
姚润昊
徐杰
Current Assignee
Paper Games Inc
Original Assignee
Paper Games Inc
Priority date
Filing date
Publication date
Application filed by Paper Games Inc
Priority to CN202010128561.5A
Publication of CN111408137A

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/50 Features of games characterized by details of game servers
    • A63F 2300/55 Details of game data or player data management
    • A63F 2300/5546 Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F 2300/5573 Details of game data or player data management using player registration data: player location
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character
    • A63F 2300/80 Features of games specially adapted for executing a specific type of game
    • A63F 2300/8082 Virtual reality

Abstract

The invention discloses a scene interaction method, device, equipment and medium based on augmented reality. Scene matching is carried out according to environment image information, target users in the same scene are determined, and an augmented reality space is established for each target user. According to the virtual object information of each target user, the virtual object of that target user and the virtual objects of at least one other target user in the same scene are generated in each target user's augmented reality space. When an operation is performed on a target user's virtual object in that user's augmented reality space, the corresponding operation is also performed on that virtual object in the augmented reality spaces of the other target users. By letting multiple users interact through augmented reality spaces in the same scene, the method increases the sense of reality and immersion of the augmented reality space, enables users to socialize in the augmented reality space, and improves the user experience.

Description

Scene interaction method, device, equipment and medium based on augmented reality
Technical Field
The invention relates to the field of augmented reality, and in particular to a scene interaction method, device, equipment and medium based on augmented reality.
Background
Augmented reality is a technology that skillfully fuses virtual information with the real world, and a relatively new approach to integrating real-world and virtual-world information. Entity information that would otherwise be difficult to experience within the space of the real world is simulated on the basis of computer science and related technologies, and the resulting virtual information content is superimposed onto the real world for effective application, where it can be perceived by the human senses, thereby achieving a sensory experience that goes beyond reality. Once the real environment and the virtual objects are superimposed, they exist simultaneously in the same picture and the same space.
In the prior art, augmented reality generally displays only the virtual item information specified by a user, such as the user's game character information, in the augmented reality space of that user's terminal. A user can see only his or her own virtual item information, and when users need to communicate with one another they can do so only by looking at each other's terminals, so the augmented reality space lacks a sense of reality and immersion.
Disclosure of Invention
The invention provides a scene interaction method, device, equipment and medium based on augmented reality, which can increase the sense of reality and immersion of an augmented reality space and improve the user experience.
In one aspect, the present invention provides a scene interaction method based on augmented reality, where the method includes:
acquiring environment image information of positions of at least two users;
scene matching is carried out on the environment image information of the positions of the at least two users;
determining at least two target users in the same scene according to the scene matching result;
generating an augmented reality space of each target user according to the environment image information of at least two target users in the same scene, wherein the augmented reality space comprises a virtual object of each target user and virtual objects of other target users in the same scene;
acquiring virtual object information of each target user;
generating a virtual object of each target user and at least one virtual object of other target users in the same scene with the target user in the augmented reality space of each target user according to the virtual object information of each target user and the environment image information of each target user;
and responding to the user operation instruction, executing corresponding operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user.
Another aspect provides an augmented reality-based scene interaction apparatus, the apparatus including: an environment image acquisition module, a scene matching module, a target user determination module, an augmented reality space generation module, a virtual object information acquisition module, a virtual object generation module and a virtual object operation module;
the environment image acquisition module is used for acquiring environment image information of positions of at least two users;
the scene matching module is used for carrying out scene matching on the environment image information of the positions of the at least two users;
the target user determination module is used for determining at least two target users in the same scene according to the scene matching result;
the augmented reality space generation module is used for generating an augmented reality space of each target user according to the environment image information of at least two target users in the same scene, and the augmented reality space comprises a virtual object of each target user and virtual objects of other target users in the same scene;
the virtual object information obtaining module is used for obtaining virtual object information of each target user;
the virtual object generation module is used for generating a virtual object of each target user and at least one virtual object of other target users in the same scene with the target user in the augmented reality space of each target user according to the virtual object information of each target user and the environment image information of each target user;
the virtual object operation module is used for responding to a user operation instruction, executing corresponding operation on a virtual object of a target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user.
In another aspect, a device is provided, where the device includes a processor and a memory, the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the augmented reality-based scene interaction method described above.
Another aspect provides a storage medium, where the storage medium stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the augmented reality-based scene interaction method described above.
The invention provides a scene interaction method, device, equipment and medium based on augmented reality: scene matching is carried out according to environment image information, the target users in the same scene are determined, and an augmented reality space is established for each target user. According to the virtual object information of each target user, the virtual object of that target user and the virtual objects of at least one other target user in the same scene are generated in each target user's augmented reality space; and when an operation such as a position move, an appearance change or a posture change is performed on a target user's virtual object in that user's augmented reality space, the same operation is performed on that virtual object in the augmented reality spaces of the other target users. By letting multiple users interact through a shared augmented reality space in the same scene, the method increases the sense of reality and immersion of the augmented reality space, enables users to socialize in the augmented reality space, and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic view of an application scene of a scene interaction method based on augmented reality according to an embodiment of the present invention;
fig. 2 is a flowchart of a scene interaction method based on augmented reality according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for generating a virtual object in an augmented reality space in an augmented reality-based scene interaction method according to an embodiment of the present invention;
fig. 4 is a schematic interface diagram of a shared augmented reality space in an outdoor scene in a scene interaction method based on augmented reality according to an embodiment of the present invention;
fig. 5 is a schematic interface diagram of a shared augmented reality space in an indoor scene in the augmented reality-based scene interaction method according to the embodiment of the present invention;
fig. 6 is a flowchart of a method for operating a virtual object in an augmented reality space in an augmented reality-based scene interaction method according to an embodiment of the present invention;
fig. 7 is a flowchart of a method for performing a position moving operation on a virtual object in an augmented reality space in an augmented reality-based scene interaction method according to an embodiment of the present invention;
fig. 8 is a flowchart of a method for changing an appearance of a virtual object in an augmented reality space in an augmented reality-based scene interaction method according to an embodiment of the present invention;
fig. 9 is a flowchart of a method for changing a posture of a virtual object in an augmented reality space in an augmented reality-based scene interaction method according to an embodiment of the present invention;
fig. 10 is a flowchart of a method for updating an augmented reality space after a user position changes in a scene interaction method based on augmented reality according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a scene interaction device based on augmented reality according to an embodiment of the present invention;
fig. 12 is a schematic hardware structure diagram of an apparatus for implementing the method provided by the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. Moreover, the terms "first," "second," and the like, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
Referring to fig. 1, a schematic diagram of an application scene of the augmented reality-based scene interaction method according to an embodiment of the present invention is shown. The application scene includes user terminals 110 and a server 120. The server 120 receives the environment image information sent by at least two user terminals 110 and determines whether they are in the same scene; if they are determined to be in the same scene, the multi-user augmented reality scene function may be started, and the augmented reality spaces of the target user terminals 110 determined to be in the same scene are established. The server 120 acquires the virtual object information of all the target user terminals 110 and generates at least one virtual object in each augmented reality space. If a user performs an operation on his or her own virtual object, the operation, once executed, takes effect throughout the whole shared augmented reality space.
In the embodiment of the present invention, the user terminal 110 includes physical devices such as smartphones, desktop computers, tablet computers, notebook computers, digital assistants and smart wearable devices, and may also include software running on such devices, such as application programs. The operating system running on the user terminal 110 may include, but is not limited to, Android, iOS, Linux, Unix, Windows and the like. The user terminal 110 includes a UI (User Interface) layer through which it provides the display of the augmented reality space and the virtual objects, and it sends environment image information, virtual object information and the like to the server 120 based on an API (Application Programming Interface).
In the embodiment of the present invention, the server 120 may include a server running independently, or a distributed server, or a server cluster composed of a plurality of servers. The server 120 may include a network communication unit, a processor, a memory, and the like.
Referring to fig. 2, a scene interaction method based on augmented reality, applicable to the server side, is shown; the method includes:
S210, acquiring environment image information of the positions of at least two users;
S220, carrying out scene matching on the environment image information of the positions of the at least two users;
S230, determining at least two target users in the same scene according to the scene matching result.
Specifically, a user may send a multi-user augmented reality scene generation request to the server through a user terminal. Based on this request, the server acquires the environment image information of each requesting user in order to determine whether the users are in the same scene. The environment image information describes the user's surroundings and is captured by an image acquisition device in the user terminal, such as a camera. When the surrounding environments match, the target users in the same scene are determined; the target users, i.e., the users who will share the multi-user augmented reality space, are thus obtained by scene matching of their locations, and there are at least two of them.
S240, generating an augmented reality space of each target user according to the environment image information of at least two target users in the same scene, wherein the augmented reality space comprises a virtual object of each target user and virtual objects of other target users in the same scene;
specifically, each user terminal generates an augmented reality space corresponding to the user terminal of the user terminal, but the generated augmented reality space is an interactive augmented reality space in the same scene, each augmented reality space can include a virtual object of each target user and virtual objects of other target users in the same scene, that is, position information of other user terminals can be acquired in the augmented reality space of each user terminal, and interaction is performed among the user terminals, so that the augmented reality space shared by multiple users is realized.
When the augmented reality space is generated, real object information in the environment detected by each user terminal can be acquired, and the relative position between each user terminal can be determined according to the distance, the angle and the position of the real object information, so that the position information of other user terminals can be acquired.
When the augmented reality space is generated, feature point information of each user terminal can be obtained, the corresponding user terminal can be tracked through the feature point information, the feature points of each user terminal are uploaded and stored in the server, the respective feature points of different users in the same scene are compared in the server, and the virtual object can be rendered to the same position in the augmented reality space on different user terminals.
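The sketch below makes this server-side bookkeeping concrete: terminals upload their feature points, and the first observed pose of an anchor fixes where a virtual object is rendered for everyone. All class and method names are assumptions for illustration; the embodiment does not define such an interface.

    from dataclasses import dataclass, field

    @dataclass
    class SceneSession:
        """Server-side record for one matched scene (names are hypothetical)."""
        feature_points: dict = field(default_factory=dict)  # user_id -> points
        shared_anchors: dict = field(default_factory=dict)  # anchor_id -> pose

        def upload_features(self, user_id, points):
            # Each terminal stores its tracked feature points on the server.
            self.feature_points[user_id] = points

        def resolve_anchor(self, anchor_id, observed_pose):
            # The first terminal to report an anchor fixes its pose; later
            # terminals reuse it, so the same virtual object is rendered to
            # the same position in every user's augmented reality space.
            return self.shared_anchors.setdefault(anchor_id, observed_pose)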
S250, acquiring virtual object information of each target user;
s260, generating a virtual object of each target user and at least one virtual object of other target users in the same scene with the target user in the augmented reality space of each target user according to the virtual object information of each target user and the environment image information of each target user;
further, referring to fig. 3, the generating the virtual object of each target user and the virtual object of at least one other target user in the same scene as the target user in each augmented reality space according to the virtual object information of each target user and the environment image information of each target user includes:
s310, generating a virtual object of each target user in the augmented reality space of each target user according to the virtual object information of each target user;
s320, determining the orientation relation between the augmented reality spaces of each target user and/or characteristic points in the environment image information according to the environment image information, wherein the characteristic points are characteristic points of each target user terminal;
s330, generating at least one virtual object of other target users in the same scene with the target user in each augmented reality space according to the orientation relation among the augmented reality spaces and/or the characteristic points.
Specifically, the virtual object information of each target user is acquired. The virtual object information may be the game character information of the user in a game, and may include 3D model structure data, color data, dynamic data and the like of the user's game character. First, according to the user's game character information, the corresponding game character is generated in the augmented reality space of that user's terminal. Then, other user terminals are detected in the augmented reality space of each user terminal; when another user terminal is detected, its game character information is acquired through interaction with it, and its game character is correspondingly generated in the augmented reality space of the local user terminal. When another user terminal turns off the shared augmented reality scene function, or disappears from the augmented reality space because of a position change, the game character corresponding to that user terminal disappears as well.
When other user terminals are detected in the augmented reality space of each user terminal, the orientation information between the user terminals in the current environment image can be acquired based on the environment image information, and the other user terminals can thereby be positioned. The orientation information can be obtained from detected plane information, vertical plane information, curved surface information and the like, where the plane information includes distances and angles. Alternatively, the feature points of each user terminal in the current environment image can be acquired, and the other user terminals identified and tracked based on those feature points; or the other user terminals can be identified and tracked based on the feature points and the orientation information together.
Fig. 4 shows a schematic interface diagram of user A's terminal device when users A, B and C share an augmented reality space under the complex light sources of an outdoor scene: the virtual objects of user B and user C can be seen on user A's terminal device, and the size, illumination and the like of each virtual object can be adjusted according to the rules of perspective, increasing the sense of reality. In this case, when tracking feature points or calculating orientation information, the interference of multiple light sources, shadows and the like in the environment image information needs to be eliminated. Fig. 5 shows the corresponding schematic interface diagram of user A's terminal device when users A, B and C share an augmented reality space under the single light source of an indoor scene; the virtual objects of user B and user C can likewise be seen on user A's terminal device and adjusted according to the rules of perspective to increase the sense of reality.
When recognition and tracking are performed based on orientation information, suppose for example that user A and user B share an augmented reality space. User A's terminal device detects the plane information, vertical plane information or curved surface information of each object in its environment image and obtains the distance and angle between each object and itself; user B's terminal device does the same for its own environment image. Through the server, the two terminal devices exchange this information, and the distance and angle between them are calculated, so that user A's terminal device can obtain user B's position in user A's augmented reality space and generate user B's virtual object at that position.
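A toy two-dimensional version of this calculation is sketched below: both terminals report the distance and bearing to one common detected object, from which B's offset from A follows. The flat geometry and the shared bearing frame are simplifying assumptions for illustration.

    import math

    def relative_position(dist_a, bearing_a, dist_b, bearing_b):
        """B's offset from A, given each terminal's range and bearing
        (in a shared reference frame, e.g. compass north) to one common
        detected object. A 2-D simplification for illustration only."""
        # Place A at the origin; locate the shared object.
        obj_x = dist_a * math.cos(bearing_a)
        obj_y = dist_a * math.sin(bearing_a)
        # B must sit where the same object lies at (dist_b, bearing_b) from it.
        b_x = obj_x - dist_b * math.cos(bearing_b)
        b_y = obj_y - dist_b * math.sin(bearing_b)
        return (b_x, b_y), math.hypot(b_x, b_y), math.atan2(b_y, b_x)

The returned distance and angle correspond to the quantities the server computes before placing B's virtual object in A's augmented reality space.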
When recognition and tracking are performed based on feature points, suppose again that user A and user B share an augmented reality space. User A's terminal device detects user B's feature points in the environment image and uploads them to the server to request user B's virtual object information; the server then sends user B's virtual object information to user A's terminal, so that user B's virtual object is generated in user A's augmented reality space. Based on user B's feature points, user A can track user B's terminal and update the position of user B's virtual object in user A's augmented reality space in real time. When user A can no longer detect user B's feature point information, user B's virtual object likewise disappears from user A's augmented reality space.
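The client-side lifecycle just described might look like the following sketch, in which the server interface and the pose estimator are hypothetical stand-ins for whatever AR framework the terminal actually uses.

    class RemoteAvatar:
        """Tracks one peer's virtual object in the local augmented reality
        space (illustrative sketch; `server` is an assumed interface)."""

        def __init__(self, server, peer_id):
            self.server = server
            self.peer_id = peer_id
            self.object_info = None

        def on_features_detected(self, points):
            if self.object_info is None:
                # Upload the peer's feature points and fetch its virtual object.
                self.object_info = self.server.request_virtual_object(
                    self.peer_id, points)
            # Re-anchor the object to the tracked points every frame.
            self.object_info["pose"] = estimate_pose(points)

        def on_features_lost(self):
            # The peer left the view or closed sharing: its character disappears.
            self.object_info = None

    def estimate_pose(points):
        # Placeholder: a real terminal would take the pose from its AR tracker.
        n = len(points)
        return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)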
By tracking the characteristic points or the orientation information, the interaction of the augmented reality space between the users can be carried out, and the augmented reality space shared by a plurality of users in the same scene is realized.
And S270, responding to the user operation instruction, executing corresponding operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user.
Further, referring to fig. 6, responding to the user operation instruction, performing the corresponding operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user, includes:
S610, in response to a position moving instruction, performing a position moving operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user;
S620, and/or, in response to an appearance change instruction, performing an appearance change operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user;
S630, and/or, in response to a gesture change instruction, performing a gesture change operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user.
Specifically, the users may interact with one another: when a user performs an operation on his or her own virtual object, for example an appearance change, a position move or a posture change, the result of the operation is displayed in the augmented reality spaces of the other users as well. After a user performs an operation on his or her virtual object, that user's terminal feeds the operation information back to the server. Meanwhile, any other user terminal that detects this user's feature points in its augmented reality space, or that determines from orientation information that this user is located in its augmented reality space, reports this to the server. Based on the operation information, the server sends the operation to those other user terminals, and the virtual object on which the operation has been performed is displayed in their augmented reality spaces.
The interaction of multiple users is carried out through the shared augmented reality space in the same scene, the operation of the users can be synchronously displayed in the augmented reality spaces of other users, the sense of reality and the sense of immersion of the augmented reality spaces can be increased, the users can perform social contact in the augmented reality spaces, and the user experience is improved.
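As a minimal sketch of the relay described above, assuming hypothetical message names and an abstract transport callback: terminals report which peers they currently see, and the server forwards each operation only to the terminals that reported its author as visible.

    class OperationRelay:
        """Server-side fan-out of virtual object operations (illustrative)."""

        def __init__(self):
            self.visible_to = {}  # observer_id -> set of peer ids in view

        def report_visible(self, observer_id, peer_ids):
            # Terminals periodically report the peers detected in their space.
            self.visible_to[observer_id] = set(peer_ids)

        def broadcast(self, actor_id, operation, send):
            # `send(terminal_id, message)` is an assumed transport callback.
            for observer_id, peers in self.visible_to.items():
                if actor_id in peers:
                    send(observer_id, {"actor": actor_id, "op": operation})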
Further, referring to fig. 7, performing, in response to the position moving instruction, a position moving operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in each augmented reality space, includes:
S710, taking the augmented reality space of the target user to be operated as the target augmented reality space;
S720, taking the augmented reality spaces of the other target users as non-target augmented reality spaces;
S730, acquiring target position information of the virtual object of the target user to be operated in the target augmented reality space according to the position moving instruction;
S740, moving the virtual object of the target user to be operated in the target augmented reality space according to the target position information;
S750, acquiring the relative position relation between the virtual object of the target user to be operated and the virtual objects of the other target users in the non-target augmented reality spaces;
S760, moving the virtual object of the target user to be operated in the non-target augmented reality spaces according to the relative position relation and the target position information.
Specifically, in the target augmented reality space corresponding to the target user to be operated, the corresponding virtual object can be moved directly according to the first target position information of the virtual object carried in the position moving instruction. After the move, the movement information of the virtual object corresponding to the target user to be operated is fed back to the server, and the server determines the non-target augmented reality spaces in which that virtual object currently exists; it can do so according to the orientation information of the other users' terminal devices, or according to the feature point information of the target user to be operated fed back by the other users' terminal devices. The server then computes the movement information of the virtual object and sends it to the other users' terminals, and the virtual object of the target user to be operated is moved in the other users' non-target augmented reality spaces. The movement information of the virtual object may include the first target position information of the virtual object.
In each non-target augmented reality space, the relative position relation between its own virtual object and the virtual object of the target user to be operated is calculated, the first target position information of the virtual object is converted according to this relative position relation, and the second target position information of the virtual object of the target user to be operated in the non-target augmented reality space is obtained. The virtual object of the target user to be operated is then moved in the non-target augmented reality space according to the second target position information.
For example, in a shared augmented reality space of user A and user B in the same scene, user A controls user A's virtual object to move rightward to a target position x. The relative position of user A's virtual object and user B's virtual object in user B's augmented reality space is Sa. The server obtains the information that user A's virtual object moves rightward to the target position x together with the relative position Sa; having determined that user A's virtual object exists in user B's augmented reality space, it performs the conversion according to the relative position Sa, so that user A's virtual object moves leftward to a target position y in user B's augmented reality space. The target position y is sent to user B's terminal device, and user A's virtual object in user B's augmented reality space is moved according to the target position y.
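Reducing this example to two dimensions, the conversion might be sketched as a rigid transform between the two users' frames; the planar geometry and the parameterization of Sa as an origin plus heading are assumptions for illustration.

    import math

    def to_peer_frame(target_xy, peer_origin_xy, peer_heading):
        """Convert a move target from A's frame into B's frame.

        peer_origin_xy: B's position expressed in A's frame (the relative
        position Sa of the example); peer_heading: B's orientation in A's
        frame, in radians.
        """
        dx = target_xy[0] - peer_origin_xy[0]
        dy = target_xy[1] - peer_origin_xy[1]
        cos_h, sin_h = math.cos(-peer_heading), math.sin(-peer_heading)
        return (dx * cos_h - dy * sin_h, dx * sin_h + dy * cos_h)

    # If B faces A (heading pi), A's move one unit to the right comes out
    # as one unit to the left in B's space, matching the x -> y example.
    y = to_peer_frame((1.0, 0.0), (0.0, 0.0), math.pi)  # ~(-1.0, 0.0)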
The interaction of multiple users is carried out through the shared augmented reality space in the same scene, so that other users can synchronously see the virtual object after a certain user moves, the sense of reality and the sense of immersion of the augmented reality space can be increased, the user can socialize in the augmented reality space, and the user experience is improved.
Further, referring to fig. 8, performing, in response to the appearance change instruction, an appearance change operation on the virtual object of the target user to be operated in the augmented reality space of each target user includes:
S810, acquiring target appearance information of the virtual object of the target user to be operated according to the appearance change information;
S820, respectively updating, according to the target appearance information, the current appearance information of the virtual object of the target user to be operated in the augmented reality space of each target user.
Specifically, after the target appearance information of the virtual object of the target user to be operated is acquired according to the appearance change information, the appearance of that virtual object in the target augmented reality space is updated to the target appearance information. The target appearance information of the virtual object is then fed back to the server. The server determines the non-target augmented reality spaces in which the virtual object currently exists, which it can do according to the orientation information of the other users' terminal devices or according to the feature point information of the target user to be operated fed back by those devices. The server sends the target appearance information to the other users' terminal devices, and the appearance of the virtual object corresponding to the target user to be operated is updated to the target appearance information in the other users' non-target augmented reality spaces.
The interaction of multiple users is carried out through the shared augmented reality space in the same scene, so that other users can synchronously see a virtual object of a user after appearance change, the sense of reality and the sense of immersion of the augmented reality space can be increased, the user can socialize in the augmented reality space, and the user experience is improved.
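The appearance update here and the posture update below follow the same pattern, so a single sketch can cover both; it reuses the OperationRelay-style relay sketched earlier, and all names remain illustrative assumptions.

    def apply_attribute_update(relay, objects, actor_id, attribute, value, send):
        """Overwrite one attribute ("appearance" or "posture") of the actor's
        virtual object and push it to every terminal that currently sees the
        actor (illustrative; `objects` maps user_id -> virtual object state)."""
        objects[actor_id][attribute] = value
        relay.broadcast(actor_id, {attribute: value}, send)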
Further, referring to fig. 9, performing, in response to the gesture change instruction, a gesture change operation on the virtual object of the target user to be operated in the augmented reality space of each target user includes:
S910, acquiring target posture information of the virtual object of the target user to be operated according to the posture change information;
S920, respectively updating, according to the target posture information, the current posture information of the virtual object of the target user to be operated in the augmented reality space of each target user.
Specifically, after the target posture information of the virtual object of the target user to be operated is acquired according to the posture change information, the posture of that virtual object in the target augmented reality space is updated to the target posture information. The target posture information of the virtual object is then fed back to the server. The server determines the non-target augmented reality spaces in which the virtual object currently exists, which it can do according to the orientation information of the other users' terminal devices or according to the feature point information of the target user to be operated fed back by those devices. The server sends the target posture information to the other users' terminal devices, and the posture of the virtual object corresponding to the target user to be operated is updated to the target posture information in the other users' non-target augmented reality spaces.
The interaction of multiple users is carried out through the shared augmented reality space under the same scene, so that other users can synchronously see the virtual object of a certain user after the gesture change, the sense of reality and the sense of immersion of the augmented reality space can be increased, the user can perform social contact in the augmented reality space, and the user experience is improved.
Further, referring to fig. 10, the method further includes:
S1010, if the position of any target user changes, acquiring the environment image information of the target user after the position change;
S1020, acquiring the relative position relation between the virtual objects in the augmented reality space corresponding to the target user after the position change;
S1030, updating the augmented reality space of the target user and the augmented reality spaces of the other target users based on the environment image information of the target user after the position change and the relative position relation after the position change.
Specifically, when the position of a target user changes, for example when user A moves to the right or turns around, the environment image information of the target user after the position change needs to be acquired. The changed environment image information is fed back to the server. The server determines the augmented reality spaces in which the virtual object corresponding to the target user currently exists, which it can do according to the orientation information of the other users' terminal devices or according to the feature point information of the target user fed back by those devices, and from this it acquires the relative position relation between the virtual objects in the augmented reality space corresponding to the target user after the position change. According to the relative position relation after the position change, the position information of the target user's virtual object in the other users' augmented reality spaces is determined, and those augmented reality spaces are updated. The target user's own augmented reality space is updated according to the relative position relation after the position change and the environment image information after the position change.
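A compressed sketch of this update flow follows, with `extract_features` and `localize` as placeholder stand-ins for the terminal's AR tracking; all names are assumptions, and the sketch reuses the SceneSession and OperationRelay sketches above.

    def on_user_moved(user_id, new_env_image, session, relay, send):
        """Re-localize a moved user and propagate the new pose (illustrative)."""
        new_points = extract_features(new_env_image)
        new_pose = localize(new_points, session.shared_anchors)
        session.upload_features(user_id, new_points)
        # Peers that still see this user move the corresponding avatar...
        relay.broadcast(user_id, {"pose": new_pose}, send)
        # ...and the moved user's own space is rebuilt from the new image.
        return new_pose

    def extract_features(image):
        # Placeholder: a real terminal would use its AR framework's detector.
        return []

    def localize(points, anchors):
        # Placeholder: pose estimation against the stored shared anchors.
        return (0.0, 0.0)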
The embodiment of the invention provides a scene interaction method based on augmented reality: scene matching is carried out according to environment image information, the target users in the same scene are determined, and an augmented reality space is established for each target user. According to the virtual object information of each target user, the virtual object of that target user and the virtual objects of at least one other target user in the same scene are generated in each target user's augmented reality space; and when an operation such as a position move, an appearance change or a posture change is performed on a target user's virtual object in that user's augmented reality space, the same operation is performed on that virtual object in the augmented reality spaces of the other target users. By letting multiple users interact through a shared augmented reality space in the same scene, the method increases the sense of reality and immersion of the augmented reality space, enables users to socialize in the augmented reality space, and improves the user experience.
An embodiment of the present invention further provides a scene interaction device based on augmented reality; referring to fig. 11, the device includes: an environment image acquisition module 1110, a scene matching module 1120, a target user determination module 1130, an augmented reality space generation module 1140, a virtual object information acquisition module 1150, a virtual object generation module 1160 and a virtual object operation module 1170;
the environment image obtaining module 1110 is configured to obtain environment image information of positions of at least two users;
the scene matching module 1120 is configured to perform scene matching on the environment image information of the locations where the at least two users are located;
the target user determination module 1130 is configured to determine at least two target users in the same scene according to a result of scene matching;
the augmented reality space generation module 1140 is configured to generate an augmented reality space of each target user according to the environmental image information of at least two target users in the same scene, where the augmented reality space includes a virtual object of each target user and virtual objects of other target users in the same scene;
the virtual object information obtaining module 1150 is configured to obtain virtual object information of each target user;
the virtual object generation module 1160 is configured to generate a virtual object of each target user and at least one virtual object of another target user in the same scene as the target user in the augmented reality space of each target user according to the virtual object information of each target user and the environment image information of each target user;
the virtual object operation module 1170 is configured to respond to a user operation instruction, perform a corresponding operation on a virtual object of a target user to be operated in the augmented reality space of each target user, and display the operated virtual object in the augmented reality space of each target user.
The device provided in the above embodiments can execute the method provided in any embodiment of the present invention, and has corresponding functional modules and beneficial effects for executing the method. For technical details that are not described in detail in the above embodiments, reference may be made to an augmented reality-based scene interaction method provided in any embodiment of the present invention.
The present embodiment also provides a computer-readable storage medium, in which computer-executable instructions are stored, and the computer-executable instructions are loaded by a processor and execute the method for scene interaction based on augmented reality according to the present embodiment.
The present embodiment also provides an apparatus, which includes a processor and a memory, where the memory stores a computer program, and the computer program is adapted to be loaded by the processor and execute the method for scene interaction based on augmented reality of the present embodiment.
The device may be a computer terminal, a mobile device or a server, and the device may also participate in forming the apparatus or system provided by the embodiments of the present invention. As shown in fig. 12, the mobile device 12 (or computer terminal 12 or server 12) may include one or more processors 1202 (shown as 1202a, 1202b, ..., 1202n; the processors 1202 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1204 for storing data, and a transmitting device 1206 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a network interface, a power source and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 12 is only an illustration and does not limit the structure of the electronic device. For example, the mobile device 12 may also include more or fewer components than shown in fig. 12, or have a different configuration from that shown in fig. 12.
It should be noted that the one or more processors 1202 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the mobile device 12 (or computer terminal). As referred to in the embodiments of the application, the data processing circuitry acts as a processor control (e.g. the selection of a variable resistance termination path connected to the interface).
The memory 1204 can be used for storing software programs and modules of application software, such as the program instructions/data storage devices corresponding to the methods described in the embodiments of the present invention. The processor 1202 executes various functional applications and data processing by running the software programs and modules stored in the memory 1204, that is, implements the augmented reality-based scene interaction method described above. The memory 1204 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory or other non-volatile solid-state memory. In some examples, the memory 1204 may further include memory located remotely from the processor 1202, which may be connected to the mobile device 12 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks and combinations thereof.
The transmitting device 1206 is used for receiving or sending data via a network. Specific examples of such networks may include wireless networks provided by the communications provider of mobile device 12. In one example, the transmitting device 1206 includes a Network Interface Controller (NIC) that can be connected to other Network devices via a base station to communicate with the internet. In one example, the transmitting device 1206 can be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch-screen liquid crystal display (LCD) that enables the user to interact with the user interface of the mobile device 12 (or computer terminal).
The present specification provides the method steps as described in the examples or flowcharts, but more or fewer steps may be included based on routine or non-inventive labor. The steps and sequences recited in the embodiments are only one of many ways of ordering the steps and do not represent the unique order of execution. In actual system or product execution, the steps may be performed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or figures.
The configurations shown in the present embodiment are only partial configurations related to the present application, and do not constitute a limitation on the devices to which the present application is applied, and a specific device may include more or less components than those shown, or combine some components, or have an arrangement of different components. It should be understood that the methods, apparatuses, and the like disclosed in the embodiments may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a division of one logic function, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or unit modules.
Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An augmented reality-based scene interaction method, the method comprising:
acquiring environment image information of positions of at least two users;
scene matching is carried out on the environment image information of the positions of the at least two users;
determining at least two target users in the same scene according to the scene matching result;
generating an augmented reality space of each target user according to the environment image information of at least two target users in the same scene, wherein the augmented reality space comprises a virtual object of each target user and virtual objects of other target users in the same scene;
acquiring virtual object information of each target user;
generating a virtual object of each target user and at least one virtual object of other target users in the same scene with the target user in the augmented reality space of each target user according to the virtual object information of each target user and the environment image information of each target user;
and responding to the user operation instruction, executing corresponding operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user.
2. The method of claim 1, wherein generating, in each augmented reality space, the virtual object of the target user and the virtual object of at least one other target user in the same scene as the target user according to the virtual object information of each target user and the environment image information of each target user comprises:
generating the virtual object of each target user in the augmented reality space of that target user according to the virtual object information of each target user;
determining, according to the environment image information, an orientation relation between the augmented reality spaces of the target users and/or feature points in the environment image information, wherein the feature points are feature points of each target user's terminal;
and generating, in each augmented reality space, the virtual object of at least one other target user in the same scene as the target user according to the orientation relation between the augmented reality spaces and/or the feature points.
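One way to realize the orientation relation of claim 2 is to express each user's pose relative to a shared feature-point anchor. The sketch below is an assumption-laden illustration using 4x4 homogeneous transforms; the anchor poses and the function name are hypothetical:

```python
import numpy as np

def remote_pose_in_local_space(local_anchor: np.ndarray,
                               remote_anchor: np.ndarray,
                               remote_avatar_pose: np.ndarray) -> np.ndarray:
    """All arguments are 4x4 homogeneous transforms. The anchors are the
    poses of a shared feature point as observed in each user's AR space;
    chaining them carries the remote avatar into local coordinates."""
    anchor_to_remote = np.linalg.inv(remote_anchor)
    return local_anchor @ anchor_to_remote @ remote_avatar_pose
```

The returned matrix can then be used to render the other target user's virtual object at the correct place in the local augmented reality space.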
3. The method according to claim 1, wherein the performing, in response to the user operation instruction, a corresponding operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user comprises:
in response to the position moving instruction, performing a position moving operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user;
and/or in response to the appearance change instruction, performing an appearance change operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user;
and/or in response to the posture change instruction, performing a posture change operation on the virtual object of the target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user.
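The three "and/or" branches of claim 3 amount to an instruction dispatcher that applies the same handler in every target user's space. A minimal sketch, where the instruction format and handler names are assumptions for illustration:

```python
def apply_to_all_spaces(spaces: list[dict], obj_id: str, instruction: dict) -> None:
    """Route a user operation instruction to the matching handler and
    mirror the result in every target user's AR space."""
    handlers = {
        "move": lambda obj, payload: obj.update(position=payload),
        "appearance": lambda obj, payload: obj.update(appearance=payload),
        "posture": lambda obj, payload: obj.update(posture=payload),
    }
    handler = handlers[instruction["type"]]
    for space in spaces:  # one AR space per target user
        handler(space[obj_id], instruction["payload"])
        # a real client would now re-render (display) the operated object
```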
4. The method according to claim 3, wherein the performing a position moving operation on the virtual object of the target user to be operated in the augmented reality space of each target user in response to the position moving instruction, and displaying the operated virtual object in each augmented reality space comprises:
taking the augmented reality space of the target user to be operated as a target augmented reality space;
taking the augmented reality spaces of the other target users as non-target augmented reality spaces;
acquiring target position information of the virtual object of the target user to be operated in the target augmented reality space according to the position moving instruction;
moving the virtual object of the target user to be operated in the target augmented reality space according to the target position information;
acquiring the relative position relation between the virtual object of the target user to be operated and the virtual objects of the other target users in the non-target augmented reality spaces;
and moving the virtual object of the target user to be operated in each non-target augmented reality space according to the relative position relation and the target position information.
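Claim 4 moves the object directly in the operator's own (target) space and indirectly, via the stored relative position relation, in every other (non-target) space. A simplified vector sketch, assuming each non-target space records a coordinate offset from the target space (the `__offset_from_target__` key is hypothetical bookkeeping):

```python
import numpy as np

def move_object(target_space: dict, non_target_spaces: list[dict],
                obj_id: str, target_position: np.ndarray) -> None:
    """Apply the position moving instruction in the target AR space, then
    replay it in each non-target space using that space's offset (a
    simplification of the claim's 'relative position relation')."""
    target_space[obj_id] = target_position
    for space in non_target_spaces:
        offset = space["__offset_from_target__"]
        space[obj_id] = target_position + offset
```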
5. The method according to claim 3, wherein the performing, in response to the appearance change instruction, the appearance change operation on the virtual object of the target user to be operated in the augmented reality space of each target user comprises:
acquiring target appearance information of the virtual object of the target user to be operated according to the appearance change instruction;
and respectively updating the current appearance information of the virtual object of the target user to be operated in the augmented reality space of each target user according to the target appearance information.
6. The method according to claim 3, wherein performing a posture change operation on the virtual object of the target user to be operated in the augmented reality space of each target user in response to the posture change instruction comprises:
acquiring target posture information of the virtual object of the target user to be operated according to the posture change instruction;
and respectively updating the current posture information of the virtual object of the target user to be operated in the augmented reality space of each target user according to the target posture information.
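Claims 5 and 6 share the same shape: fetch the target value from the instruction, then update every space's copy of the object. A shared sketch under that reading, with hypothetical attribute names:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    appearance: dict = field(default_factory=dict)
    posture: dict = field(default_factory=dict)

def update_everywhere(spaces: list[dict], obj_id: str,
                      attribute: str, new_value: dict) -> None:
    """Update 'appearance' (claim 5) or 'posture' (claim 6) on the
    object's copy in every target user's AR space."""
    for space in spaces:
        setattr(space[obj_id], attribute, new_value)
```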
7. The method of claim 1, further comprising:
if the position of any target user changes, acquiring environment image information of the target user after the position of the target user changes;
acquiring the relative position relation between virtual objects in the augmented reality space corresponding to the target user after the position is changed;
and updating the augmented reality space of the target user and the augmented reality spaces of the other target users based on the environment image information and the relative position relation acquired after the position change.
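The claim-7 flow can be read as a re-localization pass when a user moves. The sketch below is only one plausible shape; `capture_image`, `relocalize`, and the space methods are hypothetical platform callbacks, not part of the claim:

```python
def on_user_position_changed(user_id: str, spaces: dict,
                             capture_image, relocalize) -> None:
    """Re-capture the moved user's environment, re-derive the relative
    position relations, then refresh every target user's AR space."""
    new_image = capture_image(user_id)
    new_relations = relocalize(new_image)  # relative position relations
    spaces[user_id].rebuild(new_image, new_relations)
    for uid, space in spaces.items():
        if uid != user_id:
            space.update_remote(user_id, new_relations)
```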
8. An augmented reality-based scene interaction apparatus, the apparatus comprising: an environment image acquisition module, a scene matching module, a target user determination module, an augmented reality space generation module, a virtual object information acquisition module, a virtual object generation module, and a virtual object operation module; wherein:
the environment image acquisition module is used for acquiring environment image information of positions of at least two users;
the scene matching module is used for carrying out scene matching on the environment image information of the positions of the at least two users;
the target user determination module is used for determining at least two target users in the same scene according to the scene matching result;
the augmented reality space generation module is used for generating an augmented reality space of each target user according to the environment image information of at least two target users in the same scene, and the augmented reality space comprises a virtual object of each target user and virtual objects of other target users in the same scene;
the virtual object information acquisition module is used for acquiring virtual object information of each target user;
the virtual object generation module is used for generating a virtual object of each target user and at least one virtual object of other target users in the same scene with the target user in the augmented reality space of each target user according to the virtual object information of each target user and the environment image information of each target user;
the virtual object operation module is used for responding to a user operation instruction, executing corresponding operation on a virtual object of a target user to be operated in the augmented reality space of each target user, and displaying the operated virtual object in the augmented reality space of each target user.
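The module decomposition of claim 8 maps naturally onto a class that wires the seven modules into the claim-1 pipeline. A structural sketch only, with the modules modeled as plain injected callables (all names are illustrative):

```python
class SceneInteractionApparatus:
    """Structural sketch of claim 8; each module is a stub callable."""
    def __init__(self, acquire_images, match_scenes, pick_targets,
                 build_spaces, get_object_info, spawn_objects, operate):
        self.acquire_images = acquire_images    # environment image acquisition module
        self.match_scenes = match_scenes        # scene matching module
        self.pick_targets = pick_targets        # target user determination module
        self.build_spaces = build_spaces        # AR space generation module
        self.get_object_info = get_object_info  # virtual object information acquisition module
        self.spawn_objects = spawn_objects      # virtual object generation module
        self.operate = operate                  # virtual object operation module

    def run(self, users):
        images = self.acquire_images(users)
        matches = self.match_scenes(images)
        targets = self.pick_targets(matches)
        spaces = self.build_spaces(targets, images)
        info = self.get_object_info(targets)
        self.spawn_objects(spaces, info, images)
        return spaces
```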
9. An apparatus, characterized in that the apparatus comprises a processor and a memory, wherein the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the augmented reality-based scene interaction method according to any one of claims 1-7.
10. A storage medium, wherein the storage medium stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the augmented reality-based scene interaction method according to any one of claims 1-7.
CN202010128561.5A 2020-02-28 2020-02-28 Scene interaction method, device, equipment and medium based on augmented reality Pending CN111408137A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010128561.5A CN111408137A (en) 2020-02-28 2020-02-28 Scene interaction method, device, equipment and medium based on augmented reality

Publications (1)

Publication Number Publication Date
CN111408137A (en) 2020-07-14

Family

ID=71485071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010128561.5A Pending CN111408137A (en) 2020-02-28 2020-02-28 Scene interaction method, device, equipment and medium based on augmented reality

Country Status (1)

Country Link
CN (1) CN111408137A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107638690A (en) * 2017-09-29 2018-01-30 北京金山安全软件有限公司 Method, device, server and medium for realizing augmented reality
CN107741886A (en) * 2017-10-11 2018-02-27 江苏电力信息技术有限公司 A kind of method based on augmented reality multi-person interactive
CN107895330A (en) * 2017-11-28 2018-04-10 特斯联(北京)科技有限公司 A kind of visitor's service platform that scenario building is realized towards smart travel
CN109690450A (en) * 2017-11-17 2019-04-26 腾讯科技(深圳)有限公司 Role playing method and terminal device under VR scene
CN109840947A (en) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 Implementation method, device, equipment and the storage medium of augmented reality scene
US20190321721A1 (en) * 2018-04-18 2019-10-24 Hon Hai Precision Industry Co., Ltd. Server and method for providing interaction in virtual reality multiplayer board game
CN110830521A (en) * 2020-01-13 2020-02-21 南昌市小核桃科技有限公司 VR multi-user same-screen data synchronous processing method and device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988375A (en) * 2020-08-04 2020-11-24 深圳市瑞立视多媒体科技有限公司 Terminal positioning method, device, equipment and storage medium
CN111988375B (en) * 2020-08-04 2023-10-27 瑞立视多媒体科技(北京)有限公司 Terminal positioning method, device, equipment and storage medium
CN112068703A (en) * 2020-09-07 2020-12-11 北京字节跳动网络技术有限公司 Target object control method and device, electronic device and storage medium
US11869195B2 (en) 2020-09-07 2024-01-09 Beijing Bytedance Network Technology Co., Ltd. Target object controlling method, apparatus, electronic device, and storage medium
CN112675541A (en) * 2021-03-22 2021-04-20 航天宏图信息技术股份有限公司 AR information sharing method and device, electronic equipment and storage medium
CN113313837A (en) * 2021-04-27 2021-08-27 广景视睿科技(深圳)有限公司 Augmented reality environment experience method and device and electronic equipment
CN114465910A (en) * 2022-02-10 2022-05-10 北京为准智能科技有限公司 Machining equipment calibration method based on augmented reality technology
CN114465910B (en) * 2022-02-10 2022-09-13 北京为准智能科技有限公司 Machining equipment calibration method based on augmented reality technology

Similar Documents

Publication Publication Date Title
CN111408137A (en) Scene interaction method, device, equipment and medium based on augmented reality
Jo et al. ARIoT: scalable augmented reality framework for interacting with Internet of Things appliances everywhere
CN110140099B (en) System and method for tracking controller
US10200819B2 (en) Virtual reality and augmented reality functionality for mobile devices
US20170237789A1 (en) Apparatuses, methods and systems for sharing virtual elements
US20160283778A1 (en) Gaze assisted object recognition
US20090273560A1 (en) Sensor-based distributed tangible user interface
US11287875B2 (en) Screen control method and device for virtual reality service
US20180224945A1 (en) Updating a Virtual Environment
CN108681402A (en) Identify exchange method, device, storage medium and terminal device
US11561651B2 (en) Virtual paintbrush implementing method and apparatus, and computer readable storage medium
CN110473293A (en) Virtual objects processing method and processing device, storage medium and electronic equipment
US20160112279A1 (en) Sensor-based Distributed Tangible User Interface
CN113936085B (en) Three-dimensional reconstruction method and device
JP2017120650A (en) Information processing system, control method thereof, program, information processor, control method thereof, and program
CN112230836A (en) Object moving method and device, storage medium and electronic device
Ryskeldiev et al. Streamspace: Pervasive mixed reality telepresence for remote collaboration on mobile devices
Ahuja et al. BodySLAM: Opportunistic user digitization in multi-user AR/VR Experiences
KR20120010041A (en) Method and system for authoring of augmented reality contents on mobile terminal environment
CN104837066A (en) Method, device and system for processing images of object
CN112506465B (en) Method and device for switching scenes in panoramic roaming
CN113487662A (en) Picture display method and device, electronic equipment and storage medium
CN113409468A (en) Image processing method and device, electronic equipment and storage medium
KR20200066962A (en) Electronic device and method for providing content based on the motion of the user
CN115131528A (en) Virtual reality scene determination method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200714