CN114911345A - Interaction method, terminal, equipment and storage medium - Google Patents

Interaction method, terminal, equipment and storage medium

Info

Publication number
CN114911345A
CN114911345A (application CN202210531324.2A)
Authority
CN
China
Prior art keywords
user
prop
terminal
interaction
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210531324.2A
Other languages
Chinese (zh)
Inventor
彭司政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202210531324.2A
Publication of CN114911345A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an interaction method, a terminal, a device and a storage medium. A pre-constructed AR interaction activity can be presented on a first user's terminal based on a real scene and AR interaction data. In the AR interaction activity, multiple users can simultaneously operate the AR excitation props moving within the activity; if a target prop to which the first user applies an operation is not also operated by a second user, the target prop is allocated to the first user. This realizes interaction between users and props as well as real-time interaction among multiple users, which greatly mobilizes users' enthusiasm for interaction, increases the fun of the interaction, and improves interactivity between users. Presenting the interactive activity in AR greatly improves the users' sense of immersion during the activity; the approach is novel and unique, with strong playability.

Description

Interaction method, terminal, equipment and storage medium
Technical Field
The present disclosure relates to the field of human-computer interaction technologies, and in particular, to an interaction method, a terminal, a device, and a storage medium.
Background
With the rapid development of science and technology, technology has drawn closer to people's daily lives and improved everyday experience, and electronic devices have gradually become indispensable tools. People can communicate and interact through electronic devices, and as device functions have grown more powerful and complete, some traditional practices can now be carried out through them. For example, the traditional etiquette of giving gifts and attracting customers, whether between elders and juniors, friends and relatives, or merchants and buyers, can also be realized through electronic devices such as terminals.
Beyond the common practice of issuing electronic red packets through terminals such as mobile phones, some apps or platforms also release activities in the form of "red packet rain" to reward users: within a certain time window, a large number of falling flat red-packet pictures are displayed on the user's terminal, and the user taps the screen to obtain a red-packet prop. This format is rigid, the visuals are plain, user interactivity is poor, and the experience is unsatisfying.
Disclosure of Invention
The embodiment of the disclosure provides at least an interaction method, a terminal, equipment and a storage medium.
The embodiment of the disclosure provides an interaction method, which comprises the following steps:
displaying AR interactive content under a current first visual angle on a first terminal of a first user, wherein the AR interactive content comprises a plurality of AR excitation props moving according to a preset motion track;
responding to a first preset operation of the first user for a target prop in the plurality of AR excitation props, and determining whether a second user applies a second preset operation for the target prop through a second terminal in a real scene where the first user is located;
and if no such second preset operation exists, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
Therefore, the first terminal of the first user can display, at the current view angle, the AR interactive content of an interactive activity in which multiple people can participate. When no second user in the same real scene has also operated the target prop operated by the first user, the target prop can be allocated to the first user and its display cancelled in the AR interactive content. Presenting the interactive content in AR greatly improves the user's immersion during the activity; the approach is novel and unique, with strong playability. It also enables activity sharing among multiple users and interaction among users in the same scene: all users participating in the interaction share, and interact with, the same AR interactive content, which effectively strengthens the connection among participants, helps mobilize users' enthusiasm for interaction, increases the fun of the interaction, and improves interactivity between users.
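For illustration only, the skeleton of these steps can be sketched in a few lines of Kotlin. Everything below (the type names, the in-memory operation registry, the session object) is an assumption made for exposition, not part of the claimed method; in a real deployment the competing-operation check would live on a server.

```kotlin
// Minimal sketch of the allocation flow of steps S101 to S103.
data class IncentiveProp(val id: String, var displayed: Boolean = true)

class InteractionSession {
    private val props = mutableMapOf<String, IncentiveProp>()
    private val operations = mutableMapOf<String, MutableList<String>>() // propId -> userIds

    fun addProp(prop: IncentiveProp) { props[prop.id] = prop }

    fun reportOperation(propId: String, userId: String) {
        operations.getOrPut(propId) { mutableListOf() }.add(userId)
    }

    // If no second user also operated the target prop, allocate it to the
    // first user and cancel its display in the AR interactive content.
    fun tryAllocate(propId: String, firstUserId: String): Boolean {
        val competitors = operations[propId].orEmpty().filter { it != firstUserId }
        if (competitors.isEmpty()) {
            props[propId]?.displayed = false
            return true
        }
        return false // contested: fall through to the tie-break rules below
    }
}

fun main() {
    val session = InteractionSession()
    session.addProp(IncentiveProp("red-1"))
    session.reportOperation("red-1", "userA")
    println(session.tryAllocate("red-1", "userA")) // true: no competing user
}
```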
In an optional implementation manner, after determining whether a second user applies a second preset operation to the target prop through a second terminal in a real scene where the first user is located, the method further includes:
if such a second preset operation exists, acquiring a first operation time at which the first user applies the first preset operation and a second operation time at which the second user applies the second preset operation;
if the time difference between the first operation time and the second operation time is smaller than a preset time difference threshold and the first operation time is after the second operation time, determining whether the first prop acquisition permission of the first user is higher than the second prop acquisition permission of the second user;
and if the first prop acquisition permission is higher than the second prop acquisition permission, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
Here, when a second user applies a second preset operation to the same target prop in the activity, the prop acquisition permissions corresponding to the two users' preset operations are compared, and if the first user's permission is higher, the target prop is allocated to the first user, i.e. the user with the highest prop acquisition permission. This increases competitiveness and interactivity among users and effectively mobilizes users' enthusiasm for participating in the interaction.
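A compact sketch of this tie-break follows; the millisecond timestamps, the 300 ms window and the integer permission encoding are illustrative assumptions, since the disclosure does not fix the threshold or the permission model.

```kotlin
// Tie-break sketch: higher number = higher prop acquisition permission.
data class PresetOperation(val userId: String, val timeMs: Long, val permission: Int)

const val TIME_DIFF_THRESHOLD_MS = 300L // assumed example value

// Mirrors the claim branch: applies only when both users operated the
// same target prop and the first user's operation came second.
fun resolveContestedProp(first: PresetOperation, second: PresetOperation): String {
    val diff = first.timeMs - second.timeMs
    require(diff > 0) { "this branch assumes the first user operated later" }
    return if (diff < TIME_DIFF_THRESHOLD_MS && first.permission > second.permission)
        first.userId  // near-simultaneous, but first holds higher permission
    else
        second.userId // second user keeps the prop; first gets an extra chance
}

fun main() {
    val a = PresetOperation("first", timeMs = 1_200, permission = 2)
    val b = PresetOperation("second", timeMs = 1_000, permission = 1)
    println(resolveContestedProp(a, b)) // first: later, but higher permission
}
```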
In an optional embodiment, after the determining whether the first prop acquisition permission of the first user is higher than the second prop acquisition permission of the second user, the method includes:
and if the second prop acquisition permission is higher than the first prop acquisition permission, canceling the display of the target prop in the AR interactive content, and adding one interaction opportunity for the first user.
Here, by judging the prop acquisition permission corresponding to each user's preset operation, the target prop is allocated to the second user, i.e. the user with the higher prop acquisition permission, when the second user's permission is higher, and the first user is compensated with one additional interaction opportunity. This ensures the fairness of the interaction and helps improve users' enthusiasm for interacting.
In an optional embodiment, after the obtaining a first operation time when the first user applies the first preset operation and a second operation time when the second user applies the second preset operation, the method further includes:
if the time difference between the first operation time and the second operation time is smaller than a preset time difference threshold, determining which user the front face of the target prop is oriented toward in the AR interactive content at the first operation time or the second operation time;
if the front face of the prop is oriented toward the first user, allocating the target prop to the first user;
if the front face of the prop is oriented toward the second user, allocating the target prop to the second user;
and canceling the display of the target prop in the AR interactive content after the target prop is allocated to the first user or the second user.
Here, when two users grab the same prop almost simultaneously, the prop can be allocated, according to its motion state, to the user its front face is oriented toward. This allocation method is convenient, effective, simple and quick; moreover, because the prop's front face is oriented toward that user, it is easy for the user to perceive, which helps improve users' sensory experience of and enthusiasm for the interaction.
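One plausible reading of "the user the prop's front face is oriented toward" is a dot-product test between the prop's front normal and the directions to the two terminals. The sketch below implements that reading; it is an assumption rather than something the disclosure pins down.

```kotlin
import kotlin.math.sqrt

// Which user does the prop's front face point toward at the operation time?
data class Vec3(val x: Double, val y: Double, val z: Double) {
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    infix fun dot(o: Vec3) = x * o.x + y * o.y + z * o.z
    fun normalized(): Vec3 {
        val n = sqrt(this dot this); return Vec3(x / n, y / n, z / n)
    }
}

// Allocate to whichever terminal the front normal points at more directly.
fun frontFacingUser(propPos: Vec3, frontNormal: Vec3,
                    firstTerminal: Vec3, secondTerminal: Vec3): String {
    val toFirst = (firstTerminal - propPos).normalized()
    val toSecond = (secondTerminal - propPos).normalized()
    return if (frontNormal dot toFirst >= frontNormal dot toSecond) "first" else "second"
}

fun main() {
    val winner = frontFacingUser(
        propPos = Vec3(0.0, 1.0, 0.0), frontNormal = Vec3(1.0, 0.0, 0.0),
        firstTerminal = Vec3(2.0, 1.0, 0.0), secondTerminal = Vec3(-2.0, 1.0, 0.0))
    println(winner) // first: the front normal points toward the first terminal
}
```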
In an optional implementation, the displaying, on the first terminal of the first user, the AR interactive content at the current first viewing angle includes:
responding to a trigger operation of a first user for a first terminal, and acquiring AR interaction data configured for a real scene to which a current position belongs based on the current position of the first user;
and displaying the AR interactive content of the first terminal under the current first visual angle on the first terminal based on the AR interactive data.
Here, in response to the user's trigger operation on the terminal, the AR interaction data configured for the AR interaction activity of the real scene where the user currently is are acquired according to the current position, and the AR interactive content at the user's first view angle is displayed. Presenting the interactive activity in AR greatly improves the user's immersion during the activity; the approach is novel and unique, and by incorporating the user's actual view angle it fully takes the user's interaction situation and wishes into account, flexibly and conveniently.
In an optional embodiment, it is determined that the first user has applied the trigger operation to the first terminal by:
if the first user uses the first terminal to scan an interactive information code set in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene; or
if the first user opens a display interface corresponding to the AR interaction activity through the first terminal and the current position of the first user is located in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene.
This provides the user with different ways to participate in the activity and different choices for starting interaction on the terminal, so that each user can select the trigger operation that is most suitable and convenient for them; a minimal sketch of both paths follows.
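The geofence-style bounding box below is a deliberately simplified stand-in for whatever scene-membership test a real system would use; the names are illustrative.

```kotlin
// Two ways of applying the trigger operation, per the embodiment above.
data class GeoPoint(val lat: Double, val lon: Double)

data class SceneBounds(val minLat: Double, val maxLat: Double,
                       val minLon: Double, val maxLon: Double) {
    fun contains(p: GeoPoint) = p.lat in minLat..maxLat && p.lon in minLon..maxLon
}

sealed interface TriggerEvent
data class CodeScanned(val codeId: String) : TriggerEvent           // code scanned in scene
data class InterfaceOpened(val position: GeoPoint) : TriggerEvent   // AR page opened

fun triggerApplied(event: TriggerEvent, sceneCodeId: String, scene: SceneBounds) =
    when (event) {
        is CodeScanned -> event.codeId == sceneCodeId        // code set in the scene
        is InterfaceOpened -> scene.contains(event.position) // user is in the scene
    }

fun main() {
    val scene = SceneBounds(39.90, 39.91, 116.40, 116.41)
    println(triggerApplied(CodeScanned("scene-42"), "scene-42", scene)) // true
    println(triggerApplied(InterfaceOpened(GeoPoint(39.905, 116.405)), "scene-42", scene)) // true
}
```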
In an optional embodiment, after displaying the AR interactive content at the current first viewing angle on the first terminal of the first user, the method includes:
in the process of presenting the AR interactive content, responding to the position change and/or the view angle change of the first terminal, and determining a changed second view angle of the first terminal;
determining AR interaction content at the second perspective based on the AR interaction data and the second perspective;
and dynamically switching the AR interactive content displayed on the first terminal under the first visual angle to the AR interactive content under the second visual angle based on the corresponding part of the AR interactive data between the first visual angle and the second visual angle.
Here, a change in the terminal's position and/or view angle changes the user's view angle, and the AR interactive content displayed differs between view angles. By dynamically changing the content across view angles, the terminal can display AR interactive content at any view angle within the real environment where the first user is located, creating an immersive feeling, stimulating the user's desire to explore, and enhancing the user's initiative during the interaction.
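As a rough illustration of this dynamic switching, the sketch below reduces a view angle to a horizontal bearing plus a field of view and re-filters the anchored props when the terminal turns; a real AR engine would instead re-project the full 3D scene every frame, so treat this as an assumption-laden simplification.

```kotlin
import kotlin.math.abs

// Props anchored at fixed bearings in the scene; a view angle is a
// center bearing plus a field of view, both in degrees.
data class PropAnchor(val id: String, val bearingDeg: Double)
data class ViewAngle(val centerDeg: Double, val fovDeg: Double)

// AR interactive content at a view angle = the props whose bearing falls
// inside the field of view, after wrapping angles into (-180, 180].
fun contentAt(view: ViewAngle, anchors: List<PropAnchor>): List<PropAnchor> =
    anchors.filter {
        val d = ((it.bearingDeg - view.centerDeg + 540.0) % 360.0) - 180.0
        abs(d) <= view.fovDeg / 2
    }

fun main() {
    val anchors = listOf(PropAnchor("red-1", 10.0), PropAnchor("red-2", 120.0))
    val first = ViewAngle(centerDeg = 0.0, fovDeg = 60.0)    // first view angle
    val second = ViewAngle(centerDeg = 110.0, fovDeg = 60.0) // after the terminal turns
    println(contentAt(first, anchors).map { it.id })  // [red-1]
    println(contentAt(second, anchors).map { it.id }) // [red-2]
}
```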
In an optional implementation manner, the obtaining AR interaction data configured for a real scene to which the current location belongs includes:
sending a data request for acquiring the AR interaction data to a server;
receiving the AR interaction data fed back by the server, wherein the AR interaction data are generated according to each virtual article in a virtual scene corresponding to the real scene and a plurality of AR excitation props configured for the real scene and contain preset motion tracks of the AR excitation props in the real scene.
In an optional implementation manner, the obtaining AR interaction data configured for a real scene to which the current location belongs includes:
acquiring a plurality of AR excitation props configured for the real scene and a virtual scene which is constructed for the real scene in advance and corresponds to the real scene;
generating a preset motion track of the AR excitation prop in the virtual scene based on the initial motion position of the AR excitation prop in the virtual scene and each virtual article in the virtual scene;
and generating AR interaction data configured for the real scene based on the virtual scene, the AR excitation props and the corresponding preset motion tracks.
Here, the prop trajectories are generated with the help of the virtual scene corresponding to the real scene, and the AR interaction data are then generated from them. During its motion, an AR excitation prop can simulate how an object would move under real conditions, so that after the AR excitation props in the virtual scene are fused with the real scene, the AR interactive content appears more realistic, further enhancing the user's interaction experience.
In an optional implementation, before displaying, on the first terminal, the AR interactive content corresponding to the first view angle, the method further includes:
determining a trigger time of the trigger operation and an activity start time of the AR interaction activity;
if the trigger time is before the activity start time, determining a time difference between the trigger time and the activity start time;
and displaying countdown information corresponding to the time difference on the first terminal.
Here, the relation between the trigger time and the activity start time can be presented as a countdown, so the terminal prompts when the activity will begin; this quickly pulls the user into the interactive environment and better improves the user's immersion.
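A minimal countdown helper might look like the following; the display format is an arbitrary choice.

```kotlin
import java.time.Duration
import java.time.Instant

// Countdown sketch: if the trigger precedes the activity start, format
// the remaining time for display on the first terminal.
fun countdownText(triggerTime: Instant, activityStart: Instant): String? {
    if (!triggerTime.isBefore(activityStart)) return null // activity already started
    val remaining = Duration.between(triggerTime, activityStart)
    return "Starts in %02d:%02d".format(remaining.toMinutes(), remaining.toSecondsPart())
}

fun main() {
    val now = Instant.now()
    println(countdownText(now, now.plusSeconds(95))) // Starts in 01:35
}
```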
In an alternative embodiment, a first preset operation of the first user for a target prop of the plurality of AR excitation props is determined by:
in the process of each AR excitation prop moving along its respective preset motion trajectory, if an operation applied by the first user to any AR excitation prop is received, and the AR excitation prop has moved so that its front face is oriented toward the first user when the operation is received, taking that AR excitation prop as the target prop and determining that the first user has applied a first preset operation to the target prop.
In an alternative embodiment, said canceling the display of said target prop in said AR interactive content comprises:
controlling the target prop to move, within the AR interactive content, into a state in which its front face is oriented toward a target user, wherein the target user is the first user or the second user;
and displaying a prop allocation animation while the front face is oriented toward the target user, and canceling the display of the target prop after the prop allocation animation finishes.
Here, adding the animation can increase the fun of the interactive activity.
An embodiment of the present disclosure further provides a terminal, where the terminal is a first terminal, and the terminal includes:
the interaction content display module is used for displaying AR interaction content under a current first visual angle on a first terminal of a first user, and the AR interaction content comprises a plurality of AR excitation props moving according to a preset motion track;
the preset operation response module is used for responding to a first preset operation of the first user for a target prop in the plurality of AR excitation props, and determining whether a second user applies a second preset operation to the target prop through a second terminal in a real scene where the first user is located;
and the target prop processing module is used for allocating the target prop to the first user and canceling the display of the target prop in the AR interactive content if a second preset operation is not applied to the target prop by a second user through a second terminal.
In an optional implementation manner, the terminal further includes an acquisition permission determination module, where the acquisition permission determination module is configured to:
if such a second preset operation exists, acquire a first operation time at which the first user applies the first preset operation and a second operation time at which the second user applies the second preset operation;
if the time difference between the first operation time and the second operation time is smaller than a preset time difference threshold and the first operation time is after the second operation time, determine whether the first prop acquisition permission of the first user is higher than the second prop acquisition permission of the second user;
and if the first prop acquisition permission is higher than the second prop acquisition permission, allocate the target prop to the first user, and cancel the display of the target prop in the AR interactive content.
In an optional implementation manner, the acquisition permission determination module is further configured to:
if the second prop acquisition permission is higher than the first prop acquisition permission, cancel the display of the target prop in the AR interactive content, and add one interaction opportunity for the first user.
In an optional implementation manner, the obtaining permission determination module is further configured to:
if the time difference between the first operation time and the second operation time is smaller than a preset time difference threshold, determine which user the front face of the target prop is oriented toward in the AR interactive content at the first operation time or the second operation time;
if the front face of the prop is oriented toward the first user, allocate the target prop to the first user;
if the front face of the prop is oriented toward the second user, allocate the target prop to the second user;
and cancel the display of the target prop in the AR interactive content after the target prop is allocated to the first user or the second user.
In an optional implementation manner, the interactive content display module is specifically configured to:
responding to a trigger operation of a first user for a first terminal, and acquiring AR interaction data configured for a real scene to which a current position belongs based on the current position of the first user;
and displaying the AR interactive content of the first terminal under the current first visual angle on the first terminal based on the AR interactive data.
In an optional implementation manner, the interactive content display module is further configured to:
if the first user uses the first terminal to scan an interactive information code set in the real scene, determine that the first user applies the trigger operation to the first terminal in the real scene; or
if the first user opens a display interface corresponding to the AR interaction activity through the first terminal and it is determined that the current position of the first user is located in the real scene, determine that the first user applies the trigger operation to the first terminal in the real scene.
In an optional implementation manner, the terminal further includes a display content transformation module, where the display content transformation module is configured to:
in the process of presenting the AR interactive content, responding to the position change and/or the view angle change of the first terminal, and determining a changed second view angle of the first terminal;
determining AR interaction content at the second perspective based on the AR interaction data and the second perspective;
and dynamically switching the AR interactive content displayed on the first terminal under the first visual angle to the AR interactive content under the second visual angle based on the corresponding part of the AR interactive data between the first visual angle and the second visual angle.
In an optional implementation manner, when the interactive content display module is configured to acquire the AR interactive data configured for the real scene to which the current position belongs, the interactive content display module is specifically configured to:
sending a data request for acquiring the AR interaction data to a server;
receiving the AR interaction data fed back by the server, wherein the AR interaction data are generated according to each virtual article in a virtual scene corresponding to the real scene and a plurality of AR excitation props configured for the real scene and contain preset motion tracks of the AR excitation props in the real scene.
In an optional implementation manner, when the interactive content display module is configured to acquire the AR interactive data configured for the real scene to which the current position belongs, the interactive content display module is specifically configured to:
acquiring a plurality of AR excitation props configured for the real scene and a virtual scene which is constructed for the real scene in advance and corresponds to the real scene;
generating a preset motion track of the AR excitation prop in the virtual scene based on the initial motion position of the AR excitation prop in the virtual scene and each virtual article in the virtual scene;
and generating AR interaction data configured for the real scene based on the virtual scene, each AR excitation prop and the corresponding preset motion trail.
In an optional implementation manner, the terminal further includes a trigger time determining module, specifically configured to:
determining a trigger time of the trigger operation and an activity start time of the AR interaction activity;
if the trigger time is before the activity start time, determining a time difference between the trigger time and the activity start time;
displaying countdown information corresponding to the time difference on the first terminal.
In an optional embodiment, the preset operation response module is configured to determine a first preset operation of the first user for a target prop of the plurality of AR excitation props by:
in the process of each AR excitation prop moving along its respective preset motion trajectory, if an operation applied by the first user to any AR excitation prop is received, and the AR excitation prop has moved so that its front face is oriented toward the first user when the operation is received, take that AR excitation prop as the target prop and determine that the first user has applied a first preset operation to the target prop.
In an optional embodiment, when the target prop processing module is configured to cancel the display of the target prop in the AR interactive content, the target prop processing module is specifically configured to:
control the target prop to move, within the AR interactive content, into a state in which its front face is oriented toward a target user, wherein the target user is the first user or the second user;
and display a prop allocation animation while the front face is oriented toward the target user, and cancel the display of the target prop after the prop allocation animation finishes.
An embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions being executed by the processor to perform the steps of the above-mentioned interaction method.
Alternative implementations of the present disclosure also provide a computer-readable storage medium having stored thereon a computer program, which, when executed, performs the steps of the above-described interaction method.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated into and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of an interaction method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating an interaction scenario provided by an embodiment of the present disclosure;
fig. 3 is a second schematic diagram of an interaction scenario provided by the embodiment of the present disclosure;
FIG. 4 shows a flow chart of another interaction method provided by an embodiment of the present disclosure;
FIG. 5 illustrates one of the interface content displays provided by embodiments of the present disclosure;
FIG. 6 illustrates a second display of interface content provided by an embodiment of the present disclosure;
fig. 7 shows one of the schematic diagrams of a terminal provided by the embodiments of the present disclosure;
fig. 8 shows a second schematic diagram of a terminal provided by the embodiment of the present disclosure;
fig. 9 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The components of the embodiments, as generally described and illustrated in the figures here, can be arranged and designed in a wide variety of configurations. Thus, the following detailed description is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
Research has found that, beyond the common practice of issuing electronic red packets through terminals such as mobile phones, some apps or platforms also release activities in the form of "red packet rain" to reward users: within a certain time window, a large number of falling flat red-packet pictures are displayed on the user's terminal, and the user taps the screen to obtain a red-packet prop. This format is rigid, the visuals are plain, user interactivity is poor, and the experience is unsatisfying.
Based on the above research, the present disclosure provides an interaction method in which the first terminal of a first user can display, at the current view angle, the AR interactive content of an interactive activity in which multiple people can participate. When no second user in the same real scene has also operated the target prop operated by the first user, the target prop can be allocated to the first user and its display cancelled in the AR interactive content. Presenting the interactive content in AR greatly improves the user's immersion during the activity; the approach is novel and unique, with strong playability. It also enables activity sharing among multiple users and interaction among users in the same scene: all users participating in the interaction share, and interact with, the same AR interactive content, which effectively strengthens the connection among participants, helps mobilize users' enthusiasm for interaction, increases the fun of the interaction, and improves interactivity between users.
The above-mentioned defects were identified by the inventor through practice and careful study; therefore, the discovery of the above problems, and the solutions to them proposed below, should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, an interaction method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the interaction method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the interaction method may be implemented by a processor invoking computer readable instructions stored in a memory.
The following describes the interaction method provided by the embodiments of the present disclosure, taking the case in which the execution subject is a user's terminal as an example.
Referring to fig. 1, a flowchart of an interaction method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S103, where:
s101: displaying AR interactive content under a current first visual angle on a first terminal of a first user, wherein the AR interactive content comprises a plurality of AR excitation props moving according to a preset motion trail.
In this step, when the first user wants to participate in the AR interaction, the AR interaction content may be displayed through a display interface of the first terminal, and the AR interaction content displayed by the first terminal may be the AR interaction content that is visible to the first user at the first viewing angle.
The first user is any one of users participating in AR interaction.
The first terminal is an electronic device used by the first user to participate in the AR interaction activity, and the first terminal may be a handheld device of the first user, such as a mobile phone, a tablet, and the like, and may also be an electronic device that is fixedly installed in an interaction place and can be used for the first user interaction, such as AR glasses and the like.
When the first user wants to participate in the AR interactive activity, the first terminal may be used to trigger the AR interactive activity through various preset ways, for example, an online page may be opened, a client installed in advance may be opened, or a corresponding two-dimensional code may be scanned, and then presentation of an AR special effect of at least a part of the AR interactive content may be achieved through data loading and by combining with the view angle information of the first user, the direction information of the terminal, and the like.
And the AR interaction content comprises a plurality of AR excitation props moving according to a preset motion track.
Accordingly, in a possible implementation manner, for displaying the AR interactive content at the current first viewing angle on the first terminal of the first user, the following steps may be specifically performed:
responding to a trigger operation of a first user for a first terminal, and acquiring AR interaction data configured for a real scene to which a current position belongs based on the current position of the first user;
and displaying the AR interactive content of the first terminal under the current first visual angle on the first terminal based on the AR interactive data.
Here, the first user can start the AR interaction activity by applying a trigger operation to the first terminal. After the activity is started, the position of the first terminal can be determined, for example through the terminal's built-in GPS positioning or positioning recorded by a cloud server, and used as the first user's current position. The real scene where the first user currently is can be located from the current position, so that the AR interaction data configured for that real scene can be obtained and used to display the AR interactive content at the first terminal's current first view angle.
The first view angle may be the view range of the stereoscopic scene that the first terminal can present, determined from the terminal's orientation information in the real scene combined with its current position; alternatively, the first user's gaze direction may be tracked using techniques such as eye tracking and combined with the current position and the terminal's picture presentation range to determine the view range of the stereoscopic scene presented to the first user.
Correspondingly, the trigger operation of the first user for the first terminal may be an operation such as clicking, double-clicking or sliding on a pre-installed client, an online page, or an on-terminal option for starting the AR interaction activity, or may be a start path such as scanning corresponding content using the first terminal's scanning function.
Specifically, in a possible implementation manner, according to different starting manners, it may be determined that the first user applies the trigger operation to the first terminal through the following steps:
if the first user uses the first terminal to scan an interactive information code set in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene; or
if the first user opens a display interface corresponding to the AR interaction activity through the first terminal and it is determined that the current position of the first user is located in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene.
For applying the trigger operation to the first terminal, an activity information code may be posted in the real scene corresponding to the AR interaction activity. The activity information code may be a two-dimensional code, a barcode, or the like; if the first user successfully scans it with the first terminal, this indicates that the first user can participate in this AR interaction activity. For example, the user may open interactive software on a mobile phone, find the code-scanning function in the software, and scan the two-dimensional code at the activity venue. Alternatively, the user may enter a specific password into the interactive software to unlock this AR interaction activity, or open the display interface corresponding to the AR interaction activity through a client, an online page, or some specific page bound to the activity.
Specifically, when the trigger operation is the user opening the display interface corresponding to the AR interaction activity, whether the first user is located in the real scene corresponding to the activity can be determined from the first terminal's position information after the interface is opened; if so, the first user can be considered to have applied the trigger operation.
Further, if the first user is not located in the real scene corresponding to the AR interaction activity, the first terminal may display corresponding position guidance information to the first user, for example route information or scene information for that real scene; the first terminal may also use a corresponding guidance tool, such as a guidance icon, to indicate the real scene corresponding to the AR interaction activity, so as to guide the user to move there.
In practical applications, the user's mobile phone display interface can show indicative information such as arrow props or travel instructions; the user can move in the direction guided by the arrow or the travel instructions, and the indicative information disappears once the user reaches the real scene corresponding to the AR interaction activity.
In practical applications, the AR interaction data may be produced by constructing, from the scene information of the real scene, a virtual scene corresponding to it, then generating the corresponding AR excitation props from the prepared activity information of the AR interaction activity, and generating the AR interaction data for the activity from the AR excitation props and the virtual scene.
Correspondingly, in a possible implementation manner, for obtaining the AR interaction data, a data request for obtaining the AR interaction data may be sent to a server, and then the AR interaction data fed back by the server is received, where the AR interaction data is data that is generated according to each virtual article in a virtual scene corresponding to the real scene and a plurality of AR excitation props configured for the real scene and contains a preset motion trajectory of the AR excitation props in the real scene.
Here, after the first user determines to participate in the AR interaction, for example after applying the trigger operation, a data request may be sent to the server through the first terminal to request the AR interaction data. Correspondingly, after receiving the request, the server may process it through verification and the like, and after determining that the request satisfies the corresponding conditions, for example that the first user is in the real scene configured with the AR interaction, the server sends the AR interaction data to the first terminal. Once the first terminal receives the AR interaction data from the server, the AR interactive content can be presented.
Properties of the AR excitation props, such as their type and size, can be designed according to the actual application scene so that their motion fits the virtual scene better.
The AR interaction data may be generated in advance by the server and bound to a fixed real scene; a user who enters the real scene bound to the AR interaction data can then participate in the AR interaction activity and have the AR interactive content displayed.
In this way, generating and distributing the AR interaction data on the server side reduces the waiting time for data loading when users join the activity and lightens the terminal's computational burden and resource consumption.
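Sketching the client side of this exchange: the endpoint URL, query parameter and plain-string response below are placeholders invented for illustration, as the disclosure does not define a wire format. The terminal would call this once the trigger operation is confirmed, then render the AR interactive content from the returned data.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Client-side sketch of the data request; everything about the endpoint
// and response shape is an assumption, not defined by the patent.
fun requestArInteractionData(sceneId: String): String {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/ar/interaction-data?scene=$sceneId"))
        .GET()
        .build()
    // The server is expected to verify the request (e.g. that the user is
    // inside the bound real scene) before returning the serialized data.
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    return response.body()
}
```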
Alternatively, the AR interaction data may be generated by the first terminal in real time, because objects, the environment, and other contents in the real scene may change with the actual situation, which could otherwise cause the virtual content to become misaligned with reality during the interaction.
The data of the AR excitation prop comprise shape information, size information, weight information and other information required for generating the AR excitation prop.
Specifically, in a possible implementation manner, the obtaining of the AR interaction data may be further implemented by the following steps:
acquiring a plurality of AR excitation props configured for the real scene and a virtual scene which is constructed for the real scene in advance and corresponds to the real scene;
generating a preset motion track of the AR excitation prop in the virtual scene based on the initial motion position of the AR excitation prop in the virtual scene and each virtual article in the virtual scene;
and generating AR interaction data configured for the real scene based on the virtual scene, the AR excitation props and the corresponding preset motion tracks.
Here, the multiple AR excitation props configured for the real scene and the virtual scene constructed in advance for and corresponding to the real scene may be acquired through communication with the server, or according to the actual conditions of the real scene, or according to the activity information of the AR interaction activity. Each AR excitation prop then starts moving from its initial motion position in the virtual scene and moves along a preset motion trajectory generated in combination with each virtual article in the virtual scene, that is, with the placement of the real articles in the real scene. In this way, the AR interaction data for realizing the AR interaction activity can be obtained.
The motion trajectory simulates how the AR excitation prop would move in the real scene: the prop starts from some initial position and moves according to simulated real-life physical laws until its position coincides with the ground, at which point it may disappear, or remain stationary on the ground in the state in which it landed. During its motion the prop can fall in a way determined by conditions such as its weight, shape and size, and the trajectory of its fall can be preset through coordinate settings.
While an AR excitation prop moves in the virtual scene, when it reaches the position of a virtual article it can simulate colliding with or landing on the corresponding real article by colliding with or landing on the virtual article, thereby changing its motion state. When the virtual scene is fused with the real scene, AR excitation props at different depths can be displayed on the first terminal's display interface based on the first terminal's position, and a prop may appear occluded by a real object in the display interface.
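The fall described here can be precomputed as a list of sampled positions. The sketch below uses a fixed time step and gravity constant and clamps at a ground plane; collisions with virtual articles, rotation, and the disappear-or-rest choice on landing are omitted for brevity, and all constants are illustrative.

```kotlin
// Precompute a prop's preset motion track under simulated gravity.
data class Point3(val x: Double, val y: Double, val z: Double)

fun fallTrajectory(start: Point3, groundY: Double = 0.0,
                   g: Double = 9.8, dt: Double = 1.0 / 30.0): List<Point3> {
    val track = mutableListOf(start)
    var y = start.y
    var vy = 0.0
    while (y > groundY) {
        vy += g * dt                    // accelerate downward each frame
        y = maxOf(groundY, y - vy * dt) // clamp at the ground plane
        track.add(Point3(start.x, y, start.z))
    }
    return track // prop disappears or rests once the last point is reached
}

fun main() {
    val track = fallTrajectory(Point3(0.0, 2.0, 0.0))
    println("frames: ${track.size}, final y: ${track.last().y}")
}
```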
For example, referring to fig. 2, which shows one of the interaction scene diagrams provided by the embodiments of the present disclosure: a "red packet rain" virtual scene attached to a real scene, taking an office environment as the real scene and red packets as the AR excitation props. During the fall of the red packet rain in the virtual scene, each red packet starts moving from a preset height in the air. All red packets are set to the same size, but following human visual perception, in which identical real-world objects appear larger when near and smaller when far, the red packets in fig. 2 appear in different sizes: the large red packets are closer to the first terminal than the small ones, and a distant red packet prop may be occluded by a real object. During its movement, a red packet simulates the motion of a real red packet in the real scene; it can rotate, collide and so on. For example, a red packet prop in the figure can land on the desktop or collide with objects on the desk.
For the first terminal and/or the server, map information of the real scene needs to be acquired before the virtual scene corresponding to the real scene is constructed. Once the map information is obtained, the virtual scene is attached to the real scene through a coordinate-system transformation, and the AR activity scene is finally generated on the first terminal's display interface.
Correspondingly, a high-precision map of the real scene can be constructed using traditional three-dimensional reconstruction, i.e. Structure from Motion (SFM), and Simultaneous Localization and Mapping (SLAM). Traditional three-dimensional reconstruction collects data and then processes it offline to reconstruct the environment around the device's position, while SLAM enables a machine in a completely unfamiliar environment to continuously capture images of its surroundings as it moves, moving and building the map at the same time.
S102: and responding to a first preset operation of the first user for a target prop in the plurality of AR excitation props, and determining whether a second user applies a second preset operation for the target prop through a second terminal in a real scene where the first user is located.
In this step, the multiple AR excitation props appear in the first terminal's display interface. After the first user applies a first preset operation to the target prop, the first terminal may send the server an instruction indicating that the first user has applied the first preset operation to the target prop. The server detects whether it has received the same instruction from a second terminal, that is, whether the second user corresponding to the second terminal has applied the same preset operation to the same target prop. After obtaining the detection result, the server may determine whose preset operation takes effect, the first user's or the second user's, and send an interaction-permitted instruction to the terminal whose operation it determined to be effective. In this way, whether the second user exists can be determined.
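A server-side arbiter for this detection might be sketched as follows; the grouping window and the decision to escalate near-simultaneous claims to the tie-break rules are assumptions consistent with the embodiments above, not a prescribed design.

```kotlin
// Group reported operations by prop and decide which terminal's
// operation takes effect.
data class ReportedOp(val terminalId: String, val propId: String, val timeMs: Long)

class PropArbiter(private val windowMs: Long = 300) {
    private val reports = mutableListOf<ReportedOp>()

    fun report(op: ReportedOp) { reports.add(op) }

    // Sole claimant wins outright; otherwise near-simultaneous claims are
    // escalated to the tie-break rules (time, permission, orientation).
    fun decide(propId: String): String? {
        val ops = reports.filter { it.propId == propId }.sortedBy { it.timeMs }
        val first = ops.firstOrNull() ?: return null
        val contested = ops.drop(1).any { it.timeMs - first.timeMs < windowMs }
        return if (contested) null else first.terminalId
    }
}

fun main() {
    val arbiter = PropArbiter()
    arbiter.report(ReportedOp("terminal-1", "red-1", 1_000))
    println(arbiter.decide("red-1")) // terminal-1: no competing instruction
}
```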
The first user may apply the first preset operation to the target prop displayed on the first terminal's display interface, for example by clicking or sliding on it; alternatively, the first user may apply the first preset operation within the virtual scene attached to the real scene, for example through a gesture of the user that the first terminal captures and applies to the target prop, where the gesture used for the first preset operation may be set in advance.
Further, the target prop is an AR excitation prop to which a preset operation can be applied, or an AR excitation prop in an article state in which a preset operation can be applied, for example a prop designated for at least some specific users, or a prop spatially close to a user.
In practical applications, when users perform AR interaction through terminals, many AR excitation props often appear in the AR interactive content presented on a terminal, especially in interactive activities for multiple participants, such as multi-user red-packet-grabbing activities, which generally take the form of falling red packet rain for everyone to grab together. In the AR interactive content, each AR excitation prop has its own motion trajectory and motion state, and while the props move, every user participating in the activity can try to obtain them. This can make prop acquisition disorderly; for example, when several people acquire the same prop simultaneously, its attribution must be judged. To reduce the probability of too many users competing for the same prop, constraints can be placed on acquisition: for example, when a user grabs a prop, the prop may be required to be in a certain state or posture, the distance and angle between the prop and the user may be required to meet certain conditions, or the prop may be required to be within a certain area. Only when the set conditions are met is the user's operation on the prop counted as effective, after which the prop can be allocated.
Thus, in some possible embodiments, a first preset operation of the first user for a target prop of the plurality of AR excitation props may be determined by the following step:
in the process of each AR excitation prop moving along its respective preset motion trajectory, if a trigger operation applied by the first user to any AR excitation prop is received, and the AR excitation prop has moved so that its front face is oriented toward the first user when the trigger operation is received, taking that AR excitation prop as the target prop and determining that the first user has applied a first preset operation to the target prop.
In this step, during the motion of each AR excitation prop along its preset trajectory, only when an AR excitation prop has moved so that its front face is oriented toward the first user is the first user's trigger operation on it accepted; that prop is then taken as the target prop, and the first user is determined to have applied the first preset operation to it.
Whether the AR excitation prop has moved so that its front face is oriented toward the first user can be judged against the plane of the terminal interface the user is using: when the prop's front face is toward the user and the plane of the prop's front face is parallel to the plane of the terminal interface, the prop can be considered to face the user front-on; alternatively, if the angle between the prop's front face and the plane of the terminal interface is smaller than a preset angle, for example smaller than 10 degrees, the prop can be considered to essentially face the user.
For example, referring to fig. 3, which shows the second interaction scene diagram provided by the embodiments of the present disclosure, fig. 3 depicts the state of the red packets at a certain moment during their fall. A click operation on a red packet may be set to be valid only when the red packet's front face is fully displayed in the terminal's display interface: red packets A, B and C in fig. 3 are red packets to which a preset operation can be applied at that moment, while the other red packets in the figure cannot be clicked and obtained, for example red packet D, whose back face is fully displayed in the terminal's display interface.
Here, the user's operation on a prop is regarded as a valid operation only when the prop has moved so that its front faces the user, but the present disclosure is not limited thereto. In other embodiments, the state of the prop may be left unrestricted, that is, an operation on the prop may be counted as valid at any time without requiring the prop to face the user frontally; the conditions for a valid operation may be set as needed.
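As an illustration of the front-facing judgment described above, the following minimal sketch tests whether the included angle between the prop's front plane and the terminal interface plane stays below the preset threshold. All names, and the modeling of both planes as unit normal vectors oriented toward the user, are assumptions made for the example; the 10-degree value follows the text.

```python
import numpy as np

def is_front_facing(prop_front_normal, screen_normal, max_angle_deg=10.0):
    # Both arguments are unit normals oriented toward the user, in the same
    # coordinate frame. The angle between the prop's front plane and the
    # terminal interface plane equals the angle between the two normals.
    cos_angle = np.clip(np.dot(prop_front_normal, screen_normal), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    return angle_deg <= max_angle_deg

# A prop whose front plane is parallel to the screen (angle 0) is valid;
# one tilted 25 degrees away is not.
print(is_front_facing(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # True
print(is_front_facing(np.array([0.0, np.sin(np.radians(25)), np.cos(np.radians(25))]),
                      np.array([0.0, 0.0, 1.0])))                              # False
```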
S103: and if no such second preset operation exists, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
In this step, if it is determined that no second user in the real scene where the first user is located has applied a second preset operation to the target prop through a second terminal, the target prop is directly allocated to the first user, the first user obtains the target prop, and the target prop is no longer displayed in the AR interactive content.
Wherein, canceling the display of the target prop in the AR interactive content may further include:
controlling the target prop to move in the AR interactive content to a state in which its front faces a target user, wherein the target user comprises the first user and the second user;
and displaying a prop allocation animation while the front faces the target user, and canceling the display of the target prop after the prop allocation animation finishes.
In this step, after the target prop is determined to be allocated to the user who acquires it, the target prop may first be controlled to move in the AR interactive content to a state in which its front faces the target user and be displayed on the display interface of the terminal used by that user; a prop allocation animation may then be displayed, and after the animation finishes, the display of the target prop is cancelled in the AR interactive content.
Controlling the target prop to move to a state in which its front faces the target user may mean that, regardless of the prop's front orientation at the moment it is determined to be allocated, the prop is directly displayed front-facing on the display interface of that user's terminal.
As before, whether the target prop faces the target user frontally may be judged with the plane of the terminal interface used by the user as a reference: when the front of the target prop faces the user and the plane of the prop's front is parallel to the plane of the terminal interface, the target prop may be considered to face the user.
After the first user acquires the target prop, the first user may apply the first preset operation to a next target prop, and step S102 is repeated until the AR interaction activity ends.
According to the interaction method provided by the embodiment of the disclosure, AR interactive content under a current first view angle is displayed on a first terminal of a first user, the AR interactive content including a plurality of AR excitation props moving according to preset motion tracks; in response to a first preset operation of the first user for a target prop among the plurality of AR excitation props, it is determined whether a second user has applied a second preset operation to the target prop through a second terminal in the real scene where the first user is located; and if no such second preset operation exists, the target prop is allocated to the first user and its display is cancelled in the AR interactive content.
In this way, by displaying AR interactive content, interaction between users and props, as well as among users themselves, is realized. Real-time multi-user interaction can greatly mobilize users' enthusiasm for interacting, increase the interest of the interaction, and improve interactivity between users. Moreover, presenting the interactive activity in AR form can greatly enhance users' sense of immersion during the activity; the form is novel and unique, and highly playable.
Referring to fig. 4, fig. 4 is a flowchart of another interaction method provided in the embodiment of the present disclosure. As shown in fig. 4, an interaction method provided by the embodiment of the present disclosure includes:
s401: displaying AR interactive content under a current first visual angle on a first terminal of a first user, wherein the AR interactive content comprises a plurality of AR excitation props moving according to a preset motion trail.
S402: and responding to a first preset operation of the first user for a target prop in the plurality of AR excitation props, and determining whether a second user applies a second preset operation for the target prop through a second terminal in a real scene where the first user is located.
S403: and if no such second preset operation exists, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
The descriptions of steps S401 to S403 may refer to the descriptions of steps S101 to S103; the same technical effects can be achieved and the same technical problems solved, which is not described herein again.
S404: if the first preset operation exists, acquiring first operation time of the first user for applying the first preset operation and second operation time of the second user for applying the second preset operation.
In this step, if it is determined that such a second user exists, the specific attribution of the target prop needs to be determined. At this time, the first operation time corresponding to the first preset operation and the second operation time corresponding to the second preset operation may first be obtained for further judgment.
S405: and if the time difference between the first operation time and the second operation time is smaller than a preset time difference threshold value and the first operation time is after the second operation time, determining whether the first prop acquisition permission of the first user is higher than the second prop acquisition permission of the second user.
In this step, the time difference between the first operation time and the second operation time may be calculated. For the same target prop, when two users apply their preset operations at what appears, to the human eye or in real time, to be the same moment, the server may still observe a time difference between them. This difference can be very small, for example less than 1 millisecond or 1 microsecond. A threshold is therefore set: any pair of operations whose time difference meets the preset time difference threshold is regarded as having occurred simultaneously.
Correspondingly, after the time difference between the first operation time and the second operation time is determined to be smaller than the preset time difference threshold, the first operation time may in fact fall after the second operation time. At this time, the prop acquisition permissions of the first user and the second user need to be compared, and the target prop is awarded to the user with the higher prop acquisition permission.
The prop acquisition permission may be a priority with which a user acquires the target prop. For example, certain limiting conditions may be preset. Different users participating in the AR interaction activity have different participation conditions, such as the number of times they have participated in similar activities or the performance of their devices, and priorities may be set for these conditions; for example, a user who has participated in similar activities more often may be given a higher priority than one who has participated less often. The prop acquisition permission may be determined according to the priority of a single participation condition, or according to the combined priorities of a plurality of participation conditions.
S406: and if the first prop acquisition permission is higher than the second prop acquisition permission, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
In this step, if the time difference between the first operation time and the second operation time is smaller than the preset time difference threshold, that is, the two operations were triggered at substantially the same time, the target prop may be preferentially allocated to the user with the higher prop acquisition permission. That is, if the first prop acquisition permission is higher than the second prop acquisition permission, the target prop is allocated to the first user, and at the same time the display of the target prop is cancelled in the AR interactive content.
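Taken together, steps S404 to S406 amount to the following conflict-resolution rule, sketched here with illustrative field names, microsecond timestamps, and a hypothetical 1-millisecond threshold; the orientation tie-break referenced at the end is described further below.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PresetOperation:
    user_id: str
    timestamp_us: int  # server-side receipt time, in microseconds
    permission: int    # prop-acquisition permission; higher wins a tie

# Hypothetical threshold: smaller differences count as "simultaneous".
TIME_DIFF_THRESHOLD_US = 1_000  # 1 millisecond

def resolve_target_prop(first_op: PresetOperation,
                        second_op: PresetOperation) -> Optional[str]:
    dt = abs(first_op.timestamp_us - second_op.timestamp_us)
    if dt >= TIME_DIFF_THRESHOLD_US:
        # Operations are distinguishable in time: the earlier one wins.
        return (first_op.user_id if first_op.timestamp_us < second_op.timestamp_us
                else second_op.user_id)
    if first_op.permission != second_op.permission:
        # Simultaneous operations: break the tie by acquisition permission.
        return (first_op.user_id if first_op.permission > second_op.permission
                else second_op.user_id)
    # Equal permission: fall through to the orientation tie-break described below.
    return None
```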
Here, after the target prop is acquired by the user with the higher acquisition permission, it disappears from the AR interactive content and no user can see it any more. The terminal may perform instant settlement after storing the target prop in the user's online pocket, that is, a settlement result is obtained for each target prop as soon as it is acquired, after which the next interactive action is performed. Alternatively, the acquired target props may be settled after the AR interaction activity ends, either according to the total number of target props acquired, or with each acquired target prop corresponding to a different settlement result.
The end of the AR interaction activity may be reached when the real-time display time of the first terminal meets the activity end time of the AR interaction activity, when the elapsed activity time reaches the preset duration of the AR interaction activity, when the user's preset number of AR interactions has been used up, and so on.
For example, after a red-envelope-rain activity ends, the number of red-envelope props finally acquired by each user may be counted and the participating users ranked from most to fewest. The user ranked first may obtain the highest-level reward, the user ranked second the next-level reward, and so on. Alternatively, each red-envelope prop in the rain may represent a different reward, so that users obtain different rewards depending on which red-envelope props they acquire.
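A minimal sketch of the post-activity settlement example just given, assuming hypothetical user identifiers and reward-tier names:

```python
def settle_rewards(prop_counts, reward_tiers):
    # Rank users by the number of props finally acquired, most first,
    # and map each rank to a reward tier (rank 0 -> highest tier).
    ranked = sorted(prop_counts.items(), key=lambda kv: kv[1], reverse=True)
    results = {}
    for rank, (user_id, count) in enumerate(ranked):
        tier = reward_tiers[min(rank, len(reward_tiers) - 1)]
        results[user_id] = {"props": count, "reward": tier}
    return results

# Example: three participants, two reward tiers plus a default tier.
print(settle_rewards({"u1": 12, "u2": 30, "u3": 7},
                     ["grand prize", "second prize", "participation prize"]))
```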
A sound effect may also be added to prop acquisition. When a target prop is actually acquired, the terminal may emit a sound effect indicating successful acquisition; when acquisition fails or the operation is invalid, the terminal may emit a sound effect indicating unsuccessful acquisition. For example, when the acquisition permission of the first user is higher than that of the second user, the first user acquires the target prop, so when the first user applies the first preset operation to the target prop, the first terminal emits a success sound effect, while the second terminal of the second user, who did not acquire the target prop, emits a failure sound effect.
In a possible implementation, after step S405, the method comprises:
and if the second prop acquisition permission is higher than the first prop acquisition permission, canceling the display of the target prop in the AR interactive content, and increasing the number of interactions for the first user.
In this step, after it is determined that a second user in the real scene where the first user is located has applied a second preset operation to the target prop through a second terminal, if the second prop acquisition permission is higher than the first prop acquisition permission, the target prop is awarded to the second user through the second terminal, and, for the first user, the number of interactions is increased as compensation.
That is, although the server received the first preset operation of the first user for the target prop, the first user cannot obtain it because of the lower prop acquisition permission. For fairness, the server therefore sends an instruction to the first terminal to increase the first user's number of interactions. Here the AR interaction data of the AR interaction activity may include the number of interactable times set for each user, that is, the number of operations the user may perform to acquire AR excitation props.
Correspondingly, after the first user's number of interactions is exhausted, the first user can no longer perform the AR interaction activity.
In a possible implementation, after step S404, the method further includes:
if the time difference between the first operation time and the second operation time is smaller than the preset time difference threshold, determining the front orientation of the target prop in the AR interaction content at the first operation time or the second operation time;
if the prop front is oriented toward the first user, allocating the target prop to the first user;
if the prop front is oriented toward the second user, allocating the target prop to the second user;
and canceling the display of the target prop in the AR interactive content once the target prop has been allocated to the first user or the second user.
In this step, when it is determined that the time difference between the first operation time and the second operation time is smaller than the preset time difference threshold, that is, the two operations were triggered almost simultaneously, the attribution of the target prop may be further judged from its orientation: the front orientation of the target prop in the AR interaction content when the first user applies the first preset operation at the first operation time, and its front orientation when the second user applies the second preset operation at the second operation time, are determined. The target prop is allocated to whichever user its front is oriented toward, after which the display of the target prop is cancelled in the AR interactive content.
For a user performing AR interaction, the plane of the terminal interface used by that user is taken as the reference: when the front of the target prop faces the user and the plane of the prop's front is parallel to the plane of the terminal interface, the front of the target prop may be considered to face that user.
Correspondingly, whether the target prop faces a user frontally may be judged from the size of the included angle between the plane of the target prop's front and the plane of the terminal's display interface: the smaller the included angle, the more directly the prop's front faces that user. Thus, for the first user and the second user, whether the front of the target prop faces the first user or the second user can be determined by comparing the included angle between the plane of the first terminal's display interface and the plane of the prop's front with the included angle between the plane of the second terminal's display interface and the plane of the prop's front.
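The angle comparison just described can be sketched as follows; representing each plane by a unit normal vector, and folding the plane angle into [0, 90] degrees, are modeling assumptions for the example.

```python
import numpy as np

def plane_angle_deg(normal_a, normal_b):
    # The angle between two planes equals the angle between their unit
    # normals, folded into [0, 90] degrees by taking the absolute dot product.
    cos_angle = np.clip(abs(np.dot(normal_a, normal_b)), 0.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def assign_by_orientation(prop_front_normal, first_screen_normal, second_screen_normal):
    # The prop goes to whichever user's display plane forms the smaller
    # angle with the prop's front face at the moment of the operations.
    angle_first = plane_angle_deg(prop_front_normal, first_screen_normal)
    angle_second = plane_angle_deg(prop_front_normal, second_screen_normal)
    return "first_user" if angle_first <= angle_second else "second_user"
```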
In practical applications, the first view angle may correspond to the first user's capture of the environment through the display interface of the first terminal, and the content displayed in that interface may be part or all of the environment that the first user's eyes can capture.
For example, referring to fig. 5, fig. 5 shows one of the interface content display diagrams provided by the embodiment of the present disclosure. In practice, when the first user interacts with the AR interactive content using a mobile phone, the phone's camera captures an image of the interaction environment where the first user is located and displays it on the phone screen, and the first user performs subsequent operations by observing the AR interactive content displayed there. The picture on the phone screen is in fact part of the environment observable by the human eye, with the AR interactive content rendered onto it by the electronic device. The first user may instead use AR glasses; when wearing AR glasses, the environment the first user can observe is the entire environment visible to the eyes at that moment.
Correspondingly, while the AR interactive content at the current first view angle is displayed on the first terminal of the first user, the AR interactive content presented by the first terminal may change dynamically with the view angle of the first terminal or of the first user. Thus, after the AR interactive content at the current first view angle is displayed on the first terminal of the first user, the method includes:
in the process of presenting the AR interactive content, responding to the position change and/or the view angle change of the first terminal, and determining a changed second view angle of the first terminal;
determining AR interaction content at the second perspective based on the AR interaction data and the second perspective;
and dynamically switching the AR interactive content displayed on the first terminal under the first visual angle to the AR interactive content under the second visual angle based on the corresponding part of the AR interactive data between the first visual angle and the second visual angle.
In this step, as the position and/or orientation of the first terminal changes, the first view angle of the first terminal may change into the second view angle, and during this change the portion of the AR interaction data whose corresponding AR interactive content the first terminal can display also changes; the AR interactive content displayed on the first terminal at the first view angle can therefore be dynamically switched to the AR interactive content at the second view angle.
While the first view angle changes into the second view angle, the position of the first terminal may move. As its position changes, the depth relation of the first terminal to each AR excitation prop also changes: the position of an AR excitation prop relative to the world coordinate system does not change, but its position relative to the first terminal changes with the terminal's movement. The first user can therefore search for a target prop that satisfies the interaction requirements by changing the position of the first terminal.
For example, when a user participates in a red-envelope-rain activity with a mobile phone, the user can change the range of the AR activity scene displayed by the phone by moving the phone's position, but the phone cannot be moved out of the real scene corresponding to the AR interaction activity.
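The relationship described here, fixed world-space prop positions viewed through a moving terminal, can be sketched with a standard pose transform. The camera-to-world matrix is assumed to come from the AR tracking layer; the names are illustrative.

```python
import numpy as np

def prop_position_in_view(prop_world_pos, camera_to_world):
    # Prop positions are fixed in world coordinates; only the terminal's
    # pose changes as the user moves. Inverting the 4x4 camera-to-world
    # pose maps a world point into the terminal's camera frame.
    world_to_camera = np.linalg.inv(camera_to_world)
    homogeneous = np.append(prop_world_pos, 1.0)
    x, y, z, _ = world_to_camera @ homogeneous
    return np.array([x, y, z])  # z is the prop's depth relative to the terminal

# Moving the terminal changes camera_to_world, so the same world-space prop
# re-projects to a new on-screen position and depth.
```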
In a possible implementation, before step S401, the method further comprises the following steps:
determining a trigger time of the trigger operation and an activity start time of the AR interaction activity;
if the trigger time is before the activity start time, determining a time difference between the trigger time and the activity start time;
displaying countdown information corresponding to the time difference on the first terminal.
In practical application, when the first user participates in the AR interaction activity by applying the trigger operation, the activity start time is the preset start time of the AR interaction activity, and the trigger time of the trigger operation is the real time at which the first terminal responds to the trigger operation; the start time may be either before or after the trigger time.
If the start time is before the trigger time, or the trigger time exactly coincides with the start time, the display interface of the first terminal jumps directly to the AR interaction activity interface without a countdown. If the start time is after the trigger time, the time difference between the trigger time and the start time is calculated and the result is displayed in the display interface of the first terminal.
The time difference may be expressed in a format of XX minutes and XX seconds, or the time may be converted into seconds only, that is, the time difference displayed by the first terminal is expressed as XX seconds.
Further, as the time difference decreases to zero, the display interface of the first terminal jumps from the countdown interface to the AR interaction activity start interface, and the AR interactive content is then displayed.
Displaying the countdown information on the first terminal may take the form of a countdown prop displayed on the display interface, where the countdown prop may use a three-dimensional virtual model, or a countdown number alone may be displayed directly on the display interface of the first terminal.
For example, referring to fig. 6, fig. 6 shows a second display of interface content provided by the embodiment of the present disclosure. The display interface of the mobile phone shown in fig. 6 shows the preparation stage before the AR interaction activity starts, with countdown information displayed on the phone interface.
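The countdown behaviour described above may be sketched as follows, supporting both display formats; the function and parameter names are illustrative assumptions.

```python
def countdown_text(activity_start_s: int, trigger_time_s: int,
                   seconds_only: bool = False):
    remaining = activity_start_s - trigger_time_s
    if remaining <= 0:
        # Start time is at or before the trigger: jump straight to the
        # AR interaction activity interface, no countdown needed.
        return None
    if seconds_only:
        return f"{remaining} s"
    minutes, seconds = divmod(remaining, 60)
    return f"{minutes:02d} min {seconds:02d} s"

print(countdown_text(activity_start_s=1000, trigger_time_s=905))  # 01 min 35 s
```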
The interaction method provided by the embodiment of the disclosure realizes, by displaying AR interactive content, interaction between users and props as well as among users themselves, with real-time multi-user interaction, and presents the interactive activity in AR form, which can greatly enhance users' sense of immersion during the activity; the form is novel and unique, and highly playable. If multiple users apply preset operations to the same target prop simultaneously, the users' prop acquisition permissions are compared and the user with the highest permission obtains the target prop, which greatly mobilizes users' enthusiasm for interacting, increases the interest of the interaction, and improves both interactivity and fairness between users.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written implies neither a strict order of execution nor any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a terminal corresponding to the interaction method, and as the principle of solving the problem of the terminal in the embodiment of the present disclosure is similar to the interaction method described above in the embodiment of the present disclosure, the implementation of the terminal may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 7 and 8, fig. 7 is a first schematic diagram of a terminal according to an embodiment of the disclosure, and fig. 8 is a second schematic diagram of the terminal according to the embodiment of the disclosure. As shown in fig. 7, a terminal 700 provided in an embodiment of the present disclosure includes:
an interactive content display module 710, configured to display, on a first terminal of a first user, AR interactive content at a current first view angle, where the AR interactive content includes multiple AR excitation props that move according to a preset motion trajectory;
a preset operation response module 720, configured to determine, in response to a first preset operation of the first user for a target property in the plurality of AR excitation properties, whether a second user applies a second preset operation for the target property through a second terminal in a real scene where the first user is located;
and the target prop processing module 730 is configured to, if there is no second preset operation applied by the second user to the target prop through the second terminal, allocate the target prop to the first user, and cancel the display of the target prop in the AR interactive content.
In an optional implementation manner, as shown in fig. 8, the terminal 700 further includes an acquisition permission determining module 740, where the acquisition permission determining module 740 is specifically configured to:
if such a second preset operation exists, acquiring a first operation time at which the first user applied the first preset operation and a second operation time at which the second user applied the second preset operation;
if the time difference between the first operation time and the second operation time is smaller than a preset time difference threshold value and the first operation time is after the second operation time, determining whether the first prop acquisition permission of the first user is higher than the second prop acquisition permission of the second user;
and if the first prop acquisition permission is higher than the second prop acquisition permission, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
In an optional implementation, the acquisition permission determining module 740 is further configured to:
and if the second prop acquisition permission is higher than the first prop acquisition permission, canceling the display of the target prop in the AR interactive content, and increasing the number of interactions for the first user.
In an optional implementation manner, the obtaining permission determination module 740 is further configured to:
if the time difference between the first operation time and the second operation time is smaller than the preset time difference threshold, determining the front orientation of the target prop in the AR interaction content at the first operation time or the second operation time;
if the prop front is oriented toward the first user, allocating the target prop to the first user;
if the prop front is oriented toward the second user, allocating the target prop to the second user;
and canceling the display of the target prop in the AR interactive content once the target prop has been allocated to the first user or the second user.
In an optional implementation manner, the interactive content display module 710 is specifically configured to:
responding to a trigger operation of a first user for a first terminal, and acquiring AR interaction data configured for a real scene to which a current position belongs based on the current position of the first user;
and displaying the AR interactive content of the first terminal under the current first visual angle on the first terminal based on the AR interactive data.
In an optional implementation, the interactive content display module 710 is further configured to:
if the first user uses the first terminal to scan an interactive information code set in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene; or
if the first user starts a display interface corresponding to the AR interaction activity through the first terminal and the current position of the first user is determined to be in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene.
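The two trigger paths just listed can be sketched as below; the code value, the rectangular geofence test, and all names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SceneBounds:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, pos):
        x, y = pos
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def trigger_applied(scanned_code, expected_code, opened_activity_ui,
                    user_position, scene_bounds):
    # Path 1: the user scanned the interactive information code set in the scene.
    if scanned_code is not None and scanned_code == expected_code:
        return True
    # Path 2: the user opened the AR activity interface while located in the scene.
    return opened_activity_ui and scene_bounds.contains(user_position)
```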
In an optional implementation manner, the terminal 700 further includes a display content transformation module 750, where the display content transformation module 750 is specifically configured to:
in the process of presenting the AR interactive content, responding to the position change and/or the view angle change of the first terminal, and determining a changed second view angle of the first terminal;
determining AR interaction content at the second perspective based on the AR interaction data and the second perspective;
and dynamically switching the AR interactive content displayed on the first terminal under the first visual angle to the AR interactive content under the second visual angle based on the corresponding part of the AR interactive data between the first visual angle and the second visual angle.
In an optional implementation manner, when the interactive content display module 710 is configured to acquire the AR interactive data configured for the real scene to which the current location belongs, specifically, to:
sending a data request for acquiring the AR interaction data to a server;
receiving the AR interaction data fed back by the server, wherein the AR interaction data is generated according to each virtual article in a virtual scene corresponding to the real scene and the plurality of AR excitation props configured for the real scene, and contains the preset motion tracks of the AR excitation props in the real scene.
In an optional embodiment, when the interactive content display module 710 is configured to acquire the AR interactive data configured for the real scene to which the current position belongs, the interactive content display module is specifically configured to:
acquiring a plurality of AR excitation props configured for the real scene and a virtual scene which is constructed for the real scene in advance and corresponds to the real scene;
generating a preset motion track of the AR excitation prop in the virtual scene based on the initial motion position of the AR excitation prop in the virtual scene and each virtual article in the virtual scene;
and generating AR interaction data configured for the real scene based on the virtual scene, each AR excitation prop and the corresponding preset motion trail.
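A minimal sketch of the trajectory-generation step above, assuming a simple falling path, an axis-aligned box test for virtual articles, and illustrative parameters; the real generation logic is not specified beyond using the initial motion position and the virtual articles in the scene.

```python
import random

def generate_fall_track(start_xyz, floor_y, obstacles, steps=60):
    # Starting from the prop's initial position in the virtual scene, sample
    # a falling path and nudge it sideways around any virtual article it
    # would pass through (hypothetical axis-aligned box check).
    x, y, z = start_xyz
    track = []
    dy = (y - floor_y) / steps
    for _ in range(steps):
        y -= dy
        x += random.uniform(-0.02, 0.02)  # slight horizontal drift
        for (bx0, bx1, by0, by1, bz0, bz1) in obstacles:
            if bx0 <= x <= bx1 and by0 <= y <= by1 and bz0 <= z <= bz1:
                x = bx1 + 0.05  # step out of the virtual article
        track.append((x, y, z))
    return track
```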
In an optional implementation manner, as shown in fig. 8, the terminal 700 further includes a trigger time determining module 760, where the trigger time determining module 760 is specifically configured to:
determining a trigger time of the trigger operation and an activity start time of the AR interaction activity;
if the trigger time is before the activity start time, determining a time difference between the trigger time and the activity start time;
displaying countdown information corresponding to the time difference on the first terminal.
In an optional embodiment, the preset operation response module 720 is configured to determine a first preset operation of the first user for a target prop in the plurality of AR excitation props by:
in the process that each AR excitation prop moves according to its respective preset motion track, if an operation applied by the first user to any AR excitation prop is received, and the AR excitation prop has moved so that its front faces the first user at the time the operation is received, taking that AR excitation prop as the target prop and determining that the first user has applied the first preset operation to the target prop.
In an optional embodiment, when the target prop processing module 730 is configured to cancel the display of the target prop in the AR interactive content, it is specifically configured to:
controlling the target prop to move in the AR interactive content to a state in which its front faces a target user, wherein the target user comprises the first user and the second user;
and displaying a prop allocation animation while the front faces the target user, and canceling the display of the target prop after the prop allocation animation finishes.
The terminal provided by the embodiment of the disclosure can realize multi-user interaction in the same space, enhance the interactivity among users, enrich the forms and content of interaction, mobilize users' enthusiasm for interacting, and improve users' sense of immersion during interaction.
The description of the processing flow of each module in the terminal and the interaction flow between each module may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device 900, as shown in fig. 9, which is a schematic structural diagram of the computer device 900 provided in the embodiment of the present disclosure, and includes:
a processor 910, a memory 920, and a bus 930. The memory 920 stores machine-readable instructions executable by the processor 910 and includes a memory 921 and an external storage 922. The memory 921, also referred to as internal memory, temporarily stores operation data for the processor 910 and data exchanged with the external storage 922, such as a hard disk; the processor 910 exchanges data with the external storage 922 through the memory 921. When the computer device 900 runs, the processor 910 communicates with the memory 920 through the bus 930, and the machine-readable instructions, when executed by the processor 910, perform the steps of the interaction method described above.
The disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, performs the steps of the interaction method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the interaction method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope of the present disclosure, still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered thereby. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

1. An interactive method, characterized in that the method comprises:
displaying AR interactive content under a current first visual angle on a first terminal of a first user, wherein the AR interactive content comprises a plurality of AR excitation props moving according to a preset motion track;
responding to a first preset operation of the first user for a target prop in the plurality of AR excitation props, and determining whether a second user applies a second preset operation for the target prop through a second terminal in a real scene where the first user is located;
and if no such second preset operation exists, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
2. The method according to claim 1, wherein after determining whether a second preset operation is applied to the target prop by a second user through a second terminal in a real scene in which the first user is located, the method further comprises:
if such a second preset operation exists, acquiring a first operation time at which the first user applied the first preset operation and a second operation time at which the second user applied the second preset operation;
if the time difference between the first operation time and the second operation time is smaller than a preset time difference threshold value and the first operation time is after the second operation time, determining whether the first prop acquisition permission of the first user is higher than the second prop acquisition permission of the second user;
and if the first prop acquisition permission is higher than the second prop acquisition permission, allocating the target prop to the first user, and canceling the display of the target prop in the AR interactive content.
3. The method of claim 2, wherein after the determination of whether the first prop acquisition permission of the first user is higher than the second prop acquisition permission of the second user, the method comprises:
and if the second prop acquisition permission is higher than the first prop acquisition permission, canceling the display of the target prop in the AR interactive content, and increasing the number of interactions for the first user.
4. The method according to claim 2, wherein after the obtaining of the first operation time for the first user to apply the first preset operation and the second operation time for the second user to apply the second preset operation, the method further comprises:
if the time difference between the first operation time and the second operation time is smaller than the preset time difference threshold, determining the front orientation of the target prop in the AR interaction content at the first operation time or the second operation time;
if the prop front is oriented toward the first user, allocating the target prop to the first user;
if the prop front is oriented toward the second user, allocating the target prop to the second user;
and canceling the display of the target prop in the AR interactive content once the target prop has been allocated to the first user or the second user.
5. The method of claim 1, wherein displaying the AR interaction content at the current first view on the first terminal of the first user comprises:
responding to a trigger operation of a first user for a first terminal, and acquiring AR interaction data configured for a real scene to which a current position belongs based on the current position of the first user;
and displaying the AR interactive content of the first terminal under the current first visual angle on the first terminal based on the AR interactive data.
6. The method of claim 5, wherein the first user is determined to have applied the trigger operation to the first terminal by:
if the first user uses the first terminal to scan an interactive information code set in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene; or alternatively
if the first user starts a display interface corresponding to the AR interaction activity through the first terminal and the current position of the first user is determined to be in the real scene, determining that the first user applies the trigger operation to the first terminal in the real scene.
7. The method of claim 5, wherein after displaying the AR interaction content at the current first view on the first terminal of the first user, the method comprises:
in the process of presenting the AR interactive content, responding to the position change and/or the view angle change of the first terminal, and determining a changed second view angle of the first terminal;
determining AR interaction content at the second perspective based on the AR interaction data and the second perspective;
and dynamically switching the AR interactive content displayed on the first terminal under the first visual angle to the AR interactive content under the second visual angle based on the corresponding part of the AR interactive data between the first visual angle and the second visual angle.
8. The method of claim 5, wherein the obtaining AR interaction data configured for the real scene to which the current location belongs comprises:
sending a data request for acquiring the AR interaction data to a server;
receiving the AR interaction data fed back by the server, wherein the AR interaction data is generated according to each virtual article in a virtual scene corresponding to the real scene and the plurality of AR excitation props configured for the real scene, and contains the preset motion tracks of the AR excitation props in the real scene.
9. The method of claim 5, wherein the obtaining AR interaction data configured for the real scene to which the current location belongs comprises:
acquiring a plurality of AR excitation props configured for the real scene and a virtual scene which is constructed for the real scene in advance and corresponds to the real scene;
generating a preset motion track of the AR excitation prop in the virtual scene based on the initial motion position of the AR excitation prop in the virtual scene and each virtual article in the virtual scene;
and generating AR interaction data configured for the real scene based on the virtual scene, each AR excitation prop and the corresponding preset motion trail.
10. The method according to any of claims 5 to 8, wherein before displaying the AR interaction content corresponding to the first view on the first terminal, the method further comprises:
determining a trigger time of the trigger operation and an activity start time of the AR interaction activity;
if the trigger time is before the activity start time, determining a time difference between the trigger time and the activity start time;
displaying countdown information corresponding to the time difference on the first terminal.
11. The method of claim 1, wherein the first user's first preset action for a target prop of the plurality of AR actuating props is determined by:
in the process that each AR excitation prop moves according to its respective preset motion track, if an operation applied by the first user to any AR excitation prop is received, and the AR excitation prop has moved so that its front faces the first user at the time the operation is received, taking that AR excitation prop as the target prop and determining that the first user has applied the first preset operation to the target prop.
12. The method of any one of claims 1 to 4, wherein said canceling the display of the target prop in the AR interaction content comprises:
controlling the target prop to move in the AR interactive content to a state in which its front faces a target user, wherein the target user comprises the first user and the second user;
and displaying a prop allocation animation while the front faces the target user, and canceling the display of the target prop after the prop allocation animation finishes.
13. A terminal, wherein the terminal is a first terminal, the terminal comprising:
the interaction content display module is used for displaying AR interaction content under a current first visual angle on a first terminal of a first user, and the AR interaction content comprises a plurality of AR excitation props moving according to a preset motion track;
the preset operation response module is used for responding to a first preset operation of the first user for a target prop in the plurality of AR excitation props, and determining whether a second user applies a second preset operation to the target prop through a second terminal in a real scene where the first user is located;
and the target prop processing module is used for allocating the target prop to the first user and canceling the display of the target prop in the AR interactive content if a second preset operation is not applied to the target prop by a second user through a second terminal.
14. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the interaction method of any one of claims 1 to 12.
15. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the interaction method according to any one of claims 1 to 12.
CN202210531324.2A 2022-05-16 2022-05-16 Interaction method, terminal, equipment and storage medium Pending CN114911345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210531324.2A CN114911345A (en) 2022-05-16 2022-05-16 Interaction method, terminal, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210531324.2A CN114911345A (en) 2022-05-16 2022-05-16 Interaction method, terminal, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114911345A true CN114911345A (en) 2022-08-16

Family

ID=82766113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210531324.2A Pending CN114911345A (en) 2022-05-16 2022-05-16 Interaction method, terminal, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114911345A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115545622A (en) * 2022-11-30 2022-12-30 中建安装集团有限公司 Engineering material storage management system and method based on digital construction
CN115545622B (en) * 2022-11-30 2023-04-07 中建安装集团有限公司 Engineering material storage management system and method based on digital construction

Similar Documents

Publication Publication Date Title
CN107852573B (en) Mixed reality social interactions
US10325407B2 (en) Attribute detection tools for mixed reality
CN114625304B (en) Virtual reality and cross-device experience
CN109802931B (en) Communication processing method, terminal and storage medium
CN112243583B (en) Multi-endpoint mixed reality conference
KR102382362B1 (en) Providing a tele-immersive experience using a mirror metaphor
US9952820B2 (en) Augmented reality representations across multiple devices
US10192363B2 (en) Math operations in mixed or virtual reality
JP2022537614A (en) Multi-virtual character control method, device, and computer program
KR102402580B1 (en) Image processing system and method in metaverse environment
JP5295416B1 (en) Image processing apparatus, image processing method, and image processing program
KR20130029683A (en) Mobile terminal, server and method for forming communication channel using augmented reality
EP2814000A1 (en) Image processing apparatus, image processing method, and program
GB2526245A (en) Sharing content
KR20090087807A (en) Method for implementing augmented reality
CN109690540A (en) The access control based on posture in virtual environment
KR20200067537A (en) System and method for providing a virtual environmental conference room
CN114911345A (en) Interaction method, terminal, equipment and storage medium
CN114863014B (en) Fusion display method and device for three-dimensional model
CN109389687A (en) Information processing method, device, equipment and readable storage medium storing program for executing based on AR
CN110192169A (en) Menu treating method, device and storage medium in virtual scene
CN110276794A (en) Information processing method, information processing unit, terminal device and server
US20210034318A1 (en) Shared volume computing architecture of a virtual reality environment and related systems and methods
CN114489337A (en) AR interaction method, device, equipment and storage medium
CN108092950B (en) AR or MR social method based on position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination