CN117224947A - AR interaction method, AR interaction device, electronic equipment and computer-readable storage medium


Info

Publication number: CN117224947A
Application number: CN202311397931.5A
Authority: CN (China)
Prior art keywords: user, virtual, target building, doll, real
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 干啸天, 王立强, 陈泽嘉, 孙中伦, 刘一锋, 周涛, 张迪
Original and current assignee: Maibijing Singapore Ltd
Application filed by Maibijing Singapore Ltd
Priority to CN202311397931.5A
Publication of CN117224947A


Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an AR interaction method, an AR interaction device, an electronic device, and a computer-readable storage medium. The AR interaction method includes: in response to a first live-action start instruction issued by a first user, displaying a first AR real space and a virtual doll setting control in a first graphical user interface; and, when the first AR real space is based on a first real image obtained by photographing a target building in real time, in response to an adding operation performed by the first user on the virtual doll setting control, attaching a first virtual doll controlled by the first user to the target building according to the size of the target building, so that a second AR real space is displayed in a second graphical user interface after a second user issues a second live-action start instruction. The second AR real space is formed by superimposing a second real image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building. By this method, the exposure of the virtual doll in the game scene is improved.

Description

AR interaction method, AR interaction device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of interaction technologies, and in particular, to an AR interaction method, apparatus, electronic device, and computer readable storage medium.
Background
With the rapid development of Internet technology, the variety and style of online games are becoming increasingly rich. In an online game, each user typically controls at least one virtual doll. When multiple users place their virtual dolls into the same game scene, the doll controlled by each user is generally displayed in that scene, so multiple virtual dolls are displayed in the game scene. In this situation, the server typically pushes each virtual doll in the game scene only to the user terminal of the user who controls it; that is, a user can see only his or her own virtual doll in the game scene, which keeps the exposure of virtual dolls in the game scene low.
Disclosure of Invention
Accordingly, the present application is directed to an AR interaction method, apparatus, electronic device, and computer readable storage medium for improving exposure of virtual dolls in a game scene.
In a first aspect, an embodiment of the present application provides an AR interaction method, including:
in response to a first live-action start instruction issued by a first user, displaying at least part of a first AR real space in a first graphical user interface, and displaying a virtual doll setting control in a floating manner in the first graphical user interface;
when the first AR real space is based on a first real image obtained by photographing a target building in real time, in response to an adding operation performed by the first user on the virtual doll setting control, attaching a first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building, so that at least part of a second AR real space is displayed in a second graphical user interface after a second user issues a second live-action start instruction; the second AR real space is formed by superimposing a second real image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building; and only one virtual doll is allowed to be attached to the target building at any one time.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where each preset position on the target building in the first AR real space is provided with a corresponding AR virtual anchor point, and the attaching, in response to the adding operation performed by the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building includes:
in response to a selection operation performed by the first user on an AR target virtual anchor point among the AR virtual anchor points, determining the to-be-attached position of the first virtual doll from the preset positions on the target building in the first AR real space; and
in response to the adding operation performed by the first user on the virtual doll setting control, attaching the first virtual doll controlled by the first user to the to-be-attached position on the target building in the first AR real space according to the size of the target building.
With reference to the first possible implementation manner of the first aspect, the embodiment of the present application provides a second possible implementation manner of the first aspect, wherein the attaching, in response to an adding operation of the first user to a virtual doll setting control, the first virtual doll controlled by the first user to the to-be-attached position on the target building in the first AR real space according to the size of the target building includes:
in response to the adding operation performed by the first user on the virtual doll setting control, floatingly displaying the first virtual doll to be added and virtual doll pose adjustment controls in the first graphical user interface; and
in response to a pose adjustment operation performed by the first user on a virtual doll pose adjustment control, attaching the first virtual doll controlled by the first user to the to-be-attached position on the target building in the first AR real space, according to the size of the target building and in the target pose corresponding to the pose adjustment operation.
With reference to the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, where the attaching, in response to the adding operation performed by the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building includes:
in response to the adding operation performed by the first user on the virtual doll setting control, displaying, in the first graphical user interface, a virtual resource consumption control and the amount of virtual resources required to attach the first virtual doll to the target building; and
in response to a virtual resource consumption operation performed by the first user on the virtual resource consumption control based on the amount of virtual resources, attaching the first virtual doll controlled by the first user to the target building according to the size of the target building.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where the attaching, in response to the adding operation performed by the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building includes:
in response to the adding operation performed by the first user on the virtual doll setting control, displaying, in the first graphical user interface, a virtual resource input control, a virtual resource consumption control, and the current highest amount of virtual resources entered by other users in the current period; and
within the current period, if the amount of virtual resources entered by the first user in the virtual resource input control is higher than the current highest amount entered by other users, attaching, in response to a virtual resource consumption operation performed by the first user on the virtual resource consumption control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building.
With reference to the first aspect, the embodiment of the present application provides a fifth possible implementation manner of the first aspect, where the method further includes:
starting timing when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building, and, within a protection period, allowing only the first virtual doll to be attached to the target building in the first AR real space;
when the protection period has been exceeded and the expiration period has not been reached, if a third user performs an adding operation on a virtual doll setting control displayed in a third graphical user interface, deleting the first virtual doll from the target building and attaching a third virtual doll controlled by the third user to the target building according to the size of the target building; and
when the expiration period has been exceeded and no third user has performed an adding operation on a virtual doll setting control displayed in a third graphical user interface, deleting the first virtual doll from the target building.
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, where the method further includes:
when the first AR real space is formed by superimposing the first real image, obtained by photographing the target building in real time, with another virtual doll, responding to a first interaction instruction issued by the first user for the other virtual doll, so that the first user performs information interaction, through the interaction information corresponding to the first interaction instruction, with a fourth user who controls the other virtual doll.
With reference to the first aspect, the embodiment of the present application provides a seventh possible implementation manner of the first aspect, where the method further includes:
after receiving a second interaction instruction for the first virtual doll issued by the second user, displaying, in the first graphical user interface, the prompt information corresponding to the second interaction instruction, so that the first user performs information interaction with the second user through the prompt information corresponding to the second interaction instruction.
With reference to the seventh possible implementation manner of the first aspect, an embodiment of the present application provides an eighth possible implementation manner of the first aspect, where the second interaction instruction is a barrage send instruction, and the displaying, in the first graphical user interface, the prompt information corresponding to the second interaction instruction issued by the second user includes:
displaying, in the first graphical user interface, the barrage information corresponding to the barrage send instruction issued by the second user.
In a second aspect, an embodiment of the present application further provides an AR interaction device, including:
the display module is configured to: in response to a first live-action start instruction issued by a first user, display at least part of a first AR real space in a first graphical user interface, and display a virtual doll setting control in a floating manner in the first graphical user interface;
the attaching module is configured to: when the first AR real space is based on a first real image obtained by photographing a target building in real time, attach, in response to an adding operation performed by the first user on the virtual doll setting control, a first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building, so that at least part of a second AR real space is displayed in a second graphical user interface after a second user issues a second live-action start instruction; the second AR real space is formed by superimposing a second real image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building; and only one virtual doll is allowed to be attached to the target building at any one time.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation manner of the second aspect, where each preset position on the target building in the first AR real space is provided with a corresponding AR virtual anchor point; the attaching module is specifically configured to, when responding to an adding operation of the first user on a virtual doll setting control, attach a first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building:
respond to a selection operation performed by the first user on an AR target virtual anchor point among the AR virtual anchor points, to determine the to-be-attached position of the first virtual doll from the preset positions on the target building in the first AR real space; and
in response to the adding operation performed by the first user on the virtual doll setting control, attach the first virtual doll controlled by the first user to the to-be-attached position on the target building in the first AR real space according to the size of the target building.
With reference to the first possible implementation manner of the second aspect, an embodiment of the present application provides a second possible implementation manner of the second aspect, where the attaching module, when attaching, in response to the adding operation performed by the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the to-be-attached position on the target building in the first AR real space according to the size of the target building, is specifically configured to:
in response to the adding operation performed by the first user on the virtual doll setting control, floatingly display the first virtual doll to be added and virtual doll pose adjustment controls in the first graphical user interface; and
in response to a pose adjustment operation performed by the first user on a virtual doll pose adjustment control, attach the first virtual doll controlled by the first user to the to-be-attached position on the target building in the first AR real space, according to the size of the target building and in the target pose corresponding to the pose adjustment operation.
With reference to the second aspect, an embodiment of the present application provides a third possible implementation manner of the second aspect, wherein the attaching module is specifically configured to, when responding to an adding operation of the first user to a virtual doll setting control, attach a first virtual doll controlled by the first user to the target building in the first AR real space according to a size of the target building:
in response to the adding operation performed by the first user on the virtual doll setting control, display, in the first graphical user interface, a virtual resource consumption control and the amount of virtual resources required to attach the first virtual doll to the target building; and
in response to a virtual resource consumption operation performed by the first user on the virtual resource consumption control based on the amount of virtual resources, attach the first virtual doll controlled by the first user to the target building according to the size of the target building.
With reference to the second aspect, an embodiment of the present application provides a fourth possible implementation manner of the second aspect, wherein the attaching module is specifically configured to, when responding to an adding operation of the first user to a virtual doll setting control, attach a first virtual doll controlled by the first user to the target building in the first AR real space according to a size of the target building:
in response to the adding operation performed by the first user on the virtual doll setting control, display, in the first graphical user interface, a virtual resource input control, a virtual resource consumption control, and the current highest amount of virtual resources entered by other users in the current period; and
within the current period, if the amount of virtual resources entered by the first user in the virtual resource input control is higher than the current highest amount entered by other users, attach, in response to a virtual resource consumption operation performed by the first user on the virtual resource consumption control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building.
With reference to the second aspect, an embodiment of the present application provides a fifth possible implementation manner of the second aspect, where the device further includes:
a protection module, configured to start timing when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building, and to allow, within a protection period, only the first virtual doll to be attached to the target building in the first AR real space;
a first deleting module, configured to, when the protection period has been exceeded and the expiration period has not been reached, if a third user performs an adding operation on a virtual doll setting control displayed in a third graphical user interface, delete the first virtual doll from the target building and attach a third virtual doll controlled by the third user to the target building according to the size of the target building; and
a second deleting module, configured to delete the first virtual doll from the target building when the expiration period has been exceeded and no third user has performed an adding operation on a virtual doll setting control displayed in a third graphical user interface.
With reference to the second aspect, an embodiment of the present application provides a sixth possible implementation manner of the second aspect, where the device further includes:
a first interaction module, configured to, when the first AR real space is formed by superimposing the first real image, obtained by photographing the target building in real time, with another virtual doll, respond to a first interaction instruction issued by the first user for the other virtual doll, so that the first user performs information interaction, through the interaction information corresponding to the first interaction instruction, with a fourth user who controls the other virtual doll.
With reference to the second aspect, an embodiment of the present application provides a seventh possible implementation manner of the second aspect, where the device further includes:
a second interaction module, configured to, after receiving a second interaction instruction for the first virtual doll issued by the second user, display, in the first graphical user interface, the prompt information corresponding to the second interaction instruction, so that the first user performs information interaction with the second user through the prompt information corresponding to the second interaction instruction.
With reference to the seventh possible implementation manner of the second aspect, an embodiment of the present application provides an eighth possible implementation manner of the second aspect, where the second interaction instruction is a barrage send instruction; and
when displaying, in the first graphical user interface, the prompt information corresponding to the second interaction instruction issued by the second user, the second interaction module is specifically configured to:
display, in the first graphical user interface, the barrage information corresponding to the barrage send instruction issued by the second user.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the possible implementations of the first aspect described above.
According to the AR interaction method, the AR interaction device, the electronic device, and the computer-readable storage medium provided by the embodiments of the application, when a first user issues a first live-action start instruction, at least part of a first AR real space is displayed in the first user's first graphical user interface. If the first AR real space contains only a first real image obtained by photographing a target building in real time, the first user can, through the virtual doll setting control floatingly displayed in the first graphical user interface, attach the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building. Afterwards, whenever a second user issues a second live-action start instruction, a second AR real space is displayed in that second user's second graphical user interface; the second AR real space is formed by superimposing a second real image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building. That is, every second user can see the first virtual doll attached to the target building through the second graphical user interface. Moreover, because only one virtual doll is allowed to be attached to the target building at any one time, once the first user has attached the first virtual doll to the target building, the first virtual doll is the only doll each second user can see attached to it. The first virtual doll can therefore be seen by more second users, which helps improve its exposure and enables more second users to interact with the first user through the first virtual doll.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a flowchart of an AR interaction method provided by an embodiment of the present application;
FIG. 2 shows a schematic diagram of a first AR real space and a virtual doll setting control provided by an embodiment of the present application;
FIG. 3 shows a schematic diagram of a first virtual doll attached to a target building in a first AR real space, provided by an embodiment of the present application;
FIG. 4 shows a schematic diagram of AR virtual anchor points provided by an embodiment of the present application;
FIG. 5 shows a schematic diagram of a first virtual doll and virtual doll pose adjustment controls provided by an embodiment of the present application;
FIG. 6 shows a schematic diagram of a first payment interface provided by an embodiment of the present application;
FIG. 7 shows a schematic diagram of a second payment interface provided by an embodiment of the present application;
FIG. 8 shows a schematic structural diagram of an AR interaction device provided by an embodiment of the present application;
FIG. 9 shows a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
With the rapid development of Internet technology, the variety and style of online games are becoming increasingly rich. In an online game, each user typically controls at least one virtual doll. When multiple users place their virtual dolls into the same game scene, the doll controlled by each user is generally displayed in that scene, so multiple virtual dolls are displayed in the game scene. In this situation, the server typically pushes each virtual doll in the game scene only to the user terminal of the user who controls it; that is, a user can see only his or her own virtual doll in the game scene, which keeps the exposure of virtual dolls in the game scene low.
Moreover, while big-data algorithms can push content or other users that may interest a user, the other users pushed to a given user may be located anywhere, and are mostly far away from that user, so the pushing is highly random.
In addition, at present, when a user shoots with the camera of a user terminal, the graphical user interface only displays the real scenery, buildings, and the like captured by the camera, so the camera serves a single function.
On this basis, the embodiments of the present application provide an AR interaction method, an AR interaction device, an electronic device, and a computer-readable storage medium, so as to improve the exposure of a virtual doll in a game scene, enhance the accuracy of information pushing, and enrich the functions of the existing camera. These are described below by way of embodiments.
To facilitate understanding of the present embodiment, the AR interaction method disclosed in an embodiment of the present application is first described in detail. Fig. 1 shows a flowchart of an AR interaction method provided by an embodiment of the present application; as shown in Fig. 1, the method includes the following steps S101-S102:
S101: in response to a first live-action start instruction issued by the first user, display at least part of the first AR real space in the first graphical user interface, and display a virtual doll setting control in a floating manner in the first graphical user interface.
In this embodiment, the method is applied to a first smart terminal held by the first user, which may be a mobile smart terminal such as a mobile phone or a head-mounted AR (Augmented Reality) device (e.g., AR glasses).
The first user may issue the first live-action start instruction in any manner, for example via a physical key, voice control, a virtual key, eye-movement control, or a gesture instruction. A physical key is a mechanical key on the first smart terminal, such as a push key or a toggle key, which issues an instruction through the physical displacement of the key. Voice control allows the user to issue an instruction by speaking; for example, the user may say "start AR mode" to issue the first live-action start instruction; other phrases may also be used, and the specific phrase content can be preset. A virtual key is a key displayed (or, in some cases, not displayed) on the first smart terminal; the user can tap it by touch to issue the first live-action start instruction. Eye-movement control issues instructions by detecting the user's eye movements with an eye-movement sensor. There are many other ways to issue the first live-action start instruction, which are not enumerated here.
Virtual keys are commonly used on hand-held mobile terminals, such as mobile phones; eye-movement control is commonly used on head-mounted mobile terminals, such as AR glasses.
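As a minimal illustrative sketch (the patent discloses no implementation; every name below is an assumption), the different input modalities can be treated as interchangeable triggers for the same live-action start instruction:

    # Hedged sketch: several input modalities all issue the same
    # live-action start instruction. Names are illustrative only.
    def issue_live_action_start():
        print("first live-action start instruction issued")

    MODALITY_HANDLERS = {
        "physical_key": issue_live_action_start,  # mechanical push/toggle key
        "voice": issue_live_action_start,         # e.g. the preset phrase "start AR mode"
        "virtual_key": issue_live_action_start,   # on-screen touch key (hand-held terminals)
        "eye_movement": issue_live_action_start,  # eye-movement sensor (head-mounted AR)
        "gesture": issue_live_action_start,
    }

    def on_user_input(modality: str) -> None:
        handler = MODALITY_HANDLERS.get(modality)
        if handler is not None:
            handler()

    on_user_input("voice")  # -> first live-action start instruction issued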
After the first user issues the first live-action start instruction through the first smart terminal, at least part of the first AR real space is displayed in the first graphical user interface of the first smart terminal.
In this embodiment, the first smart terminal is provided with both an original camera and an AR camera: the original camera captures the first real image of the real world in real time, and the AR camera captures the virtual dolls of the virtual world. Since AR technology fuses virtual information with the real world, that is, superimposes virtual dolls on the real world, the first AR real space displayed in the first graphical user interface may contain both the first real image and a virtual doll, or may contain only the first real image.
A virtual doll setting control is also displayed in the first graphical user interface in this embodiment; it is typically displayed on the real image at all times, so that the first user can always see and operate it.
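The layered composition described above can be sketched as follows; the classes and string placeholders stand in for real camera frames and a real renderer and are assumptions, not part of the patent:

    # Hedged sketch: an AR real space frame = real image from the original
    # camera + zero or more virtual dolls + the floating setting control.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualDoll:
        owner: str
        pose: str = "standing"

    @dataclass
    class ARRealSpace:
        real_image: str                            # frame from the original camera
        dolls: list = field(default_factory=list)  # content from the AR camera layer

        def render(self) -> str:
            layers = [self.real_image] + [f"doll:{d.owner}" for d in self.dolls]
            # the virtual doll setting control floats above the real image at all times
            layers.append("control:virtual_doll_setting")
            return " | ".join(layers)

    space = ARRealSpace("frame_of_target_building")
    print(space.render())   # only the real image plus the floating control
    space.dolls.append(VirtualDoll(owner="first_user"))
    print(space.render())   # now with the superimposed virtual doll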
S102: when the first AR real space is based on a first real image obtained by photographing a target building in real time, in response to the adding operation performed by the first user on the virtual doll setting control, attach the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building, so that at least part of a second AR real space is displayed in a second graphical user interface after a second user issues a second live-action start instruction. The second AR real space is formed by superimposing a second real image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building. Only one virtual doll is allowed to be attached to the target building at any one time.
Fig. 2 is a schematic diagram of the first AR real space and the virtual doll setting control provided by an embodiment of the present application. The first user photographs the target building with the first smart terminal. When no other virtual doll is currently attached to the target building, as shown in Fig. 2, the first AR real space displayed in the first graphical user interface is based only on the first real image obtained by photographing the target building in real time; that is, the first AR real space contains only the real-world scene at this moment. The first smart terminal may then, in response to the adding operation performed by the first user on the virtual doll setting control, attach the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building. Fig. 3 is a schematic diagram of the first virtual doll attached to the target building in the first AR real space, provided by an embodiment of the present application.
The adding operation may be a touch operation, an eye-movement operation, voice control, or key control. After the first user performs the adding operation on the virtual doll setting control, the first virtual doll controlled by the first user is added to the target building in the first AR real space, as shown in Fig. 3. Thereafter, for any second user who issues a second live-action start instruction through a second smart terminal, at least part of the second AR real space is displayed in the second graphical user interface of that terminal. If the second user photographs the target building with the second smart terminal, the second AR real space is formed by superimposing the second real image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building; that is, the second user can see the first virtual doll attached to the target building.
When the first user and the second user photograph the target building from different angles, the target building and the first virtual doll are displayed at different angles in the first AR real space and the second AR real space.
The first virtual doll may take various forms, which come from two sources: a form set by the first user, or a form set automatically by the system. If set by the first user, it may take any form, such as a cartoon character, a simulated human figure, or an everyday object. If set automatically by the system, the form may be selected, or automatically generated, according to the user's gender and real appearance.
In this embodiment, the second user may be any user other than the first user. The target building refers to a landmark building of a city. In the first AR real space and the second AR real space, the size of the first virtual doll is determined by the size of the target building; specifically, the first virtual doll is equal in height to the target building.
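The sizing rule just stated reduces to a single scale computation. A minimal sketch, with invented heights and field names:

    # Hedged sketch: scale the doll so its height equals the building's height.
    from dataclasses import dataclass

    @dataclass
    class Building:
        name: str
        height_m: float

    @dataclass
    class Doll:
        base_height_m: float
        scale: float = 1.0

    def attach_scaled(doll: Doll, building: Building) -> Doll:
        # scale factor chosen so that doll height == building height
        doll.scale = building.height_m / doll.base_height_m
        return doll

    doll = attach_scaled(Doll(base_height_m=1.8), Building("landmark_tower", 320.0))
    print(round(doll.scale, 1))  # 177.8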
In the solution provided by the application, after the first user attaches the first virtual doll to the target building, every second user near the target building who issues a second live-action start instruction and photographs the target building will see the first virtual doll attached to the target building in the second graphical user interface. This approach has at least the following advantages. (1) Since only one virtual doll is allowed to be attached to the target building at any one time, after the first user attaches the first virtual doll, the first virtual doll is the only doll each second user can see attached to the target building; it can thus be seen by more second users, which helps improve its exposure and enables more second users to interact with the first user through it. (2) Since a user can photograph the target building only when near it, after the first virtual doll is attached, only second users near the target building can see it in their graphical user interfaces. A second user therefore knows that the first user is nearby, or that the two users are in the same area and are paying attention to the same content (for example, the target building); pushing the first virtual doll to such second users is accurate, which helps improve the accuracy of information pushing. (3) By adding an AR camera on top of the smart terminal's original camera, the AR real space displayed in the graphical user interface contains not only real-world scenery but also virtual dolls, which helps enrich the functions of the camera.
It should be noted that, while steps S101-S102 are performed, the first user and the second user may be friends (having added each other in advance) or non-friends; in other words, the users' friend status does not affect the execution of these steps.
In one possible implementation, each preset position on the target building in the first AR real space is provided with a corresponding AR virtual anchor point. When step S102 is performed, that is, when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building in response to the adding operation performed by the first user on the virtual doll setting control, the following steps S1021-S1022 may specifically be performed:
S1021: in response to a selection operation performed by the first user on an AR target virtual anchor point among the AR virtual anchor points, determine the to-be-attached position of the first virtual doll from the preset positions on the target building in the first AR real space.
Fig. 4 is a schematic diagram of AR virtual anchor points provided by an embodiment of the present application. As shown in Fig. 4, the target building in the first AR real space includes a plurality of AR virtual anchor points, such as anchor points A1, A2, and A3. Different AR virtual anchor points are located at different positions on the target building, and each AR virtual anchor point characterizes a preset position on the target building.
The selection operation may specifically be a touch operation. As shown in Fig. 4, the first user may select one AR target virtual anchor point (for example, AR virtual anchor point A1) from the multiple AR virtual anchor points according to the preset position of each anchor point on the target building; the preset position of the selected AR target virtual anchor point on the target building is then determined as the to-be-attached position of the first virtual doll.
S1022: in response to the adding operation performed by the first user on the virtual doll setting control, attach the first virtual doll controlled by the first user to the to-be-attached position on the target building in the first AR real space according to the size of the target building.
In this embodiment, after the to-be-attached position of the first virtual doll is determined and the first user performs the adding operation on the virtual doll setting control, the first virtual doll controlled by the first user is added to the to-be-attached position on the target building in the first AR real space (for example, the preset position of AR virtual anchor point A1 on the target building), as shown in Fig. 3.
After the first virtual doll is added to the to-be-attached position on the target building in the first AR real space, for any second user who issues a second live-action start instruction through a second smart terminal, at least part of the second AR real space is displayed in the second graphical user interface of that terminal; if the second user photographs the target building with the second smart terminal, the second user can see the first virtual doll attached at the to-be-attached position on the target building.
In this solution, therefore, the to-be-attached position of the first virtual doll on the target building can be selected according to the first user's preference, which improves the flexibility of attachment.
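A minimal sketch of steps S1021-S1022 under assumed data structures; the anchor IDs follow Fig. 4, while the coordinates are invented for illustration:

    # Hedged sketch: each AR virtual anchor point maps to a preset position
    # on the target building; the selected anchor fixes the to-be-attached
    # position of the doll (S1021), where it is then attached (S1022).
    ANCHORS = {
        "A1": (0.0, 120.0),   # e.g. near the top of the facade
        "A2": (10.0, 60.0),   # mid level
        "A3": (-5.0, 10.0),   # near the base
    }

    def select_attach_position(selected_anchor: str) -> tuple:
        # S1021: the anchor selected by the first user becomes the position
        return ANCHORS[selected_anchor]

    def attach_doll(doll_id: str, selected_anchor: str) -> dict:
        # S1022: attach the doll at that position (scaled to the building
        # size, as in the earlier sketch)
        x, y = select_attach_position(selected_anchor)
        return {"doll": doll_id, "anchor": selected_anchor, "position": (x, y)}

    print(attach_doll("first_virtual_doll", "A1"))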
In one possible implementation, when step S1022 is performed, that is, when the first virtual doll controlled by the first user is attached to the to-be-attached position on the target building in the first AR real space according to the size of the target building in response to the adding operation performed by the first user on the virtual doll setting control, the following steps S10221-S10222 may specifically be performed:
S10221: in response to the adding operation performed by the first user on the virtual doll setting control, floatingly display the first virtual doll to be added and virtual doll pose adjustment controls in the first graphical user interface.
Fig. 5 is a schematic diagram of the first virtual doll and the virtual doll pose adjustment controls provided by an embodiment of the present application. As shown in Fig. 5, a plurality of virtual doll pose adjustment controls are included, e.g., pose adjustment controls B1, B2, B3, ..., B9. Different pose adjustment controls correspond to different poses of the first virtual doll, which may be any of the following: a standing pose, a squatting pose, a sitting pose, a clapping pose, a happy pose, a sad pose, an angry pose, a surprised pose, a charming pose, and so on; the application is not limited in this respect.
S10222: and responding to the gesture adjustment operation of the first user on the virtual doll gesture adjustment control, and attaching the first virtual doll controlled by the first user to a position to be attached on a target building in the first AR real space according to the size of the target building in the first AR real space and the target gesture corresponding to the gesture adjustment operation.
In this embodiment, the gesture adjustment operation may be a selection operation, and the first user may attach the first virtual doll controlled by the first user to the position to be attached on the target building in the first AR real space according to the size of the target building in the first AR real space and the target gesture corresponding to the target virtual doll gesture adjustment control, with respect to the selection operation (for example, click selection operation) of the target virtual doll gesture adjustment control in each virtual doll gesture adjustment control.
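A minimal sketch of this control-to-pose mapping; the B1-B9 control IDs follow Fig. 5, but the particular assignment of poses to controls is an assumption:

    # Hedged sketch: each pose adjustment control maps to one target pose,
    # applied to the doll when it is attached (S10221-S10222).
    POSE_CONTROLS = {
        "B1": "standing",
        "B2": "squatting",
        "B3": "sitting",
        "B4": "clapping",
        "B5": "happy",
        "B6": "sad",
        "B7": "angry",
        "B8": "surprised",
        "B9": "charming",
    }

    def apply_pose(doll: dict, selected_control: str) -> dict:
        doll["pose"] = POSE_CONTROLS[selected_control]
        return doll

    print(apply_pose({"doll": "first_virtual_doll"}, "B4"))  # pose: clapping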
In one possible implementation, when step S102 is performed, that is, when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building in response to the adding operation performed by the first user on the virtual doll setting control, the following steps may specifically be performed:
S1023: in response to the adding operation performed by the first user on the virtual doll setting control, display, in the first graphical user interface, a virtual resource consumption control and the amount of virtual resources required to attach the first virtual doll to the target building.
In this embodiment, Fig. 6 is a schematic diagram of a first payment interface provided by an embodiment of the present application. As shown in Fig. 6, after the first user performs the adding operation on the virtual doll setting control, a payment pop-up window is displayed in the first graphical user interface, containing a virtual resource consumption control and the amount of virtual resources required to attach the first virtual doll to the target building.
S1024: in response to a virtual resource consumption operation performed by the first user on the virtual resource consumption control based on the amount of virtual resources, attach the first virtual doll controlled by the first user to the target building according to the size of the target building.
The virtual resource consumption operation may be a click on the virtual resource consumption control: the first user clicks the control to pay the virtual resources, and after the payment succeeds, the first virtual doll controlled by the first user is attached to the target building according to the size of the target building.
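A minimal sketch of this pay-then-attach flow; wallet balances, the required amount, and the error handling are illustrative assumptions:

    # Hedged sketch: attachment succeeds only after the required amount of
    # virtual resources is consumed (S1023-S1024).
    class PaymentError(Exception):
        pass

    def consume_and_attach(wallet: dict, required_amount: int) -> str:
        if wallet["balance"] < required_amount:
            raise PaymentError("insufficient virtual resources")
        wallet["balance"] -= required_amount  # virtual resource consumption
        return "first_virtual_doll attached to target building"

    wallet = {"balance": 500}
    print(consume_and_attach(wallet, required_amount=200))
    print(wallet["balance"])  # 300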
In another possible implementation, when step S102 is performed, that is, when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building in response to the adding operation performed by the first user on the virtual doll setting control, the following steps may specifically be performed:
S1025: in response to the adding operation performed by the first user on the virtual doll setting control, display, in the first graphical user interface, a virtual resource input control, a virtual resource consumption control, and the current highest amount of virtual resources entered by other users in the current period.
In this embodiment, Fig. 7 is a schematic diagram of a second payment interface provided by an embodiment of the present application. After the first user performs the adding operation on the virtual doll setting control, the interface shown in Fig. 7 is displayed in the first graphical user interface; it includes at least a virtual resource input control, a virtual resource consumption control, the current highest amount of virtual resources entered by other users in the current period (e.g., 200), as well as the first user's user name, the name of the target building, the remaining time of the current period (e.g., 1 minute 30 seconds), and so on.
S1026: within the current period, if the amount of virtual resources entered by the first user in the virtual resource input control is higher than the current highest amount entered by other users, then, in response to a virtual resource consumption operation performed by the first user on the virtual resource consumption control, attach the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building.
The first user may enter, in the virtual resource input control, the amount of virtual resources he or she is willing to pay. When the current period ends, if the amount entered by the first user is higher than the current highest amount entered by other users, the first user obtains the payment qualification. The first user may then perform a virtual resource consumption operation on the virtual resource consumption control (for example, a virtual resource payment operation), and the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building.
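A minimal sketch of this highest-bid rule; tie handling and the settlement flow are assumptions the text leaves open:

    # Hedged sketch: within the current period, users enter bids of virtual
    # resources; when the period ends, only the highest bidder may perform
    # the consumption operation and attach their doll (S1025-S1026).
    def close_bidding(bids: dict) -> str:
        # bids: user -> amount entered in the virtual resource input control
        return max(bids, key=bids.get)

    bids = {"first_user": 250, "other_user_a": 200, "other_user_b": 180}
    if close_bidding(bids) == "first_user":
        # first_user now consumes 250 virtual resources via the consumption control
        print("first_virtual_doll attached to target building")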
In one possible implementation, after step S102 is performed, the following steps S1031-S1033 may further be performed:
S1031: start timing when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building, and, within a protection period, allow only the first virtual doll to be attached to the target building in the first AR real space.
In this embodiment, timing starts when the first virtual doll is attached to the target building in the first AR real space; within the protection period (for example, 10 minutes), only the first virtual doll is allowed to be attached to the target building, and no other virtual doll may be attached to it.
S1032: when the protection period has been exceeded and the expiration period has not been reached, if a third user performs an adding operation on the virtual doll setting control displayed in a third graphical user interface, delete the first virtual doll from the target building and attach a third virtual doll controlled by the third user to the target building according to the size of the target building.
In this embodiment, the third user may be any user other than the first user; the third user may or may not be the second user.
The third user may issue a third live-action start instruction on a third smart terminal; at least part of a third AR real space is then displayed in the third graphical user interface of that terminal, with the virtual doll setting control floatingly displayed in the third graphical user interface. The third AR real space is formed by superimposing a third real image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building; the third user can thus see the first virtual doll attached to the target building.
When the protection period (e.g., 10 minutes) has been exceeded and the expiration period (e.g., 20 minutes) has not been reached, the third virtual doll controlled by the third user may preempt the first virtual doll's right to be attached to the target building. Specifically, when the third user wants to preempt that right, the third user performs an adding operation on the virtual doll setting control displayed in the third graphical user interface. The first virtual doll is then deleted from the target building in the AR real space corresponding to each user (for example, from the first, second, and third AR real spaces), and the third virtual doll controlled by the third user is attached to the target building according to the size of the target building. Afterwards, whenever a user other than the third user issues a live-action start instruction, at least part of an AR real space is displayed in that user's graphical user interface, formed by superimposing a real image, obtained by photographing the target building in real time, with the third virtual doll attached to the target building; that is, users other than the third user can see the third user's virtual doll through their own graphical user interfaces.
S1033: when the expiration period has been exceeded and no third user has performed an adding operation on the virtual doll setting control displayed in a third graphical user interface, delete the first virtual doll from the target building.
In this embodiment, when the expiration period (e.g., 20 minutes) has been exceeded and no third user has performed an adding operation on the virtual doll setting control displayed in a third graphical user interface, the first virtual doll is deleted from the target building in every AR real space. From then on, after a user issues a live-action start instruction, only the real target building can be seen through the graphical user interface; the first virtual doll can no longer be seen.
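Steps S1031-S1033 amount to a small timer-driven state machine. A minimal sketch, reusing the 10/20-minute example values from the text (the behavior after expiration when an adding operation does occur is an assumption):

    # Hedged sketch: who is attached to the target building, as a function of
    # elapsed time and whether a third user performed an adding operation.
    PROTECTION_S = 10 * 60   # protection period (example value)
    EXPIRATION_S = 20 * 60   # expiration period (example value)

    def resolve(attached_doll, elapsed_s, preempting_doll=None):
        if elapsed_s <= PROTECTION_S:
            return attached_doll       # S1031: protected, no preemption allowed
        if preempting_doll is not None:
            return preempting_doll     # S1032: third user's doll preempts
        if elapsed_s > EXPIRATION_S:
            return None                # S1033: expired with no new add -> deleted
        return attached_doll           # past protection, not yet expired

    print(resolve("first_virtual_doll", 5 * 60, "third_virtual_doll"))   # first_virtual_doll
    print(resolve("first_virtual_doll", 15 * 60, "third_virtual_doll"))  # third_virtual_doll
    print(resolve("first_virtual_doll", 25 * 60))                        # None (deleted)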
In one possible implementation, after step S101 is performed, the following step may further be performed: when the first AR real space is formed by superimposing the first real image, obtained by photographing the target building in real time, with another virtual doll, in response to a first interaction instruction issued by the first user for that other virtual doll, enable the first user to perform information interaction, through the interaction information corresponding to the first interaction instruction, with a fourth user who controls the other virtual doll.
In this embodiment, after the first user issues the first live-action start instruction, if the first user photographs the target building with the first smart terminal and the first AR real space displayed in the first graphical user interface is formed by superimposing the first real image, obtained by photographing the target building in real time, with another virtual doll, this indicates that a fourth user has already attached his or her own virtual doll (i.e., the other virtual doll) to the target building. If the first user wants to interact with the fourth user corresponding to the other virtual doll, the first user may issue a first interaction instruction for the other virtual doll; the interaction information corresponding to the first interaction instruction is then displayed in the fourth user's fourth graphical user interface, and if the fourth user also wishes to communicate with the first user, the fourth user may interact with the first user through that information. The information interaction here may take the form of a temporary information interaction interface, adding each other as friends, and so on. The fourth user may be any user other than the first user.
In this embodiment, the first interaction instruction may be a private message instruction, a barrage (bullet-screen) instruction, or the like. When the first interaction instruction is a private message instruction, the corresponding interaction information may be the specific private message content; when it is a barrage instruction, the corresponding interaction information may be the specific barrage content.
In this embodiment, since the fourth user has attached the other virtual doll to the target building, the fourth user must be near the target building; and since the first user's interface displays the target building and the other virtual doll, the first user is also near the target building. The interaction between the first user and the fourth user is therefore accurately targeted, which improves the accuracy of user interaction.
In a possible implementation manner, after performing step S102, the following steps may be further performed: after receiving a second interaction instruction aiming at the first virtual doll and issued by the second user, displaying prompt information corresponding to the second interaction instruction issued by the second user in a first graphical user interface so that the first user performs information interaction with the second user through the prompt information corresponding to the second interaction instruction.
In this embodiment, after the second user sees the first virtual doll in the second gui, if the second user wants to communicate with the first user, the second user may issue a second interaction instruction for the first virtual doll, and then display corresponding prompt information in the gui of the first user. If the first user also desires to communicate with the second user, the information interaction with the second user can be started through the prompt information. The information interaction here may refer to a temporary information interaction interface, adding friends, etc.
In a possible implementation, after the step of displaying, in the first graphical user interface, the prompt information corresponding to the second interaction instruction issued by the second user for the first virtual doll (so that the first user performs information interaction with the second user through that prompt information), the following steps may further be performed:
responding to a first response operation issued by the first user for the prompt information, and entering an information interaction interface in which the first user communicates directly with the second user;
and/or,
responding to a second response operation issued by the first user for the prompt information, and floating-displaying, in the first graphical user interface, a simple interaction window in which the first user communicates directly with the second user.
After issuing the first response operation, the first user enters an information interaction interface for private chat. This interface covers the originally displayed first AR live-action space: it may occupy the entire first graphical user interface, or, after entering it, the first user can only carry out information interaction and can no longer operate the first AR live-action space. In the information interaction interface, the user can send and receive text, pictures, voice messages, and other content.
After issuing the second response operation, the first user likewise starts a private chat, but through a simple interaction window. This window does not cover the originally displayed first AR live-action space; instead it floats above it, its area being smaller than that of the first graphical user interface, so that it does not block the entire image of the first AR live-action space.
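As an illustration of the difference between the two response operations, the sketch below (under the same assumptions as the previous one: names such as ResponseMode and open_chat are invented for the example) computes a window rectangle for each mode; only the floating mode leaves part of the AR live-action space visible.

```python
from enum import Enum, auto


class ResponseMode(Enum):
    FULL_SCREEN = auto()  # first response operation: covers the AR view
    FLOATING = auto()     # second response operation: small window on top


def open_chat(mode: ResponseMode, screen_w: int, screen_h: int) -> dict:
    """Returns the chat-window rectangle; the AR live-action space stays
    partly visible only in FLOATING mode."""
    if mode is ResponseMode.FULL_SCREEN:
        # Occupies the entire first graphical user interface; the user
        # can no longer operate the first AR live-action space.
        return {"x": 0, "y": 0, "w": screen_w, "h": screen_h,
                "ar_visible": False}
    # A simple interaction window strictly smaller than the interface,
    # floating above the originally displayed AR live-action space.
    return {"x": screen_w // 8, "y": screen_h // 2,
            "w": screen_w * 3 // 4, "h": screen_h // 3,
            "ar_visible": True}
```

Whether the floating window supports the same content types (pictures, voice) as the full interface is left open by the application; the sketch only captures the geometry of the two modes.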
In a possible implementation, the second interaction instruction is a barrage send instruction; in this case, the step of displaying, in the first graphical user interface, the prompt information corresponding to the second interaction instruction issued by the second user may specifically be performed as follows: displaying, in the first graphical user interface, barrage information corresponding to the barrage send instruction issued by the second user.
Other users can also see the barrage information in their own corresponding AR live-action spaces; that is, the barrage information is visible to everyone.
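This "visible to everyone" behaviour amounts to a fan-out over all users currently viewing the same building, as in this minimal sketch (viewers_by_building and push are assumed interfaces, not part of the application):

```python
def broadcast_barrage(viewers_by_building: dict, push,
                      building_id: str, sender_id: str, text: str) -> None:
    """Pushes barrage text to every user whose AR live-action space is
    currently showing the given target building, sender included."""
    for user_id in viewers_by_building.get(building_id, ()):
        push(user_id, {"type": "barrage",
                       "from": sender_id,
                       "text": text})
```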
In a possible embodiment, attribute information of the first virtual doll is also displayed near the first virtual doll; the attribute information is determined according to any one or more of the following: information input by the first user, historical position information of the first user, historical track information of the first user, and identity information of the first user.
For example, attribute information of the first virtual doll may be displayed above the first virtual doll.
The information input by the first user may be entered in advance or in real time, and the user can modify the displayed attribute information at any time. It may describe a difficulty the first user is currently facing, what the first user intends to do next (the content to be acted on), or give a short introduction to the first user's own identity.
The historical position information of the first user reflects the places the first user has been, and the historical track information reflects the first user's movement path; from these, the second user can judge whether the first user has experiences similar to his own.
The identity information of the first user reflects personal attributes such as occupation, gender, and age, and may be extracted automatically by the system.
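One plausible way to combine these sources into the short label shown near the doll is sketched below; the field names and formatting are assumptions of the example, since the application leaves them open.

```python
from dataclasses import dataclass, field


@dataclass
class DollAttributes:
    user_text: str = ""          # typed by the first user, editable at any time
    visited_places: list = field(default_factory=list)  # historical positions
    track_summary: str = ""      # summary of the historical movement track
    identity: dict = field(default_factory=dict)        # e.g. occupation, age


def attribute_label(attrs: DollAttributes) -> str:
    """Builds the short text displayed above the first virtual doll."""
    parts = []
    if attrs.user_text:
        parts.append(attrs.user_text)
    if attrs.identity:
        parts.append(", ".join(f"{k}: {v}" for k, v in attrs.identity.items()))
    if attrs.track_summary:
        parts.append(attrs.track_summary)
    if attrs.visited_places:
        parts.append(f"has visited {len(attrs.visited_places)} places")
    return " | ".join(parts) or "(no profile)"
```

For instance, attribute_label(DollAttributes(user_text="looking for hiking partners", identity={"occupation": "teacher"})) would yield "looking for hiking partners | occupation: teacher".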
To facilitate understanding of the above embodiments, the present application provides a specific example:
S201: in response to a first live-action start instruction issued by the first user, displaying at least part of the first AR live-action space in the first graphical user interface, and floating-displaying a virtual doll setting control in the first graphical user interface.
S202: when the first AR live-action space is based on a first live-action image obtained by photographing the target building in real time, corresponding AR virtual anchor points are provided at the preset positions on the target building in the first AR live-action space, as shown in fig. 4; in response to a selection operation of the first user on a target one of these AR virtual anchor points, the position to be attached of the first virtual doll is determined from the preset positions on the target building in the first AR live-action space.
In one possible embodiment, in response to the adding operation of the first user on the virtual doll setting control, a virtual resource consumption control and the amount of virtual resources required to attach the first virtual doll to the target building are displayed in the first graphical user interface, as shown in fig. 6; in response to a virtual resource consumption operation performed by the first user on the virtual resource consumption control on the basis of that amount, the first virtual doll to be added and a virtual doll pose adjustment control are floating-displayed in the first graphical user interface, as shown in fig. 5.
In another possible embodiment, in response to the adding operation of the first user on the virtual doll setting control, a virtual resource input control, a virtual resource consumption control, and the current highest amount of virtual resources input by other users in the current period are displayed in the first graphical user interface, as shown in fig. 7. Within the current period, if the amount of virtual resources input by the first user in the virtual resource input control is higher than that current highest amount, then, in response to the virtual resource consumption operation of the first user on the virtual resource consumption control, the first virtual doll to be added and the virtual doll pose adjustment control are floating-displayed in the first graphical user interface, as shown in fig. 5.
After the first virtual doll to be added and the virtual doll pose adjustment control are floating-displayed in the first graphical user interface, in response to a pose adjustment operation of the first user on the pose adjustment control, the first virtual doll controlled by the first user is attached, in the target pose corresponding to that operation, to the position to be attached on the target building in the first AR live-action space, scaled according to the size of the target building. After the second user issues a second live-action start instruction, at least part of a second AR live-action space is then displayed in the second graphical user interface; the second AR live-action space is formed by overlaying a second live-action image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building. Only one virtual doll is allowed to be attached to the target building at the same time. A minimal sketch of this attach flow is given below.
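The sketch promised above models the core of S202 under stated assumptions: one attachment slot per building (enforcing the one-doll rule), size-based scaling, and the bidding variant in which an add succeeds only if the offered virtual resource amount beats the current highest. Class and parameter names are invented for illustration and are not prescribed by the application.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Pose:
    yaw: float = 0.0    # target pose from the pose adjustment operation
    pitch: float = 0.0
    roll: float = 0.0


@dataclass
class Attachment:
    user_id: str
    anchor_index: int   # which preset AR virtual anchor point was chosen
    scale: float        # doll scaled to the building's measured size
    pose: Pose


class BuildingSlot:
    """Holds at most one virtual doll per target building at a time."""

    def __init__(self, building_height_m: float, doll_height_m: float = 1.0):
        # Size-based scaling: the doll is sized to match the building.
        self.scale = building_height_m / doll_height_m
        self.current: Optional[Attachment] = None
        self.highest_bid = 0  # highest virtual resource amount this period

    def try_attach(self, user_id: str, anchor_index: int,
                   bid: int, pose: Pose) -> bool:
        # Bidding variant: succeed only if this amount beats the current
        # highest amount input by other users in the current period.
        if bid <= self.highest_bid:
            return False
        self.highest_bid = bid
        self.current = Attachment(user_id, anchor_index, self.scale, pose)
        return True  # any previously attached doll is simply overwritten
```

The fixed-price variant of fig. 6 would replace the bid comparison with a check that the required amount was consumed; the protection-period rules of S204 would additionally gate try_attach, as sketched after S204 below.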
S203: after a second interaction instruction issued by the second user for the first virtual doll is received, prompt information corresponding to the second interaction instruction is displayed in the first graphical user interface, so that the first user performs information interaction with the second user through the prompt information.
S204: timing starts when the first virtual doll controlled by the first user is attached to the target building in the first AR live-action space according to the size of the target building; within a protection period, only the first virtual doll is allowed to remain attached to the target building in the first AR live-action space;
when the protection period is exceeded and the expiration period is not reached, if a third user performs an adding operation on the virtual doll setting control displayed in a third graphical user interface, the first virtual doll is deleted from the target building and a third virtual doll controlled by the third user is attached to the target building according to the size of the target building;
when the expiration period is exceeded and no third user has performed an adding operation on a virtual doll setting control displayed in a third graphical user interface, the first virtual doll is deleted from the target building. These timing rules are sketched below.
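A minimal sketch of these timing rules, assuming wall-clock timing via time.monotonic() and invented names (AttachmentLifecycle, on_add_operation, tick):

```python
import time


class AttachmentLifecycle:
    """Protection/expiration timing for an attached virtual doll: inside
    the protection period no other doll may replace it; between the
    protection and expiration periods a third user's add operation
    replaces it; past the expiration period it is deleted outright."""

    def __init__(self, protection_s: float, expiration_s: float):
        assert protection_s < expiration_s
        self.protection_s = protection_s
        self.expiration_s = expiration_s
        self.attached_at = time.monotonic()  # timing starts at attachment

    def _age(self) -> float:
        return time.monotonic() - self.attached_at

    def on_add_operation(self) -> str:
        """A third user operated the virtual doll setting control."""
        if self._age() <= self.protection_s:
            return "rejected"   # still inside the protection period
        self.attached_at = time.monotonic()  # the third doll takes the slot
        return "replaced"

    def tick(self) -> str:
        """Called periodically when no add operation is pending."""
        return "deleted" if self._age() > self.expiration_s else "kept"
```

In a real service these checks would sit server-side next to the BuildingSlot of the previous sketch, so that the protection period gates try_attach.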
S205: when the first AR live-action space is formed by overlaying the first live-action image, obtained by photographing the target building in real time, with other virtual dolls, responding to a first interaction instruction issued by the first user for the other virtual dolls, so that the first user performs information interaction with the fourth user controlling the other virtual dolls through the interaction information corresponding to the first interaction instruction.
Based on the same technical concept, the present application further provides an AR interaction device. Fig. 8 shows a schematic structural diagram of the AR interaction device provided by an embodiment of the present application; as shown in fig. 8, the device includes:
a display module 801, configured to display at least a portion of a first AR live-action space in a first graphical user interface in response to a first live-action start instruction issued by a first user, and to float display a virtual doll setting control in the first graphical user interface;
an attaching module 802, configured to, when the first AR live-action space is based on a first live-action image obtained by capturing a target building in real time, attach, in response to an adding operation performed by the first user on a virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR live-action space according to the size of the target building, so that, after a second user issues a second live-action start instruction, at least part of a second AR live-action space is displayed in a second graphical user interface; the second AR live-action space is formed by overlaying a second live-action image, obtained by photographing the target building in real time, with the first virtual doll attached to the target building; only one virtual doll is allowed to be attached to the target building at the same time.
Optionally, each preset position on the target building in the first AR real space is provided with a corresponding AR virtual anchor point; when attaching, in response to the adding operation of the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building, the attaching module 802 is specifically configured to:
respond to the selection operation of the first user on a target AR virtual anchor point among the AR virtual anchor points, so as to determine the position to be attached of the first virtual doll from the preset positions on the target building in the first AR real space;
and, in response to the adding operation of the first user on the virtual doll setting control, attach the first virtual doll controlled by the first user to the position to be attached on the target building in the first AR real space according to the size of the target building.
Optionally, when attaching, in response to the adding operation of the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the position to be attached on the target building in the first AR real space according to the size of the target building, the attaching module 802 is specifically configured to:
in response to the adding operation of the first user on the virtual doll setting control, floating-display the first virtual doll to be added and a virtual doll pose adjustment control in the first graphical user interface;
and, in response to a pose adjustment operation of the first user on the virtual doll pose adjustment control, attach the first virtual doll controlled by the first user to the position to be attached on the target building in the first AR real space according to the size of the target building in the first AR real space and the target pose corresponding to the pose adjustment operation.
Optionally, when attaching, in response to the adding operation of the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building, the attaching module 802 is specifically configured to:
in response to the adding operation of the first user on the virtual doll setting control, display, in the first graphical user interface, a virtual resource consumption control and the amount of virtual resources required to attach the first virtual doll to the target building;
and, in response to the virtual resource consumption operation performed by the first user on the virtual resource consumption control on the basis of that amount, attach the first virtual doll controlled by the first user to the target building according to the size of the target building.
Optionally, when attaching, in response to the adding operation of the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building, the attaching module 802 is specifically configured to:
in response to the adding operation of the first user on the virtual doll setting control, display, in the first graphical user interface, a virtual resource input control, a virtual resource consumption control, and the current highest amount of virtual resources input by other users in the current period;
and, within the current period, if the amount of virtual resources input by the first user in the virtual resource input control is higher than the current highest amount input by other users, in response to the virtual resource consumption operation of the first user on the virtual resource consumption control, attach the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building.
Optionally, the device further comprises:
a protection module, configured to start timing when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building, and, within a protection period, to allow only the first virtual doll to remain attached to the target building in the first AR real space;
a first deleting module, configured to, when the protection period is exceeded and the expiration period is not reached, if a third user performs an adding operation on a virtual doll setting control displayed in a third graphical user interface, delete the first virtual doll from the target building and attach a third virtual doll controlled by the third user to the target building according to the size of the target building;
and a second deleting module, configured to delete the first virtual doll from the target building when the expiration period is exceeded and no third user has performed an adding operation on a virtual doll setting control displayed in a third graphical user interface.
Optionally, the device further comprises: a first interaction module, configured to, when the first AR real space is formed by overlaying a first real image, obtained by shooting the target building in real time, with other virtual dolls, respond to a first interaction instruction issued by the first user for the other virtual dolls, so that the first user performs information interaction with a fourth user controlling the other virtual dolls through interaction information corresponding to the first interaction instruction.
Optionally, the device further comprises: a second interaction module, configured to, after a second interaction instruction issued by the second user for the first virtual doll is received, display, in the first graphical user interface, prompt information corresponding to the second interaction instruction, so that the first user performs information interaction with the second user through the prompt information corresponding to the second interaction instruction.
Optionally, the second interaction instruction is a barrage sending instruction;
when displaying, in the first graphical user interface, the prompt information corresponding to the second interaction instruction issued by the second user, the second interaction module is specifically configured to:
and displaying barrage information corresponding to the barrage sending instruction issued by the second user in the first graphical user interface.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application, including: a processor 901, a memory 902, and a bus 903. The memory 902 stores machine-readable instructions executable by the processor 901. When the electronic device runs the above AR interaction method, the processor 901 communicates with the memory 902 through the bus 903, and the processor 901 executes the machine-readable instructions to perform the method steps described in the foregoing method embodiments.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps described in the foregoing method embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, electronic device and computer readable storage medium described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; the division into modules is only a division by logical function, and other divisions are possible in actual implementation. For example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above examples are merely specific embodiments of the present application, used to illustrate rather than limit its technical solutions; the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with this technical field may, within the technical scope disclosed by the present application, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and shall fall within its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An AR interaction method, comprising:
responding to a first live-action starting instruction issued by a first user, displaying at least part of a first AR live-action space in a first graphical user interface, and floating and displaying a virtual doll setting control in the first graphical user interface;
When the first AR real scene space is based on a first real scene image obtained by shooting a target building in real time, responding to the adding operation of the first user on a virtual doll setting control, attaching a first virtual doll controlled by the first user to the target building in the first AR real scene space according to the size of the target building, so as to display at least part of a second AR real scene space in a second graphical user interface after a second user issues a second real scene start instruction; the second AR real space is formed by overlapping a second real image obtained by shooting the target building in real time and the first virtual doll attached to the target building; only one virtual doll is allowed to be attached to the target building at the same time.
2. The method of claim 1, wherein each preset position on the target building in the first AR real space is provided with a corresponding AR virtual anchor point, respectively; the attaching the first virtual doll controlled by the first user to the target building in the first AR live-action space according to the size of the target building in response to the adding operation of the first user to the virtual doll setting control includes:
responding to the selection operation of the first user on a target AR virtual anchor point among the AR virtual anchor points, so as to determine the position to be attached of the first virtual doll from the preset positions on the target building in the first AR real space;
and, in response to the adding operation of the first user on the virtual doll setting control, attaching the first virtual doll controlled by the first user to the position to be attached on the target building in the first AR real space according to the size of the target building.
3. The method of claim 2, wherein the attaching, in response to the adding operation of the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the position to be attached on the target building in the first AR real space according to the size of the target building comprises:
in response to the adding operation of the first user on the virtual doll setting control, floating-displaying the first virtual doll to be added and a virtual doll pose adjustment control in the first graphical user interface;
and, in response to a pose adjustment operation of the first user on the virtual doll pose adjustment control, attaching the first virtual doll controlled by the first user to the position to be attached on the target building in the first AR real space according to the size of the target building in the first AR real space and the target pose corresponding to the pose adjustment operation.
4. The method of claim 1, wherein the attaching, in response to the adding operation of the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building comprises:
in response to the adding operation of the first user on the virtual doll setting control, displaying, in the first graphical user interface, a virtual resource consumption control and the amount of virtual resources required to attach the first virtual doll to the target building;
and in response to the first user performing virtual resource consumption operation on the virtual resource consumption control based on the virtual resource quantity, attaching the first virtual doll controlled by the first user to the target building according to the size of the target building.
5. The method of claim 1, wherein the attaching, in response to the adding operation of the first user on the virtual doll setting control, the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building comprises:
Responding to the adding operation of the first user on the virtual doll setting control, and displaying a virtual resource input control, a virtual resource consumption control and the current highest virtual resource quantity input by other users in the current period in the first graphical user interface;
and in the current period, if the number of the virtual resources input by the first user in the virtual resource input control is higher than the current highest number of the virtual resources input by other users, responding to the virtual resource consumption operation of the first user for the virtual resource consumption control, and attaching the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building.
6. The method according to claim 1, wherein the method further comprises:
starting timing when the first virtual doll controlled by the first user is attached to the target building in the first AR real space according to the size of the target building, and, within a protection period, allowing only the first virtual doll to remain attached to the target building in the first AR real space;
when the protection period is exceeded and the expiration period is not reached, if a third user performs an adding operation on a virtual doll setting control displayed in a third graphical user interface, deleting the first virtual doll from the target building and attaching a third virtual doll controlled by the third user to the target building according to the size of the target building;
and deleting the first virtual doll from the target building when the expiration period is exceeded and no third user performs an adding operation on a virtual doll setting control displayed in a third graphical user interface.
7. The method according to claim 1, wherein the method further comprises:
when the first AR real-scene space is formed by overlapping a first real-scene image obtained by shooting a target building in real time and other virtual dolls, responding to a first interaction instruction issued by the first user for the other virtual dolls, so that the first user performs information interaction with a fourth user controlling the other virtual dolls through interaction information corresponding to the first interaction instruction.
8. The method according to claim 1, wherein the method further comprises:
After receiving a second interaction instruction aiming at the first virtual doll issued by the second user, displaying prompt information corresponding to the second interaction instruction issued by the second user in the first graphical user interface, so that the first user performs information interaction with the second user through the prompt information corresponding to the second interaction instruction.
9. The method of claim 8, wherein the second interaction instruction is a barrage send instruction; and the displaying, in the first graphical user interface, prompt information corresponding to the second interaction instruction issued by the second user comprises:
and displaying barrage information corresponding to the barrage sending instruction issued by the second user in the first graphical user interface.
10. An AR interactive apparatus, comprising:
the display module is used for responding to a first live-action starting instruction issued by a first user, displaying at least part of a first AR live-action space in a first graphical user interface and displaying a virtual doll setting control in a floating mode in the first graphical user interface;
the attaching module is used for responding to the adding operation of the first user on the virtual doll setting control when the first AR real space is based on a first real image obtained by shooting a target building in real time, attaching the first virtual doll controlled by the first user to the target building in the first AR real space according to the size of the target building so as to display at least part of a second AR real space in a second graphical user interface after a second user gives a second real starting instruction; the second AR real space is formed by overlapping a second real image obtained by shooting the target building in real time and the first virtual doll attached to the target building; only one virtual doll is allowed to be attached to the target building at the same time.
11. An electronic device, comprising: a processor, a memory and a bus, said memory storing machine-readable instructions executable by said processor, said processor and said memory communicating over the bus when the electronic device is running, said machine-readable instructions when executed by said processor performing the steps of the method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 9.