CN114602167A - Interaction method and device based on augmented reality

Interaction method and device based on augmented reality

Info

Publication number
CN114602167A
Authority
CN
China
Prior art keywords
virtual object
picture
target
throwing
real scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210423445.5A
Other languages
Chinese (zh)
Inventor
李嘉钰 (Li Jiayu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202210423445.5A
Publication of CN114602167A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42 Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode

Abstract

The disclosure provides an interaction method and device based on augmented reality. The method comprises the following steps: acquiring a real scene picture; determining and displaying an augmented reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected, based on the real scene picture; determining a throwing result of the first virtual object for the second virtual object in response to a target trigger operation for the first virtual object; and displaying the throwing result in the AR picture. The AR picture provided by the embodiments of the disclosure enables an interaction process close to a traditional ring-toss game while breaking the constraints and limitations of real space.

Description

Interaction method and device based on augmented reality
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to an interaction method and apparatus based on augmented reality.
Background
A traditional ring-toss game typically requires placing target objects in a real environment; participants play by skillfully throwing rings at those objects. Such activities require a certain amount of real physical space.
Augmented Reality (AR) technology skillfully fuses virtual information with the real world. How to reasonably use AR technology to realize an interaction process that breaks the constraints and limitations of real space while still offering a realistic experience is a problem worth solving.
Disclosure of Invention
Embodiments of the present disclosure provide at least an interaction method and device based on augmented reality.
In a first aspect, an embodiment of the present disclosure provides an interaction method based on augmented reality, including:
acquiring a real scene picture;
determining and displaying an Augmented Reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected based on the real scene picture;
determining a throwing result of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object;
and displaying the throwing result in the AR picture.
Embodiments of the present disclosure fuse a first virtual object to be thrown and at least one second virtual object into a real scene to generate an AR picture. By interacting with the AR picture, that is, by performing a target trigger operation in the AR picture, the throwing result of the first virtual object for the second virtual object can be displayed in the AR picture, realizing an interaction process close to the traditional ring-toss game while breaking the constraints and limitations of real space.
In an optional embodiment, the determining a throwing result of the first virtual object for the second virtual object in response to a target trigger operation for the first virtual object includes:
determining a throwing distance and a throwing direction corresponding to the first virtual object in response to a target trigger operation for the first virtual object;
determining a throwing result of the first virtual object for the second virtual object based on the throwing distance and the throwing direction.
In the embodiments of the present disclosure, the throwing distance and the throwing direction corresponding to the first virtual object are both determined, so that the throwing result under their combined action can be displayed, more closely approximating the real experience of ring toss in a real scene.
In an optional embodiment, the determining a throwing distance corresponding to the first virtual object in response to a target trigger operation for the first virtual object includes:
in response to a target trigger operation for the first virtual object, determining the throwing distance corresponding to the first virtual object according to the trigger duration of the target trigger operation.
In the embodiments of the present disclosure, the user accumulates power through the target trigger operation, and the throwing distance of the first virtual object is determined by the trigger duration of that operation, so that a throwing result matching the throwing distance can be displayed, realizing an interaction process closer to the traditional ring-toss game.
In an optional embodiment, the determining, in response to a target trigger operation for the first virtual object and according to a trigger duration of the target trigger operation, a throwing distance corresponding to the first virtual object includes:
in response to the target trigger operation acting on a target operation area or a target operation object in the AR picture, determining the throwing distance corresponding to the first virtual object according to the trigger duration of the target trigger operation;
wherein the target operation object comprises the first virtual object and/or a preset operation control used for controlling the first virtual object.
The target trigger operation in the embodiments of the present disclosure may act on a target operation area in the AR picture (for example, a set operation area surrounding the first virtual object, or a fixed position area in the AR picture), or on a specific target operation object (for example, the first virtual object and/or a preset operation control for controlling the first virtual object). Supporting multiple kinds of action objects for the target trigger operation increases the operational flexibility of the AR interaction process.
In an optional embodiment, the method further comprises:
in response to the target trigger operation, updating the power accumulation progress value of the power accumulation progress bar displayed in the AR picture according to the trigger duration of the target trigger operation; the longer the trigger duration, the larger the power accumulation progress value, and there is a preset correspondence between the power accumulation progress value and the throwing distance.
In the embodiments of the present disclosure, updating the power accumulation progress value of the progress bar according to the trigger duration lets the user accurately track the trigger duration and power accumulation progress of the target trigger operation, and thus better grasp the throwing moment.
In an optional embodiment, the determining, in response to the target trigger operation for the first virtual object, a throwing direction corresponding to the first virtual object includes:
in response to a target trigger operation for the first virtual object, determining the throwing direction corresponding to the first virtual object according to the camera shooting angle corresponding to the AR picture at the trigger time of the target trigger operation.
Here, the camera shooting angle corresponding to the AR picture reflects the current orientation of the user doing the shooting, so the throwing direction the user intends can be determined from the camera shooting angle. By displaying a throwing result matching that throwing direction, ring-toss interaction and its corresponding result can be realized and shown, enhancing the realism of the ring-toss interaction process.
In an optional embodiment, the method further comprises:
displaying a direction indicator in the AR picture in response to the target trigger operation; the direction indicator indicates the current throwing direction, and its pointing can be adjusted under the control of the target trigger operation.
Displaying the direction indicator in the AR picture prompts the user with the currently selected throwing direction; when the indicated direction is not the throwing direction the user intends, the user can adjust it in time according to the indicator's real-time pointing, improving ringing accuracy.
In an optional embodiment, the determining and presenting, based on the real scene picture, an augmented reality AR picture containing a first virtual object to be thrown and at least one second virtual object to be selected includes:
determining initial position information of a first virtual object to be thrown and at least one second virtual object to be selected, which are respectively blended into a real scene, based on the real scene picture; displaying an initial AR picture based on initial position information of the first virtual object and the at least one second virtual object in the real scene;
determining post-change position information of the first virtual object in the real scene in response to a change in camera shooting pose; wherein the position of the first virtual object in the real scene changes along with the change of the camera shooting pose, and the position of the at least one second virtual object in the real scene does not change;
and displaying the changed AR picture based on the changed position information of the first virtual object in the real scene and the initial position information of the at least one second virtual object in the real scene.
In the embodiments of the present disclosure, the position of the first virtual object in the real scene changes along with the camera shooting pose. By displaying the changed AR picture based on the changed position information of the first virtual object and the initial position information of the at least one second virtual object in the real scene, the effect of the first virtual object moving with the user through the real scene can be presented, more closely approximating the real experience of ring toss in a real scene.
In an optional embodiment, before determining the throwing distance and the throwing direction corresponding to the first virtual object in response to a target triggering operation, the method further comprises:
judging whether the distance between the first virtual object and the at least one second virtual object is smaller than a set distance threshold;
and if the distance is smaller than the threshold, prompting the user to move away from the at least one second virtual object.
In the embodiments of the present disclosure, when the distance between the first virtual object and the at least one second virtual object is smaller than the set distance threshold, the user is prompted to move away from the at least one second virtual object, so that throwing only happens while a certain distance is kept between the first and second virtual objects. This matches the interaction process to the ring-toss rules in a real scene and enhances the realism of the ring-toss interaction.
In an optional embodiment, the determining and presenting, based on the real scene picture, an augmented reality AR picture containing a first virtual object to be thrown and at least one second virtual object to be selected includes:
judging, based on the real scene picture, whether the real scene picture matches a target scene type, and if so, determining and displaying an augmented reality AR picture containing the first virtual object and the at least one second virtual object based on the real scene picture.
In the embodiments of the present disclosure, the AR picture containing the first virtual object and the at least one second virtual object is determined and displayed only when the real scene picture is determined to match the target scene type. By restricting the scene types for which the AR picture is generated, the ring-toss interaction is limited to matching scene types, so the activity is blended into a more suitable real scene and the realistic experience is improved.
In an alternative embodiment, the at least one second virtual object comprises a plurality of second virtual objects;
the displaying the throwing result in the AR picture comprises:
and displaying the selected target second virtual object in the plurality of second virtual objects in the AR picture, and displaying the reward information corresponding to the target second virtual object.
The reward information corresponding to the target second virtual object displayed by the embodiments of the present disclosure is freed from the limitations of physical prizes; compared with the traditional ring-toss process, the interaction result can be presented more promptly and conveniently.
In a second aspect, an embodiment of the present disclosure further provides an augmented reality-based interaction apparatus, including:
the acquisition module is used for acquiring a real scene picture;
a first determining module, configured to determine and display an Augmented Reality (AR) picture including a first virtual object to be thrown and at least one second virtual object to be selected, based on the real scene picture;
a second determination module to determine a throwing result of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object;
a first display module for displaying the throwing result in the AR picture.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the first aspect or any possible implementation of the first aspect.
For the description of the effects of the above augmented reality-based interaction apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the above augmented reality-based interaction method, and details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings here are incorporated into and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive further related drawings from them without inventive effort.
Fig. 1 shows a flowchart of an augmented reality-based interaction method provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an effect of an AR picture provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating another AR picture effect provided by the embodiment of the disclosure;
FIG. 4a is a schematic diagram illustrating an effect of a throwing distance corresponding to a first virtual object in a case where a power-up progress value is small, provided by an embodiment of the present disclosure;
FIG. 4b is a schematic diagram illustrating the effect of the throwing distance corresponding to the first virtual object when the power-up progress value is large, provided by the embodiment of the disclosure;
FIG. 5 is a schematic diagram illustrating another AR picture effect provided by the embodiment of the disclosure;
fig. 6 is a schematic diagram illustrating an augmented reality-based interaction apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the disclosure. All other embodiments obtained by those skilled in the art from the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
In a traditional ring-toss game, target objects are usually placed in a real environment, and participants experience the game by skillfully throwing rings at them. Such traditional ring-toss activities are easily constrained and limited by real space; how to reasonably use AR technology to realize an interaction process that breaks those constraints while still offering a realistic experience is a problem worth solving.
Based on this, the present disclosure provides an augmented-reality-based interaction method in which a virtual throwing circle and at least one virtual target object are fused into a real scene to generate an AR picture. The user can accumulate power through a target trigger operation, and the throwing distance of the virtual throwing circle is determined by the trigger duration of that operation, so that a throwing result matching the throwing distance is displayed. Based on this AR interaction process, an interaction closer to the traditional ring-toss game can be realized while breaking the constraints and limitations of real space.
The defects in the above solutions and the solutions proposed below are results obtained by the inventor through practice and careful study; therefore, the discovery of the above problems and the solutions the present disclosure proposes for them should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The augmented-reality-based interaction method disclosed in the embodiments of the present disclosure can be applied to scenes that simulate a real ring-toss game using AR technology, for example tourism scenes such as scenic spots, streets and parks, providing a leisure experience for visitors; it can also be applied to related activities in shopping malls, supermarkets and the like.
To facilitate understanding of the embodiments, the augmented-reality-based interaction method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally a computer device with certain computing capability.
The augmented reality-based interaction method provided by the embodiment of the present disclosure is described below by taking an execution subject as a terminal device as an example.
Referring to fig. 1, a flowchart of an interaction method based on augmented reality provided in an embodiment of the present disclosure is shown, where the method includes S101 to S104, where:
s101: and acquiring a real scene picture.
In the embodiments of the present disclosure, a terminal device may be used to acquire the real scene picture in real time; for example, the picture may be captured by a camera on the terminal device.
In one embodiment, the real scene picture may be acquired from an arbitrary real scene. In another embodiment, the real scene picture may be one matching a target scene type, that is, a picture acquired for that scene type; for example, the target scene type may be a park or a street, and the real scene picture is then acquired in a park or street scene. An AR picture generated from such a picture limits the ring-toss interaction to the matching scene type, so that it fits the real scene better.
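As a rough illustration of S101, the sketch below grabs a single camera frame on the terminal device side; it assumes an OpenCV-style camera API, which the patent does not specify, and the function name is invented for illustration.

```python
# A minimal sketch of S101, assuming an OpenCV-accessible camera;
# the function name is illustrative, not from the patent.
import cv2

def acquire_real_scene_frame(camera_index: int = 0):
    """Grab one real-scene frame from the device camera, or None on failure."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()  # frame: BGR image of the real scene
    cap.release()
    return frame if ok else None
```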
S102: based on the real scene picture, determining and presenting an Augmented Reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected.
The augmented reality AR picture comprises a real scene picture, a first virtual object to be thrown and at least one second virtual object to be selected, wherein the first virtual object and the at least one second virtual object are displayed on the real scene picture. The second virtual object may be an object selected by the first virtual object, that is, one second virtual object may be selected from among the second virtual objects by throwing the first virtual object. Wherein the first virtual object to be thrown and the at least one second virtual object to be selected may be rendered in a real scene screen by a game development engine (e.g., Unity/UE4, etc.).
In a specific implementation, the first virtual object may be any form of virtual object to be thrown. For example, it may be a ring-shaped object (in that case it may be called a virtual throwing circle), convenient to drop over a second virtual object to select it; the ring may be square, circular or another shape, which the embodiments of the present disclosure do not limit. The second virtual object may be any form of virtual candidate object, for example a virtual score column. When the first virtual object is thrown multiple times and at least one virtual score column is ringed, a corresponding total score may be generated based on the score values of the ringed columns. A small worked example of this scoring rule follows below.
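The sketch is illustrative only; the patent does not fix concrete score values.

```python
# A minimal sketch of the total-score rule above; the score values
# in the example are invented for illustration.
def total_score(ringed_column_scores: list) -> float:
    """Sum the score values of all virtual score columns ringed so far."""
    return sum(ringed_column_scores)

# e.g. three throws that ring columns worth 5, 10 and 20 points:
assert total_score([5, 10, 20]) == 35
```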
In one approach, the at least one second virtual object may be presented individually on the real scene picture. In another approach, the at least one second virtual object may be deployed together on a carrier, for example fixed uniformly on a floating board, and then displayed together with the carrier on the real scene picture. Fig. 2 shows an effect diagram of an AR picture: it contains a real scene picture collected in a park, in which a virtual throwing circle is shown, together with a board carrying three rows of virtual score columns, each row containing three columns and each column corresponding to a different score value. The virtual throwing circle is the first virtual object, and the virtual score columns are the second virtual objects.
In order to present an AR picture in which the first virtual object and the at least one second virtual object are better integrated with the real scene, in one embodiment, the AR picture may be determined and presented based on the real scene picture together with the three-dimensional pose information of the first virtual object and the at least one second virtual object corresponding to it.
Here, the three-dimensional pose information may include position information and posture information corresponding to the first virtual object and the at least one second virtual object in the real scene screen. Based on the corresponding position information and the corresponding posture information of the first virtual object and the at least one second virtual object in the real scene picture, the stereoscopic effect of the first virtual object and the at least one second virtual object can be presented in the AR picture.
Considering that the positions of the first virtual object and the at least one second virtual object in the real scene picture are related to the positions of the objects the picture contains, in one approach, relative pose information between the first virtual object and the at least one second virtual object may be preset and stored. After the real scene picture is acquired, their three-dimensional pose information in the picture may be determined based on this relative pose information and the positions of the objects in the picture. For example, if the real scene picture contains several trees enclosing an open area, the position information of the first virtual object and the at least one second virtual object in the picture may be determined based on the positions of those trees, as sketched below.
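The following sketch assumes preset relative offsets and an anchor position derived from objects detected in the real scene picture; all names and values are illustrative assumptions, not the patent's API.

```python
# A minimal sketch: derive world positions of the virtual objects from
# a detected anchor (e.g. an open area between trees) plus preset
# relative offsets; offsets and names are illustrative assumptions.
import numpy as np

RING_OFFSET = np.array([0.0, 0.0, -1.5])        # throwing circle near the user
COLUMN_OFFSETS = [np.array([x, 0.0, z])         # 3 x 3 grid of score columns
                  for z in (2.0, 2.5, 3.0) for x in (-0.5, 0.0, 0.5)]

def place_virtual_objects(anchor: np.ndarray):
    """Initial positions of the throwing circle and the score columns."""
    ring_position = anchor + RING_OFFSET
    column_positions = [anchor + off for off in COLUMN_OFFSETS]
    return ring_position, column_positions
```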
As described above, the real scene picture may include any real scene picture, and may also include a real scene picture matching the target scene type. For the two situations, the process of generating and displaying the AR picture is different based on the real scene picture.
Specifically, when the real scene picture includes any real scene picture, that is, after the real scene picture is collected, the AR picture including the first virtual object and the at least one second virtual object may be determined and displayed without determining whether the collected real scene picture matches the target scene type.
When the real scene picture includes a real scene picture matched with the target scene type, it is necessary to determine whether the acquired real scene picture is matched with the target scene type. The AR picture can be determined and displayed only when the real scene picture is the real scene picture matched with the target scene type; and when the real scene picture is the real scene picture which is not matched with the target scene type, the AR picture cannot be determined and displayed.
Specifically, in an embodiment, whether the real scene picture matches the target scene type may be determined based on the real scene picture, and if so, an augmented reality AR picture including a first virtual object and at least one second virtual object may be determined and displayed based on the real scene picture.
Here, whether the real scene picture matches the target scene type may be determined by comparing the collected real scene picture with a target real scene picture collected in advance in the target scene type.
Specifically, a target real scene picture of the target scene type may be acquired and stored in advance. After a real scene picture is collected, it can be compared with the stored target picture. In one approach, image features are extracted from both pictures and then compared. When the similarity between the two sets of image features meets a preset condition, for example exceeds a set threshold, the real scene picture can be considered to match the target scene type; when it does not, for example is less than or equal to the threshold, the picture can be considered not to match the target scene type.
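The comparison could look like the following sketch, assuming image features come out of some extractor as fixed-length vectors; the extractor, the cosine measure and the threshold value are illustrative assumptions, not choices made by the patent.

```python
# A minimal sketch of scene-type matching by feature similarity; the
# cosine measure and the 0.8 threshold are illustrative assumptions.
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # the "set threshold" in the description

def matches_target_scene(scene_feat: np.ndarray, target_feat: np.ndarray) -> bool:
    """True when the captured picture matches the target scene type."""
    sim = float(np.dot(scene_feat, target_feat) /
                (np.linalg.norm(scene_feat) * np.linalg.norm(target_feat)))
    return sim > SIMILARITY_THRESHOLD
```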
In other approaches, it can be judged whether the acquired real scene picture contains a target object of the target scene type, and thereby whether the picture matches that type. For example, the target scene type may be an amusement park, and the target objects may be amusement facilities such as a roller coaster or a Ferris wheel; when the captured picture contains at least one of these facilities, it can be considered to match the target scene type, and when it contains none of them, it can be considered not to match.
According to the judgment result, when the real scene picture matches the target scene type, an augmented reality AR picture containing the first virtual object and the at least one second virtual object is determined and displayed based on it; when the picture does not match, a prompt that the current scene cannot be recognized, or a prompt to recapture the real scene picture, may be displayed, for example.
S103: determining a throwing result of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object.
The target trigger operation may refer to a trigger operation performed for the first virtual object to control the first virtual object to be thrown. The target trigger operation may be any form of trigger operation, such as a click operation, a slide operation, a press operation, and the like.
In a specific implementation, the action object of the target trigger operation may be any feasible object, such as a target operation area or a target operation object in the AR picture. The target operation area may be an arbitrary area of the AR picture, for example a set operation area surrounding the first virtual object, or a fixed position area of the picture. The target operation object may be the first virtual object, or a preset operation control for controlling it, for example the power accumulation progress bar. In one embodiment, any one of the above action objects may serve as the action object of the target trigger operation; in another embodiment, several of them may serve together. Supporting multiple action objects for the target trigger operation increases the operational flexibility of the AR interaction process.
In a ring-toss game in a real-world scene, the result of a throw is related to the force and direction with which a participant throws the ring. To simulate this, in one embodiment, a throwing distance and a throwing direction corresponding to the first virtual object may be determined in response to the target trigger operation, and the throwing result of the first virtual object for the second virtual object is then determined based on that distance and direction.
In the above embodiment, the throwing result for the at least one second virtual object is determined jointly by the throwing distance and the throwing direction: the distance affects how far away the selected second virtual object can be, and the direction affects its bearing. The throwing result may be a selected result, an unselected result, an unsuccessful-throw result, and so on. For example, if the first virtual object rings at least one second virtual object, a selected result is obtained; if it rings none, an unselected result is obtained; if the first virtual object is not thrown out at all, an unsuccessful-throw result is obtained.
In a specific implementation, the throwing distance and the throwing direction corresponding to the first virtual object can be determined through the trigger attribute information of the target trigger operation. The trigger attribute information may include attribute information such as trigger duration, trigger strength, trigger times, trigger angle, and trigger position. Here, the trigger attribute information for determining the throwing distance and the throwing direction corresponding to the first virtual object may be the same or different.
For example, in one embodiment, in response to a target trigger operation for the first virtual object, the throwing distance corresponding to the first virtual object may be determined according to the trigger duration of the operation, i.e. according to the trigger duration in the operation's trigger attribute information. Here the trigger duration stands in for throwing strength: the longer the target trigger operation lasts, the greater the throwing strength and the longer the throwing distance of the first virtual object. Accordingly, in one approach, a correspondence between trigger duration and throwing distance may be preset, and the throwing distance then determined from the actual trigger duration and that correspondence.
In one embodiment, in response to a target trigger operation acting on a target operation area or a target operation object in the AR screen, a throwing distance corresponding to the first virtual object may be determined according to a trigger duration of the target trigger operation.
That is, the target trigger operation may include a trigger operation that acts on a target operation region or a target operation object in the AR screen. When the target trigger operation starts to act, the trigger duration of the target trigger operation can be counted, and the throwing distance corresponding to the first virtual object is increased along with the extension of the trigger duration.
In order to track the trigger duration of the target trigger operation more accurately, in one approach, a power accumulation progress bar corresponding to the trigger duration may be displayed in the AR picture. The bar displays the power accumulation progress value; the longer the trigger duration, the larger the value. In a specific implementation, in response to the target trigger operation for the first virtual object, the power accumulation progress value of the bar displayed in the AR picture may be updated according to the trigger duration of the operation.
When the target operation object is the first virtual object, or is a preset operation control other than the power accumulation progress bar, the progress bar may be displayed in the AR picture only in response to the target trigger operation (that is, hidden while no target trigger operation is performed), or it may be displayed in the AR picture at all times. When the target operation object is the power accumulation progress bar itself, the bar may be displayed at all times.
Updating the power accumulation progress value of the progress bar according to the trigger duration lets the user accurately track the trigger duration and power accumulation progress of the target trigger operation, and thus better grasp the throwing moment.
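A minimal sketch of the duration-to-distance mapping described above follows; the linear correspondence and the constants are illustrative assumptions, since the patent only requires that some preset correspondence exist.

```python
# A minimal sketch: trigger duration -> power accumulation progress
# value -> throwing distance. The linear mapping and constants are
# illustrative assumptions, not values given by the patent.
MAX_CHARGE_SECONDS = 2.0   # press duration at which the bar is full
MAX_THROW_DISTANCE = 5.0   # farthest throw, in scene units

def power_progress(trigger_duration: float) -> float:
    """Progress value in [0, 1]; the longer the press, the larger it is."""
    return min(trigger_duration / MAX_CHARGE_SECONDS, 1.0)

def throwing_distance(trigger_duration: float) -> float:
    """Preset correspondence between progress value and throwing distance."""
    return power_progress(trigger_duration) * MAX_THROW_DISTANCE
```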
In the AR picture effect diagram shown in Fig. 3, the power accumulation progress bar displays the progress value while the target trigger operation acts on it. The value increases as the trigger duration extends, and the throwing distance of the virtual throwing circle (the first virtual object) increases with it. Fig. 4a shows the throwing distance when the power accumulation progress value is small, and Fig. 4b when it is large: in Fig. 4a the virtual throwing circle rings a column in the first row (viewed from near to far) of virtual score columns (second virtual objects), while in Fig. 4b it rings one in the third row. A small progress value thus gives a short throwing distance, and a large value a long one. After the throwing result is displayed in the AR picture, a new virtual throwing circle to be thrown can continue to be displayed.
As described above, in a real-scene ring-toss game the result of a throw depends on the throwing direction as well as the throwing distance. To simulate this, in one embodiment, in response to a target trigger operation for the first virtual object, the throwing direction corresponding to the first virtual object may be determined according to the camera shooting angle corresponding to the AR picture at the trigger time of the operation.
That is, the throwing direction corresponding to the first virtual object may be determined by determining the camera shooting angle corresponding to the AR picture. In a specific implementation, the shooting angle can be determined from the camera's parameter information, and the throwing direction corresponding to the first virtual object is then determined from it.
In one approach, the throwing direction corresponding to the first virtual object may be coincident with a camera shooting angle corresponding to the AR picture. For example, the shooting angle of the camera is upward, and the throwing direction corresponding to the first virtual object is also upward; the camera shooting angle is leftward, and the throwing direction corresponding to the first virtual object is also leftward.
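As a sketch of this step, the throwing direction can be taken directly as the camera's forward vector at the trigger moment; representing the shooting angle as yaw and pitch is an illustrative assumption, not a representation fixed by the patent.

```python
# A minimal sketch: throwing direction coincident with the camera
# shooting angle, here assumed to be given as yaw/pitch in radians.
import math

def throwing_direction(yaw: float, pitch: float):
    """Unit vector aligned with the camera's current orientation."""
    return (math.cos(pitch) * math.sin(yaw),   # x (right)
            math.sin(pitch),                   # y (up)
            math.cos(pitch) * math.cos(yaw))   # z (forward)
```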
In order to improve ringing accuracy and the user's interaction experience, in one embodiment, a direction indicator may be displayed in the AR picture in response to the target trigger operation for the first virtual object. The direction indicator indicates the current throwing direction, and its pointing can be adjusted under the control of the target trigger operation.
Here, as described previously, the target trigger operation may include a target trigger operation that acts on a target operation area or a target operation object in the AR screen.
When the target trigger operation acts on a target operation area in the AR picture, the direction indicator may be presented at the trigger position within that area; alternatively, it may be displayed on the first virtual object, or at a preset position of the target operation area.
When the target trigger operation acts on a target operation object in the AR picture, the direction indicator may be presented on the first virtual object or on the preset operation control for controlling it; alternatively, it may be displayed at a preset position in the AR picture.
Illustratively, the direction indicator may be a direction indication arrow whose pointing corresponds to the current throwing direction. The AR picture example in Fig. 5 shows a direction control (a preset operation control); when the direction control is pressed, a direction indication arrow is shown pointing at a virtual score column in the first row (viewed from near to far) on the board.
In one approach, the triggering direction of the target triggering operation or the degree of pressing in the respective triggering direction may be varied. Therefore, by changing the trigger direction of the target trigger operation or the pressing degree in each trigger direction, the direction of the direction indicator can be adjusted, and the throwing direction of the first virtual object can be adjusted.
Finally, based on the determined throwing distance and throwing direction, a throwing result of the first virtual object to the at least one second virtual object after being thrown is determined.
In a specific implementation, different throwing distances and/or directions of the first virtual object may give different throwing results. For example, with the same throwing distance, a direction biased left may ring a second virtual object on the left, while a direction biased right may ring one on the right. With the same throwing direction, a short throwing distance may ring a nearer second virtual object along that direction, while a long distance may ring a farther one. Likewise, a leftward direction combined with a long distance may ring a far-away second virtual object on the left.
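Putting the two inputs together, a sketch of resolving the throwing result might project the landing point from the throwing distance and direction and pick the nearest score column within a ringing radius; the flat-ground model and the radius value are illustrative assumptions, not part of the patent.

```python
# A minimal sketch of determining the throwing result from throwing
# distance and direction; the ringing radius is an illustrative value.
import math

RING_RADIUS = 0.2  # landing point must be this close to ring a column

def throw_result(start, direction, distance, columns):
    """Return the ringed column's id, or None for an unselected result."""
    land_x = start[0] + direction[0] * distance
    land_z = start[2] + direction[2] * distance
    best_id, best_d = None, RING_RADIUS
    for col_id, (cx, cz) in columns.items():
        d = math.hypot(cx - land_x, cz - land_z)
        if d < best_d:
            best_id, best_d = col_id, d
    return best_id
```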
S104: and displaying the throwing result in the AR picture.
Here, the reward matched with the at least one second virtual object may be virtual; for example, it may be a score, a virtual item, or identification information of a real object.
In one embodiment, the at least one second virtual object in the generated AR picture may include a plurality of second virtual objects. The plurality of second virtual objects may be the same or different.
When the at least one second virtual object in the generated AR picture includes a plurality of second virtual objects, the selected target second virtual object among them may be displayed in the AR picture, together with the reward information corresponding to that target second virtual object.
The target second virtual object may be a second virtual object selected after one throw, or may be a second virtual object selected after a plurality of throws. The target second virtual object may include at least one second virtual object.
The reward information corresponding to the target second virtual object may be the object itself, for example a preferential resource (such as a coupon) in electronic form. It may also be a physical item corresponding to the target second virtual object, in which case the target second virtual object may be identification information of that item, such as its image or name, or exchange-value information corresponding to it, such as a point value.
In the case that the target second virtual object includes a plurality of second virtual objects, the reward information corresponding to the target second virtual object may be displayed each time one target second virtual object is selected, or the reward information of each of the plurality of target second virtual objects may be displayed after the plurality of target second virtual objects are selected.
In a real-scene ring-toss game there is typically a throwing line, and participants must stand behind it to play. To simulate this, a throwing boundary line may be set between the first virtual object and the at least one second virtual object, for example between the first virtual object and the closest second virtual object. The distance between the first virtual object and the at least one second virtual object may then be taken as the distance between the throwing boundary line and the at least one second virtual object, and a distance threshold between them may be set. In one approach, the magnitude of the threshold is tied to the game difficulty: the harder the game, the larger the threshold. The first virtual object must stay outside the throwing boundary line to be thrown normally, that is, its distance to the at least one second virtual object cannot be smaller than the set threshold. The throwing boundary line may be visible or invisible.
In the embodiments of the present disclosure, before a ring-toss activity is started on the terminal device, initial position information of the first virtual object to be thrown and of the at least one second virtual object to be selected, each blended into the real scene, may be determined based on the real scene picture; the initial AR picture is then displayed based on that initial position information.
Based on the initial position information of the first virtual object and of the at least one second virtual object, an initial relative distance between them may be determined. In one approach, the initial position of the first virtual object in the real scene may be determined from the position of the throwing boundary line, while the initial positions of the at least one second virtual object are preset. The initial relative distance between the first virtual object and the at least one second virtual object may then be the distance threshold between the throwing boundary line and the at least one second virtual object; that is, in the initial AR picture the first virtual object sits outside the throwing boundary line.
In order to approximate more closely the real experience of ring toss in a real scene, in one embodiment, the changed position information of the first virtual object in the real scene may be determined according to the change information of the camera shooting pose.
In one mode, the change information of the shooting pose of the camera can be acquired through a gyroscope and other devices in the camera. In a specific implementation, the position of the first virtual object in the real scene can change along with the change of the shooting pose of the camera, and the position of the second virtual object in the real scene is not changed.
Finally, the changed AR picture is displayed based on the changed position information of the first virtual object in the real scene and the initial position information of the at least one second virtual object in the real scene.
In this way, displaying the changed AR picture from the changed position of the first virtual object and the initial positions of the at least one second virtual object presents the effect of the first virtual object moving with the user through the real scene, more closely approximating the real experience of ring toss.
Since the distance between the first virtual object and the at least one second virtual object may change in the changed AR picture, in one embodiment, before the throwing distance and direction corresponding to the first virtual object are determined in response to the target trigger operation, it may be judged whether that distance is smaller than the set distance threshold; if so, the user may be prompted to move away from the at least one second virtual object. A sketch combining this check with the pose-following update appears below.
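The sketch assumes the throwing circle floats at a fixed offset in front of the camera; the offset scheme and all names are illustrative assumptions, not the patent's implementation.

```python
# A minimal sketch: the ring follows the camera, and a throw is only
# allowed while the ring stays outside the set distance threshold
# (otherwise the user is prompted to move away). Names are illustrative.
import numpy as np

def update_ring_position(camera_pos: np.ndarray,
                         camera_forward: np.ndarray,
                         offset: float = 0.5) -> np.ndarray:
    """Keep the virtual throwing circle floating in front of the camera."""
    return camera_pos + camera_forward * offset

def may_throw(ring_pos: np.ndarray, column_positions, threshold: float) -> bool:
    """False means: prompt the user to move away before throwing."""
    nearest = min(np.linalg.norm(ring_pos - c) for c in column_positions)
    return nearest >= threshold
```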
The augmented-reality-based interaction method provided by the embodiments of the present disclosure can be applied to single-player entertainment scenes as well as team, i.e. multi-player, entertainment scenes. The multi-player process is similar to the single-player one, so the two are not described separately in this disclosure.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide an augmented-reality-based interaction apparatus corresponding to the augmented-reality-based interaction method above. Since the principle by which the apparatus solves the problem is similar to that of the method, its implementation may refer to the implementation of the method, and repeated details are omitted.
Referring to Fig. 6, an architecture diagram of an augmented-reality-based interaction apparatus provided by an embodiment of the present disclosure is shown. The apparatus includes: an obtaining module 601, a first determining module 602, a second determining module 603, and a first displaying module 604, wherein:
an obtaining module 601, configured to obtain a real scene picture;
a first determining module 602, configured to determine and display an Augmented Reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected, based on the real scene picture;
a second determining module 603, configured to determine a throwing result of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object;
a first display module 604, configured to display the throwing result in the AR picture.
In an optional implementation manner, the second determining module 603 is specifically configured to:
determining a throwing distance and a throwing direction corresponding to the first virtual object in response to a target trigger operation aiming at the first virtual object;
determining a throwing result of the first virtual object for the second virtual object based on the throwing distance and the throwing direction.
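As an illustrative sketch (assuming, for the sake of the example, that the throwing result is modeled as projecting the first virtual object from its current position along the throwing direction by the throwing distance, which the disclosure does not mandate), the hit test could look as follows; the hit radius and all names are hypothetical:

    import numpy as np

    HIT_RADIUS = 0.25  # hypothetical: max landing-point distance that counts as a hit

    def throwing_result(first_object_pos, throw_direction, throw_distance,
                        second_object_positions):
        """Return the index of the second virtual object that was hit, or None."""
        direction = throw_direction / np.linalg.norm(throw_direction)
        landing_point = first_object_pos + direction * throw_distance
        for idx, target in enumerate(second_object_positions):
            if np.linalg.norm(landing_point - target) <= HIT_RADIUS:
                return idx
        return None  # a miss

    targets = [np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.0, 3.0])]
    hit = throwing_result(np.zeros(3), np.array([0.0, 0.0, 1.0]), 2.0, targets)
    assert hit == 0  # thrown straight at the first target over 2 m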
In an optional implementation manner, the second determining module 603 is specifically configured to:
responding to a target trigger operation aiming at the first virtual object, and determining a throwing distance corresponding to the first virtual object according to the trigger duration of the target trigger operation.
In an optional implementation manner, the second determining module 603 is specifically configured to:
responding to the target trigger operation acted on a target operation area or a target operation object in the AR picture, and determining a throwing distance corresponding to the first virtual object according to the trigger duration of the target trigger operation;
the target operation object comprises the first virtual object and/or a preset operation control used for controlling the first virtual object.
In an alternative embodiment, the apparatus further comprises:
an updating module, configured to update, in response to the target trigger operation, the power accumulation progress value of a power accumulation progress bar displayed in the AR picture according to the trigger duration of the target trigger operation, wherein the longer the trigger duration, the larger the power accumulation progress value, and there is a preset correspondence between the power accumulation progress value and the throwing distance.
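As an illustrative sketch, one simple preset correspondence maps the trigger duration linearly to a progress value in [0, 1] and scales the throwing distance by it; the constants and names are hypothetical:

    MAX_CHARGE_SECONDS = 2.0   # hypothetical: duration for a full progress bar
    MAX_THROW_DISTANCE = 5.0   # hypothetical: maximum throwing distance, meters

    def power_progress(trigger_duration):
        """Map press-and-hold duration to a progress value in [0, 1]."""
        return min(trigger_duration / MAX_CHARGE_SECONDS, 1.0)

    def throwing_distance(trigger_duration):
        """Preset correspondence: distance grows linearly with the progress value."""
        return power_progress(trigger_duration) * MAX_THROW_DISTANCE

    assert throwing_distance(1.0) == 2.5   # half charge -> half distance
    assert throwing_distance(9.0) == 5.0   # progress saturates at full charge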
In an optional implementation manner, the second determining module 603 is specifically configured to:
responding to a target trigger operation aiming at the first virtual object, and determining a throwing direction corresponding to the first virtual object according to a camera shooting angle corresponding to the AR picture at a trigger time corresponding to the target trigger operation.
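An illustrative sketch of deriving the throwing direction from the camera orientation at the trigger time is shown below; the forward-axis convention (negative Z in camera coordinates, as in OpenGL) and the names are assumptions, not taken from the disclosure:

    import numpy as np

    def throwing_direction(world_from_camera_rotation):
        """Take the camera's forward axis at the trigger time as the throw direction.

        world_from_camera_rotation: 3x3 rotation matrix describing the camera
        orientation corresponding to the AR picture at the trigger moment.
        """
        forward_in_camera = np.array([0.0, 0.0, -1.0])  # assumed camera forward
        return world_from_camera_rotation @ forward_in_camera

    direction = throwing_direction(np.eye(3))  # camera facing straight ahead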
In an alternative embodiment, the apparatus further comprises:
a second display module, configured to display a direction indicator in the AR picture in response to the target trigger operation, wherein the direction indicator indicates the current throwing direction, and its pointing direction can be adjusted under the control of the target trigger operation.
In an optional implementation manner, the first determining module 602 is specifically configured to:
determining initial position information of a first virtual object to be thrown and at least one second virtual object to be selected, which are respectively blended into a real scene, based on the real scene picture; displaying an initial AR picture based on initial position information of the first virtual object and the at least one second virtual object in the real scene;
determining post-change position information of the first virtual object in the real scene in response to a change in camera shooting pose; wherein the position of the first virtual object in the real scene changes along with the change of the camera shooting pose, and the position of the at least one second virtual object in the real scene does not change;
and displaying the changed AR picture based on the changed position information of the first virtual object in the real scene and the initial position information of the at least one second virtual object in the real scene.
In an optional embodiment, before the throwing distance and the throwing direction corresponding to the first virtual object are determined in response to the target trigger operation for the first virtual object, the apparatus further includes:
a judging module, configured to judge whether the distance between the first virtual object and the at least one second virtual object is smaller than a set distance threshold; and
a prompting module, configured to prompt the user to move away from the at least one second virtual object if the distance is smaller than the set distance threshold.
In an optional implementation manner, the first determining module 602 is specifically configured to:
judging, based on the real scene picture, whether the real scene picture matches a target scene type, and if so, determining and displaying an Augmented Reality (AR) picture containing the first virtual object and the at least one second virtual object based on the real scene picture.
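As an illustrative sketch, the scene-type match can be treated as a gate in front of AR picture construction; the classifier is stubbed out, and the target scene types and all names are hypothetical:

    TARGET_SCENE_TYPES = {"square", "playground"}  # hypothetical target scene types

    def classify_scene(real_scene_picture):
        """Placeholder for a scene classifier (e.g. a CNN run on the frame)."""
        return "square"  # a real implementation would infer this from the pixels

    def should_build_ar_picture(real_scene_picture):
        """Build and display the AR picture only for matching scene types."""
        return classify_scene(real_scene_picture) in TARGET_SCENE_TYPES

    frame = None  # stands in for a camera image in this stub
    if should_build_ar_picture(frame):
        print("Scene matched - placing the virtual objects.")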
In an alternative embodiment, the at least one second virtual object comprises a plurality of second virtual objects;
the first display module 604 is specifically configured to: and displaying the selected target second virtual object in the plurality of second virtual objects in the AR picture, and displaying the reward information corresponding to the target second virtual object.
The description of the processing flow of each module in the apparatus and the interaction flow between the modules may refer to the relevant description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 7, a schematic structural diagram of a computer device 700 provided by an embodiment of the present disclosure is shown; the computer device 700 includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes an internal memory 7021 and an external memory 7022. The internal memory 7021 temporarily stores operation data for the processor 701 and data exchanged with the external memory 7022, such as a hard disk, and the processor 701 exchanges data with the external memory 7022 through the internal memory 7021. When the computer device 700 runs, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the following instructions:
acquiring a real scene picture;
determining and displaying an Augmented Reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected based on the real scene picture;
determining a throwing result of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object;
and displaying the throwing result in the AR picture.
An embodiment of the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the steps of the augmented reality-based interaction method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code, where the instructions included in the program code may be used to execute the steps of the augmented reality-based interaction method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method embodiments and is not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and other division manners are possible in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope of the present disclosure, still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (14)

1. An augmented reality-based interaction method, comprising:
acquiring a real scene picture;
determining and displaying an Augmented Reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected based on the real scene picture;
determining a throwing result of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object;
and displaying the throwing result in the AR picture.
2. The method of claim 1, wherein said determining a throwing outcome of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object comprises:
determining a throwing distance and a throwing direction corresponding to the first virtual object in response to a target trigger operation aiming at the first virtual object;
determining a throwing result of the first virtual object for the second virtual object based on the throwing distance and the throwing direction.
3. The method of claim 2, wherein the determining a throwing distance corresponding to the first virtual object in response to the target trigger operation for the first virtual object comprises:
responding to a target trigger operation aiming at the first virtual object, and determining a throwing distance corresponding to the first virtual object according to the trigger duration of the target trigger operation.
4. The method according to claim 3, wherein the determining a throwing distance corresponding to the first virtual object according to a trigger duration of the target trigger operation in response to the target trigger operation for the first virtual object comprises:
responding to the target trigger operation acted on a target operation area or a target operation object in the AR picture, and determining a throwing distance corresponding to the first virtual object according to the trigger duration of the target trigger operation;
the target operation object comprises the first virtual object and/or a preset operation control used for controlling the first virtual object.
5. The method according to claim 3 or 4, characterized in that the method further comprises:
responding to the target trigger operation, and updating the power accumulation progress value of the power accumulation progress bar displayed in the AR picture according to the trigger duration of the target trigger operation; wherein the longer the trigger duration, the larger the power accumulation progress value; and there is a preset correspondence between the power accumulation progress value and the throwing distance.
6. The method of claim 2, wherein the determining a throwing direction to which the first virtual object corresponds in response to the target trigger operation for the first virtual object comprises:
responding to a target trigger operation aiming at the first virtual object, and determining a throwing direction corresponding to the first virtual object according to a camera shooting angle corresponding to the AR picture at a trigger time corresponding to the target trigger operation.
7. The method of claim 2, further comprising:
displaying a direction indicator in the AR picture in response to the target trigger operation; wherein the direction indicator indicates the current throwing direction, and the pointing direction of the direction indicator can be adjusted under the control of the target trigger operation.
8. The method according to claim 1, wherein the determining and presenting, based on the real scene picture, an Augmented Reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected comprises:
determining initial position information of a first virtual object to be thrown and at least one second virtual object to be selected which are respectively blended into a real scene based on the real scene picture; displaying an initial AR picture based on initial position information of the first virtual object and the at least one second virtual object in the real scene;
determining post-change position information of the first virtual object in the real scene in response to a change in camera shooting pose; wherein the position of the first virtual object in the real scene changes along with the change of the camera shooting pose, and the position of the at least one second virtual object in the real scene does not change;
and displaying the changed AR picture based on the changed position information of the first virtual object in the real scene and the initial position information of the at least one second virtual object in the real scene.
9. The method of claim 2, wherein prior to determining a throw distance and a throw direction corresponding to the first virtual object in response to a target trigger operation for the first virtual object, the method further comprises:
judging whether the distance between the first virtual object and the at least one second virtual object is smaller than a set distance threshold value or not;
and if the distance is smaller than the set distance threshold, prompting the user to move away from the at least one second virtual object.
10. The method according to claim 1, wherein the determining and presenting, based on the real scene picture, an Augmented Reality (AR) picture containing a first virtual object to be thrown and at least one second virtual object to be selected comprises:
and judging whether the real scene picture is matched with a target scene type or not based on the real scene picture, and if so, determining and displaying an Augmented Reality (AR) picture containing the first virtual object and the at least one second virtual object based on the real scene picture.
11. The method of claim 1, wherein the at least one second virtual object comprises a plurality of second virtual objects;
the showing and the throwing result in the AR picture comprises the following steps:
and displaying the selected target second virtual object in the plurality of second virtual objects in the AR picture, and displaying the reward information corresponding to the target second virtual object.
12. An augmented reality-based interaction device, comprising:
the acquisition module is used for acquiring a real scene picture;
a first determining module, configured to determine and display an Augmented Reality (AR) picture including a first virtual object to be thrown and at least one second virtual object to be selected, based on the real scene picture;
a second determination module to determine a throwing result of the first virtual object for the second virtual object in response to a target triggering operation for the first virtual object;
a first display module for displaying the throwing result in the AR picture.
13. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is run, the machine-readable instructions when executed by the processor performing the steps of the augmented reality based interaction method of any one of claims 1 to 11.
14. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the augmented reality based interaction method according to any one of claims 1 to 11.
CN202210423445.5A 2022-04-21 2022-04-21 Interaction method and device based on augmented reality Pending CN114602167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210423445.5A CN114602167A (en) 2022-04-21 2022-04-21 Interaction method and device based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210423445.5A CN114602167A (en) 2022-04-21 2022-04-21 Interaction method and device based on augmented reality

Publications (1)

Publication Number Publication Date
CN114602167A true CN114602167A (en) 2022-06-10

Family

ID=81870121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210423445.5A Pending CN114602167A (en) 2022-04-21 2022-04-21 Interaction method and device based on augmented reality

Country Status (1)

Country Link
CN (1) CN114602167A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117085315A (en) * 2023-07-25 2023-11-21 北京维艾狄尔信息科技有限公司 AR interactive game method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination