CN113117327B - Augmented reality interaction control method and device, electronic equipment and storage medium

Info

Publication number: CN113117327B
Authority: CN (China)
Prior art keywords: virtual object, nodes, skill, dimensional space, space structure
Legal status: Active
Application number: CN202110390746.8A
Other languages: Chinese (zh)
Other versions: CN113117327A
Inventor: 邓宇星 (Deng Yuxing)
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202110390746.8A
Publication of CN113117327A (application)
Publication of CN113117327B (grant)

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 - Controlling the output signals based on the game progress
    • A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/55 - Controlling game characters or game objects based on the game progress
    • A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • A63F 2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 - Methods for processing data by generating or executing the game program
    • A63F 2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present invention provides an augmented reality interaction control method and device, an electronic device, and a storage medium. When a terminal shoots a real scene, a base plane is formed according to a real object in the real scene, and the base plane is spatially mapped to form a three-dimensional spatial structure comprising a plurality of position nodes, at least one of which lies on a different plane from the others. A virtual object is displayed on one of the position nodes and moves among the position nodes of the structure. Skill release on the virtual object is then controlled according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located. The embodiment of the present invention makes full use of the space in the real scene to control skill release on the virtual object and improves the player's game experience.

Description

Augmented reality interaction control method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present invention relate to the field of computer technology, and in particular to an augmented reality interaction control method and device, an electronic device, and a storage medium.
Background
AR (Augmented Reality) is a technology that fuses a real scene with a virtual scene: virtual objects created by a computer program are fused in real time into images of the real scene captured by a camera, so that the combination of, and interaction between, the real scene and the virtual scene is realized on the screen of a terminal.
At present, augmented reality technology can be applied to games: virtual objects in a game, such as monsters or battlefields, are substituted into a real scene, and an augmented reality game can control the virtual objects in the game world through the actions of the player in the real scene, which greatly improves the player's involvement in the game.
However, the virtual object is displayed at one fixed position in the real scene shot by the terminal, so the player can only interact with the virtual object at that position. The space available to the player is therefore limited, and the game experience suffers.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention provide an augmented reality interaction control method and device, an electronic device, and a storage medium that overcome, or at least partially solve, the foregoing problems.
In order to solve the above problems, an embodiment of the present invention discloses an augmented reality interaction control method, which includes:
when a terminal shoots a real scene, forming a base plane according to a real object in the real scene, and spatially mapping the base plane to form a three-dimensional spatial structure, wherein the three-dimensional spatial structure comprises a plurality of position nodes, and at least one position node lies on a different plane from the other position nodes;
displaying a virtual object on one of the position nodes in the three-dimensional spatial structure, wherein the virtual object moves among the plurality of position nodes of the three-dimensional spatial structure;
and controlling skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located.
Optionally, after displaying the virtual object on one of the position nodes in the three-dimensional spatial structure, the method further includes:
adjusting the shooting view direction in response to an adjustment operation on the shooting view direction.
Optionally, controlling skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located includes:
when the shooting view direction faces the position node where the virtual object is located, controlling skill release on the virtual object.
Optionally, an auxiliary aiming control corresponding to the virtual object is displayed on the terminal and is associated with the shooting view direction, and adjusting the shooting view direction in response to the adjustment operation includes:
moving the auxiliary aiming control in response to the adjustment operation on the shooting view direction, wherein adjusting the shooting view direction causes the auxiliary aiming control to move along with it;
and controlling skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located includes:
controlling skill release on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located.
Optionally, the skills include a first skill and a second skill, and controlling skill release on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located includes:
if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is less than or equal to a preset distance threshold, releasing the first skill;
and if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is greater than the preset distance threshold, releasing the second skill.
Optionally, each skill has at least one corresponding skill attribute, and controlling skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located includes:
determining a target skill attribute from the at least one skill attribute according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located;
and releasing the skill on the virtual object according to the target skill attribute.
Optionally, forming a base plane according to the real object in the real scene and spatially mapping the base plane to form a three-dimensional spatial structure includes:
projecting three-dimensional structured light into the real scene, and collecting the reflection signals returned after the structured light strikes the real object, to obtain position information and depth information of the real object;
determining corresponding nodes in the real scene according to the position information and the depth information;
connecting the nodes to form a base plane, and spatially mapping the nodes on the base plane to form a plurality of replication nodes;
and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form a three-dimensional spatial structure.
Optionally, forming a base plane according to the real object in the real scene and spatially mapping the base plane to form a three-dimensional spatial structure includes:
identifying reference objects from among the real objects of the real scene;
connecting the reference objects to form a base plane, and spatially mapping the nodes on the base plane to form a plurality of replication nodes;
and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form a three-dimensional spatial structure.
Optionally, spatially mapping the nodes on the base plane to form a plurality of replication nodes includes:
generating, at each node on the base plane, a vertical axis perpendicular to the base plane;
and copying the node along the vertical axis according to a preset distance parameter and a preset node count to obtain the replication nodes.
Optionally, before displaying the virtual object on one of the position nodes in the three-dimensional spatial structure, the method further includes:
when the terminal enters a designated area and the three-dimensional spatial structure has been formed successfully, receiving model parameters of the virtual object fed back by a server; wherein the model parameters are used to display the virtual object on one of the position nodes of the three-dimensional spatial structure.
An embodiment of the present invention also discloses an augmented reality interaction control device, which includes:
a three-dimensional spatial structure forming module, configured to form a base plane according to a real object in a real scene when the terminal shoots the real scene, and to spatially map the base plane to form a three-dimensional spatial structure, wherein the three-dimensional spatial structure comprises a plurality of position nodes, and at least one position node lies on a different plane from the other position nodes;
a virtual object display module, configured to display a virtual object on one of the position nodes in the three-dimensional spatial structure, wherein the virtual object moves among the plurality of position nodes of the three-dimensional spatial structure;
and a skill release module, configured to control skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located.
An embodiment of the present invention discloses an electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the augmented reality interaction control method.
An embodiment of the present invention discloses a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the augmented reality interaction control method.
Embodiments of the present invention have the following advantages:
In the embodiments of the present invention, when a terminal shoots a real scene, a base plane is formed according to a real object in the real scene and spatially mapped to form a three-dimensional spatial structure; a virtual object is displayed on one of the position nodes in the structure; and skill release on the virtual object is then controlled according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located. Because the three-dimensional spatial structure is built from a real object in the real scene, and the player controls skill release according to that positional relationship, the interaction between the player and the virtual object makes full use of the space in the real scene. Moreover, since the structure comprises a plurality of position nodes, the player can interact with the virtual object at many position nodes rather than point-to-point at one fixed position. Further, because at least one of those position nodes lies on a different plane from the others, the interaction is not confined to a single plane. Together these improve the player's game experience.
Drawings
FIG. 1 is a flow chart of the steps of an embodiment of an augmented reality interaction control method of the present invention;
FIG. 2 is a schematic diagram of a terminal and a virtual object in a default state according to the present invention;
FIG. 3 is a schematic diagram of a terminal and a virtual object in a mobile state according to the present invention;
FIG. 4 is a schematic diagram of a terminal and a virtual object in an interactive state according to the present invention;
FIG. 5 is a schematic illustration of a three-dimensional structure formed in accordance with the present invention;
FIG. 6 is a block diagram illustrating an embodiment of an augmented reality interaction control device according to the present invention.
Detailed Description
In order that the above-recited objects, features, and advantages of the present invention become more readily apparent, a more particular description of the invention is given below with reference to the appended drawings and the following detailed description.
At present, in schemes that use augmented reality technology in games, the current real scene is captured through a camera and a virtual object is generated at a corresponding position, so that interaction with the virtual object in the real scene is realized. Taking a game of capturing small animals as an example, GPS (Global Positioning System) technology is used in advance to set the appearance positions of the small animals (virtual objects) in a city. When a player reaches the appearance position corresponding to a small animal in the real scene, the virtual animal is displayed in the real scene shot by the terminal. The player can then interact with it, for example by capturing: the player taps the terminal to launch a capturing prop (for example, a capture net), and whether the animal is successfully captured is decided by a set probability.
However, in such schemes the virtual object, such as the small animal, can only be displayed at one appearance position in the real scene shot by the terminal, so the player can only interact with the virtual object at that position; the interaction space is limited and the game experience suffers.
These schemes therefore have at least two defects. The first is that the feedback of the virtual object is fixed, and one-to-many, randomized interaction effects cannot be achieved. The second is that too little of the real scene's space is used: after the player reaches the appearance position, the virtual object is generated there, and the player can only perform point-to-point operations with it at that position. The space of the real scene is not fully utilized, the interaction space with the virtual object is limited, the ways of interacting with the virtual object are too few, and the game playing methods are constrained as a result.
In view of these problems, embodiments of the present invention provide an augmented reality interaction control method that enriches the player's interaction with the virtual object in the real scene: on the one hand it increases the randomness of the interaction between the virtual object and the player, and on the other hand it makes full use of the space of the real scene to display the virtual object.
The augmented reality interaction control method in an embodiment of the present invention can run on a terminal device or on a server. The terminal device may be a local terminal device. When the method runs on a server, it can be implemented and executed based on a cloud interaction system, which comprises the server and a client device.
In an alternative embodiment, various cloud applications may run on the cloud interaction system, for example cloud games. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and execution of the augmented reality interaction control method are completed on the cloud game server, and the client device is only used to receive and send data and to present the game picture. For example, the client device may be a display device with data transmission capability close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer; the terminal device that performs the method, however, is the cloud game server. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as the game pictures, and returns them to the client device over the network, where they are finally decoded and the game pictures output.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used to present the game screen. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in various ways: for example, it may be rendered on a display screen of the terminal, or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game visuals, and a processor for running the game, generating the graphical user interface, and controlling its display on the display screen.
Referring to FIG. 1, a flow chart of the steps of an embodiment of an augmented reality interaction control method of the present invention is shown. The embodiment of the present invention may specifically include the following steps:
Step 101: when a terminal shoots a real scene, form a base plane according to a real object in the real scene, and spatially map the base plane to form a three-dimensional spatial structure.
The terminal is a device with a camera function, such as a mobile terminal with a camera, an unmanned aerial vehicle, a remote-controlled car, a wearable device, or an augmented reality experience hall. The real scene may be outdoor or indoor, and a real object is an object in the real scene, such as a tree, a hill, a valley, a cup, a table, or a chair.
In the embodiment of the present invention, when the player shoots the real scene through the terminal, a base plane is formed according to a real object in the real scene, and spatial mapping is then performed on the basis of the base plane, so that a three-dimensional spatial structure is formed.
Step 102: display a virtual object on one of the position nodes in the three-dimensional spatial structure, wherein the virtual object moves among the plurality of position nodes of the structure.
The virtual object may be any of various virtual objects implemented with augmented reality technology, such as a baseball, billiards, an animal, a monster, a shelter, a cave, flames, magic, and so forth.
In the embodiment of the present invention, after the three-dimensional spatial structure is formed, a virtual object can be displayed on one of its position nodes. For example, when the virtual object is first displayed, it may be placed on the position node at the center of the structure.
In the embodiment of the present invention, the three-dimensional spatial structure includes a plurality of position nodes, and at least one of them lies on a different plane from the other position nodes.
Specifically, the three-dimensional spatial structure is used to display the virtual object on its position nodes, and since at least one position node lies on a different plane from the others, the player can interact with the virtual object on at least two different planes (for example, planes at different heights or different distances).
In the embodiment of the present invention, the virtual object can move among the position nodes of the three-dimensional spatial structure, that is, from one position node to another, either as a step-like position jump or as continuous position movement. Specifically, a step-like position jump means the virtual object jumps from one position node to a node that is not adjacent to it, e.g., from the rightmost node to the leftmost; continuous position movement means the virtual object moves from one position node to an adjacent node, e.g., from the first node on the left to the second.
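The following Python sketch (illustrative only, not part of the patent disclosure; the class and all names are hypothetical) shows one way the two movement modes over the position nodes could be modeled:

```python
import random

class VirtualObject:
    """Moves among the position nodes of the three-dimensional spatial structure."""

    def __init__(self, nodes, adjacency, start=0):
        self.nodes = nodes          # list of (x, y, z) position-node coordinates
        self.adjacency = adjacency  # adjacency[i]: indices of nodes adjacent to node i
        self.current = start        # index of the currently occupied position node

    def step_jump(self):
        """Step-like position jump: hop to a node NOT adjacent to the current one."""
        candidates = [i for i in range(len(self.nodes))
                      if i != self.current and i not in self.adjacency[self.current]]
        if candidates:
            self.current = random.choice(candidates)

    def continuous_move(self):
        """Continuous movement: step to a node adjacent to the current one."""
        if self.adjacency[self.current]:
            self.current = random.choice(self.adjacency[self.current])
```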
Step 103: control skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located.
As specific examples, skills may include capturing, shooting, stunning, slowing, freezing, burning, and so on, and may also include virtual attacks using virtual weapons. In the embodiment of the present invention, skill release on the virtual object can be controlled according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located.
Specifically, after the player reaches the designated position, the virtual object may be displayed on the terminal, and the skill released on the virtual object is controlled according to that positional relationship, for example, whether to release a skill on the virtual object, which skill to release, and so on. For example, given a capture skill and a shooting skill, when the position node where the virtual object is located is far away in the shooting view direction of the terminal, the shooting skill may be released first, and the capture skill released after the player gets closer.
In the above augmented reality interaction control method, when a terminal shoots a real scene, a base plane is formed according to a real object in the real scene and spatially mapped to form a three-dimensional spatial structure; a virtual object is displayed on one of the position nodes in the structure; and skill release on the virtual object is controlled according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located. Because the three-dimensional spatial structure is built from a real object in the real scene, and the player controls skill release according to that positional relationship, the interaction between the player and the virtual object makes full use of the space in the real scene. Moreover, since the structure comprises a plurality of position nodes, the player can interact with the virtual object at many position nodes rather than point-to-point at one fixed position. Further, because at least one of those position nodes lies on a different plane from the others, the interaction is not confined to a single plane. Together these improve the player's game experience.
In an exemplary embodiment of the present invention, after displaying the virtual object on one of the position nodes in the three-dimensional spatial structure in step 102, the method further includes:
adjusting the shooting view direction in response to an adjustment operation on the shooting view direction.
In a specific implementation, the player can move the terminal, for example to the left or right, or touch the terminal, to perform an adjustment operation on the shooting view direction. The shooting view direction is adjusted in response, so the view of the virtual object displayed in the real scene shot on the terminal changes, allowing the player to observe the virtual object and control skill release on it.
In an exemplary embodiment of the present invention, controlling skill release on the virtual object in step 103 according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located includes:
when the shooting view direction faces the position node where the virtual object is located, controlling skill release on the virtual object.
In the embodiment of the present invention, when the shooting view direction faces the position node where the virtual object is located, the player can be regarded as aiming at the virtual object, and skill release on the virtual object can then be controlled, realizing interaction with it.
Specifically, referring to FIG. 2, the bunny in the figure is a virtual object, and the position node it occupies in the three-dimensional spatial structure is the identification hot zone. Referring to FIG. 3, the bunny can move to any position node in the structure, either by step-like position jumps or by continuous movement; after the bunny moves, the identification hot zone changes to the node it moved to. Referring to FIG. 4, when the shooting view direction faces the position node where the bunny is located, a skill release control labeled "Capture" can be generated on the terminal interface; the player taps the control to release the capture skill on the bunny. Of course, the skill release control may also be displayed on the terminal interface at all times, or only when the shooting view direction faces the position node where the virtual object is located; this is set according to the needs of the game, and the embodiment of the present invention is not limited in this respect.
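As a rough illustration of the facing test, the sketch below checks the angle between the camera's forward vector and the vector toward the position node; the angular threshold and all names are hypothetical, and this is not the patent's implementation:

```python
import math

def is_facing(camera_pos, camera_forward, node_pos, threshold_deg=10.0):
    """True when the shooting view direction points at the identification hot
    zone, i.e. the angle between the camera's forward vector and the vector
    to the position node is within a small threshold."""
    to_node = [n - c for n, c in zip(node_pos, camera_pos)]
    d1 = math.sqrt(sum(v * v for v in to_node))
    d2 = math.sqrt(sum(v * v for v in camera_forward))
    if d1 == 0.0 or d2 == 0.0:
        return False
    cos_a = sum(f * t for f, t in zip(camera_forward, to_node)) / (d1 * d2)
    cos_a = max(-1.0, min(1.0, cos_a))  # clamp before acos to absorb rounding
    return math.degrees(math.acos(cos_a)) <= threshold_deg

# show the "Capture" skill release control only while aimed at the bunny's node
if is_facing((0, 0, 0), (0, 0, 1), (0.1, 0.0, 2.0)):
    print("show skill release control: Capture")
```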
In an exemplary embodiment of the present invention, an auxiliary aiming control corresponding to the virtual object is displayed on the terminal and is associated with the shooting view direction, and adjusting the shooting view direction in response to the adjustment operation includes:
moving the auxiliary aiming control in response to the adjustment operation on the shooting view direction, wherein adjusting the shooting view direction causes the auxiliary aiming control to move along with it.
The auxiliary aiming control is an aiming aid, such as a sight (crosshair), usually displayed at the center of the terminal interface, that helps the player adjust the shooting view direction toward the virtual object. When the player performs a movement adjustment on the terminal, the auxiliary aiming control is moved in response to the adjustment operation, and the shooting view direction is adjusted in association with it.
Step 103, controlling skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located, then includes:
controlling skill release on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located.
As described above, the shooting view direction moves in association with the auxiliary aiming control, so skill release on the virtual object can be controlled by the relative position distance between the auxiliary aiming control and the position node where the virtual object is located. For example, when the auxiliary aiming control points toward that node (shown on the terminal interface as the sight approaching or overlapping the virtual object), the virtual object can be regarded as aimed at, and skill release on it can be controlled, realizing the interaction.
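A minimal sketch of such an alignment test in screen space follows; the names and the tolerance value are hypothetical, not taken from the patent:

```python
def is_aligned(reticle_xy, node_screen_xy, radius_px=48):
    """The virtual object counts as aimed at when the auxiliary aiming control
    (the sight at the screen center) is close to or overlapping the screen
    projection of its position node; radius_px is a hypothetical tolerance."""
    dx = node_screen_xy[0] - reticle_xy[0]
    dy = node_screen_xy[1] - reticle_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius_px
```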
In an exemplary embodiment of the present invention, the skills include a first skill and a second skill, and controlling skill release on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located includes:
if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is less than or equal to a preset distance threshold, releasing the first skill;
and if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is greater than the preset distance threshold, releasing the second skill.
In the embodiment of the present invention, one or more skills may be released on the virtual object.
As a specific example, the first skill may have a smaller attack range but stronger damage, and is therefore suited to release against a virtual object at short range, while the second skill may have a larger attack range but weaker damage, and is therefore suited to release against a virtual object at long range. Specifically, the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is detected in real time. If it is less than or equal to the preset distance threshold, the auxiliary aiming control and the virtual object are close, and the first skill can be used; if it is greater than the threshold, they are far apart, and the second skill can be used.
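In code, the threshold rule could look like the following sketch; the threshold value and skill names are hypothetical:

```python
PRESET_DISTANCE_THRESHOLD = 2.0  # hypothetical threshold, in scene units

def select_skill(relative_distance):
    """Pick the skill by the relative position distance between the auxiliary
    aiming control and the virtual object's position node."""
    if relative_distance <= PRESET_DISTANCE_THRESHOLD:
        return "first skill"   # smaller attack range, stronger damage (close range)
    return "second skill"      # larger attack range, weaker damage (long range)
```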
Of course, besides controlling the released skill by distance, it may be controlled in other ways, for example by whether the virtual object is aimed at: the first skill may be one released when the virtual object is aimed at, such as shooting or capturing, and the second skill one released when it is not, such as an attraction, burning, or stun skill; the embodiment of the present invention is not limited in this respect.
In an exemplary embodiment of the present invention, each skill has at least one corresponding skill attribute, and controlling skill release on the virtual object in step 103 according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located includes:
determining a target skill attribute from the at least one skill attribute according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located;
and releasing the skill on the virtual object according to the target skill attribute.
Skill attributes may include, but are not limited to, critical-hit probability, capture probability, agility, damage, range, and the like.
In embodiments of the present invention, the target skill attribute may be determined based on the relative position distance between the auxiliary aiming control and the position node where the virtual object is located. For example, suppose the skill is a capture skill whose attribute is a capture probability, with a first capture probability smaller than a second capture probability. When the relative position distance is large, the target skill attribute is determined to be the first capture probability, capture is released on the virtual object with that probability, and the capture success rate is low; when the relative position distance is small, the target skill attribute is determined to be the second capture probability, capture is released with that probability, and the capture success rate is high.
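A small sketch of this attribute selection follows, with hypothetical probability and threshold values:

```python
import random

FIRST_CAPTURE_PROBABILITY = 0.2   # used when the relative distance is large
SECOND_CAPTURE_PROBABILITY = 0.8  # used when the relative distance is small
NEAR_THRESHOLD = 2.0              # hypothetical near/far boundary

def release_capture(relative_distance):
    """Choose the target skill attribute (a capture probability) by distance,
    then roll for success."""
    if relative_distance > NEAR_THRESHOLD:
        p = FIRST_CAPTURE_PROBABILITY   # far away: low success rate
    else:
        p = SECOND_CAPTURE_PROBABILITY  # close by: high success rate
    return random.random() < p
```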
In an exemplary embodiment of the present invention, forming a base plane according to a real object in the real scene and spatially mapping it to form a three-dimensional spatial structure in step 101 may include:
projecting three-dimensional structured light into the real scene, and collecting the reflection signals returned after the structured light strikes the real object, to obtain position information and depth information of the real object;
determining corresponding nodes in the real scene according to the position information and the depth information;
connecting the nodes to form a base plane, and spatially mapping the nodes on the base plane to form a plurality of replication nodes;
and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form a three-dimensional spatial structure.
The three-dimensional structured light is projected by a structured-light system consisting of a projector and a camera, which may be connected to the terminal or be part of it. Specifically, structured light is projected onto a real object of the real scene through the projector; the reflection signal returned by the real object is then collected by the camera to calculate information such as the position and depth of the real object, from which a plurality of nodes can be determined in the real scene. It will be appreciated that since the nodes are generated from the position and depth information of the real object, they are points on the object's surface, and connecting them one by one forms a base plane corresponding to that surface. After the base plane is obtained, each node in it is spatially mapped to form a plurality of corresponding replication nodes; the nodes and replication nodes are collectively called position nodes, and connecting the position nodes forms the three-dimensional spatial structure.
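For illustration, a depth image recovered from the reflection signals can be back-projected into surface nodes with standard pinhole geometry; this sketch assumes hypothetical camera intrinsics and is not the patent's implementation:

```python
def nodes_from_depth(depth_map, fx, fy, cx, cy, stride=8):
    """Back-project a depth image (recovered from the structured-light
    reflection signals) into 3D points on the real object's surface; these
    points become the nodes that are connected into the base plane.
    fx, fy, cx, cy are pinhole-camera intrinsics; stride thins the samples."""
    nodes = []
    for v in range(0, len(depth_map), stride):
        for u in range(0, len(depth_map[0]), stride):
            z = depth_map[v][u]          # depth information at pixel (u, v)
            if z > 0:                    # zero depth: no usable reflection
                x = (u - cx) * z / fx    # position information from pixel coords
                y = (v - cy) * z / fy
                nodes.append((x, y, z))
    return nodes
```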
In the embodiment of the present invention, this gives the terrain space in the real scene more uses. For example, with a sand-table map in the real scene, the terrain structures of the sand table can be reconstructed through the structured light and used as cover in a real fight, simulating a real battlefield.
In the above exemplary embodiment, three-dimensional structured light is projected into the real scene to obtain position information and depth information of the real object, so corresponding nodes can be determined in the real scene from that information, connected into a base plane, and built into a three-dimensional spatial structure. Because structured light measures the position and depth of the real object accurately, the generated structure matches the real scene closely, avoiding unrealistic situations where a virtual object clips into the real scene, such as half of a virtual puppy's body sinking into the ground, and thus preserving the player's game experience.
In an exemplary embodiment of the present invention, forming a base plane according to a real object in the real scene and spatially mapping it to form a three-dimensional spatial structure in step 101 may include:
identifying reference objects from among the real objects of the real scene;
connecting the reference objects to form a base plane, and spatially mapping the nodes on the base plane to form a plurality of replication nodes;
and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form a three-dimensional spatial structure.
A reference object is itself an object in the real scene, for example a cup, an apple, or a pen container. It may be placed at a designated position in the real scene in advance by relevant staff, or placed in the real scene by the player. In the embodiment of the present invention, besides the structured-light approach described above, a base plane may be formed from reference objects and the three-dimensional spatial structure formed on that basis.
Specifically, a plurality of reference objects, for example three reference objects that are not on the same straight line, are identified in the real scene and then connected to obtain a base plane. Each node in the base plane is spatially mapped to form a plurality of corresponding replication nodes; the nodes and replication nodes are collectively called position nodes, and connecting the position nodes forms the three-dimensional spatial structure.
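As a worked illustration of this step, three non-collinear points determine a plane; the sketch below assumes each reference object has already been located as a 3D point, and all names are hypothetical:

```python
def base_plane_from_references(p1, p2, p3):
    """Three non-collinear reference objects (say a cup, an apple, and a pen
    container) define the base plane; return a point on it and its unit normal."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],     # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(c * c for c in n) ** 0.5
    if length == 0.0:
        raise ValueError("reference objects are collinear; choose another")
    return p1, tuple(c / length for c in n)
```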
Specifically, in the case where relevant staff place the reference objects at designated positions in advance: the staff first arrange several objects in the real scene as reference objects, such as a cup, an apple, and a pen container, and upload images of them to a server for storage. When the terminal shoots the real scene, it obtains the reference-object images from the server and matches them against the images of the real objects appearing in the scene; real objects that match successfully are treated as reference objects, and connecting them forms the base plane.
In the case where the player places the objects: the player can place real objects anywhere in the real scene, then mark them as reference objects through the terminal, and connecting the reference objects forms the base plane. For example, if a cup, an apple, and a pen container are arranged in the real scene, the shot of the real scene will include them; the player marks the cup, apple, and pen container as reference objects, and the terminal connects them to form the base plane.
In the above exemplary embodiment, reference objects in the real scene are identified through the terminal and then connected into a base plane to form the three-dimensional spatial structure. Since the reference objects can be placed not only by relevant staff but also by the player, the resulting spatial structure is less constrained and can better satisfy players' different play requirements.
In an exemplary embodiment of the present invention, spatially mapping the nodes on the base plane to form a plurality of replication nodes may include:
generating, at each node on the base plane, a vertical axis perpendicular to the base plane;
and copying the node along the vertical axis according to a preset distance parameter and a preset node count to obtain the replication nodes.
Specifically, a vertical axis (z-axis) is generated at each node in the base plane, and a number of replication nodes are then copied equidistantly along the axis according to the preset distance parameter and preset node count.
For example, assume the preset distance parameter is d. Referring to FIG. 5, there is a node A1 in the base plane; a z-axis extends from A1, a replication node A2 is generated by copying at distance d from A1 along the z-axis, then A3 is generated at distance d from A2, and so on, until the number of nodes on the axis reaches the preset node count, for example 10 nodes per z-axis. The same is done for the other nodes B1, C1, D1 ... on the base plane, producing replication nodes B2, C2, D2 .... Connecting A1 to A2, B1 to B2, C1 to C2, and D1 to D2, and likewise connecting A1, B1, C1, D1 to each other and A2, B2, C2, D2 to each other, forms a point-connected three-dimensional spatial structure.
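The replication step of the FIG. 5 example can be sketched as follows (hypothetical names; the plane's unit normal stands in for the z-axis direction):

```python
def replicate_nodes(base_nodes, up, d, count):
    """Copy each base-plane node (A1, B1, C1, ...) along its vertical axis at
    spacing d until `count` nodes sit on the axis, as in the FIG. 5 example."""
    columns = []
    for node in base_nodes:
        column = [tuple(c + k * d * n for c, n in zip(node, up))
                  for k in range(count)]  # k = 0 keeps the original node, e.g. A1
        columns.append(column)
    return columns  # connect within and across columns to form the structure

# base plane at z = 0, z-axis pointing up, d = 0.5, 10 nodes per vertical axis
lattice = replicate_nodes([(0, 0, 0), (1, 0, 0)], (0, 0, 1), 0.5, 10)
```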
The more replication nodes there are, the larger the range of the formed spatial structure and the larger the range in which the player can interact with the virtual object; at the same time, with more nodes the virtual object can move more flexibly within the structure. In practical applications, the preset distance parameter and preset node count can therefore be set according to the terminal's performance and the needs of the gameplay.
In the above exemplary embodiment, replication nodes are obtained on the vertical axis of each base-plane node according to the preset distance parameter and preset node count, and the nodes and replication nodes are then connected to form the three-dimensional spatial structure; this way of generating the structure is simple and easy to implement.
In the above exemplary embodiments, the embodiment of the present invention works in three-dimensional (3D) vision: a three-dimensional spatial structure can be formed in the real scene, the relative position distance between the auxiliary aiming control and the virtual object can be calculated, and whether to trigger movement of the virtual object among the nodes of the structure can be decided from that distance and a preset condition, so that gameplay is more varied and the player's game experience improves.
In an exemplary embodiment of the present invention, before displaying the virtual object on one of the position nodes in the spatial structure in step 102, the method may further include:
upon entering a designated area, and when the three-dimensional spatial structure has been formed successfully, receiving model parameters of the virtual object fed back by the server, wherein the model parameters are used to display the virtual object on one of the position nodes of the three-dimensional spatial structure.
The designated area may be an area arranged in advance by relevant staff, such as an augmented reality experience hall. The model parameters are used by the terminal to generate the corresponding virtual object, and may specifically include parameters such as the shape, size, and texture of the virtual object.
Specifically, when the player is determined, through GPS technology, to have reached the designated area, and the spatial structure has been formed successfully, the terminal can report to the server that the environment has been built, so that the server returns the model parameters of the virtual object corresponding to the designated area. The terminal receives the returned model parameters and can display the corresponding virtual object on one of the position nodes of the three-dimensional spatial structure according to them.
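A minimal sketch of this exchange; the URL, endpoint, and field names are hypothetical and not from the patent:

```python
import json
from urllib import request

def fetch_model_parameters(server_url, area_id):
    """Report that the designated area has been entered and the spatial
    structure built, then receive the virtual object's model parameters
    (shape, size, texture, ...)."""
    payload = json.dumps({"area_id": area_id, "structure_ready": True}).encode()
    req = request.Request(server_url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```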
In the above exemplary embodiment, only when the designated area is entered and the three-dimensional spatial structure has been formed successfully is success fed back to the server, and only then does the server return the model parameters of the virtual object corresponding to the designated area. This reduces data interaction between the terminal and the server and, especially when the model parameters are large, avoids occupying excessive network resources.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to FIG. 6, a block diagram of an embodiment of an augmented reality interaction control device according to the present invention is shown. The embodiment of the present invention may specifically include the following modules:
a three-dimensional spatial structure forming module 601, configured to form a base plane according to a real object in a real scene when the terminal shoots the real scene, and to spatially map the base plane to form a three-dimensional spatial structure, wherein the three-dimensional spatial structure comprises a plurality of position nodes, and at least one position node lies on a different plane from the other position nodes;
a virtual object display module 602, configured to display a virtual object on one of the position nodes in the three-dimensional spatial structure, wherein the virtual object moves among the plurality of position nodes of the three-dimensional spatial structure;
and a skill release module 603, configured to control skill release on the virtual object according to the positional relationship between the shooting view direction of the terminal and the position node where the virtual object is located.
In an exemplary embodiment of the present invention, the device further comprises:
a shooting view direction adjustment module, configured to adjust the shooting view direction in response to an adjustment operation on the shooting view direction.
In an exemplary embodiment of the present invention, the skill release module 603 is configured to control skill release on the virtual object when the shooting view direction faces the position node where the virtual object is located.
In an exemplary embodiment of the present invention, an auxiliary aiming control corresponding to the virtual object is displayed on the terminal and is associated with the shooting view direction. The shooting view direction adjustment module is configured to move the auxiliary aiming control in response to the adjustment operation on the shooting view direction, wherein adjusting the shooting view direction causes the auxiliary aiming control to move along with it; and the skill release module 603 is configured to control skill release on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located.
In an exemplary embodiment of the present invention, the skills include a first skill and a second skill; the skill release module 603 is configured to release the first skill if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is less than or equal to a preset distance threshold, and to release the second skill if that distance is greater than the preset distance threshold.
In an exemplary embodiment of the present invention, each skill has at least one corresponding skill attribute; the skill release module 603 is configured to determine a target skill attribute from the at least one skill attribute according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located, and to release the skill on the virtual object according to the target skill attribute.
In an exemplary embodiment of the present invention, the three-dimensional spatial structure forming module 601 is configured to project three-dimensional structured light into the real scene and collect the reflection signals returned after the structured light strikes the real object, to obtain position information and depth information of the real object; determine corresponding nodes in the real scene according to the position information and the depth information; connect the nodes to form a base plane and spatially map the nodes on the base plane to form a plurality of replication nodes; and use the nodes and the replication nodes as position nodes and connect the position nodes to form a three-dimensional spatial structure.
In an exemplary embodiment of the present invention, the three-dimensional spatial structure forming module 601 is configured to identify reference objects from among the real objects in the real scene; connect the reference objects to form a base plane and spatially map the nodes on the base plane to form a plurality of replication nodes; and use the nodes and the replication nodes as position nodes and connect the position nodes to form a three-dimensional spatial structure.
In an exemplary embodiment of the present invention, the three-dimensional spatial structure forming module 601 is configured to generate, at each node on the base plane, a vertical axis perpendicular to the base plane, and to copy the node along the vertical axis according to a preset distance parameter and a preset node count to obtain the replication nodes.
In an exemplary embodiment of the invention, the apparatus further comprises a model parameter acquisition module, configured to receive model parameters of the virtual object fed back by the server when the virtual object enters the designated area and the three-dimensional space structure has been formed successfully, wherein the model parameters are used to display the virtual object on one of the position nodes of the three-dimensional space structure.
Since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The embodiment of the invention further discloses an electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the augmented reality interaction control method embodiments.
The embodiment of the invention further discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the augmented reality interaction control method embodiments.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises that element.
The augmented reality interaction control method, augmented reality interaction control device, electronic device, and storage medium provided by the present invention have been described above in detail, and specific examples have been used herein to illustrate the principles and embodiments of the present invention; the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for those skilled in the art, changes may be made to the specific embodiments and the scope of application according to the ideas of the present invention. In view of the above, the contents of this specification should not be construed as limiting the present invention.

Claims (21)

1. An augmented reality interaction control method, characterized in that the method comprises:
when a terminal shoots a real scene, forming a base plane according to a real object in the real scene, and spatially mapping the base plane to form a three-dimensional space structure, wherein the three-dimensional space structure comprises a plurality of position nodes, and at least one of the position nodes is located on a different plane from the other position nodes;
displaying a virtual object on one of the position nodes in the three-dimensional space structure, wherein the virtual object moves among the plurality of position nodes of the three-dimensional space structure;
controlling the release of a skill on the virtual object according to an azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located;
wherein forming a base plane according to the real object in the real scene and spatially mapping the base plane to form the three-dimensional space structure comprises: determining nodes on the surface of the real object; connecting the nodes to form the base plane; spatially mapping the nodes on the base plane to form a plurality of replication nodes; and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form the three-dimensional space structure.
2. The method of claim 1, wherein after displaying the virtual object on one of the position nodes in the three-dimensional space structure, the method further comprises:
adjusting the shooting view direction in response to an adjustment operation on the shooting view direction.
3. The method according to claim 1 or 2, wherein controlling the release of a skill on the virtual object according to the azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located comprises:
when the shooting view direction faces the position node where the virtual object is located, controlling the release of the skill on the virtual object.
4. The method according to claim 3, wherein an auxiliary aiming control corresponding to the virtual object is displayed on the terminal, the auxiliary aiming control being associated with the shooting view direction, and wherein adjusting the shooting view direction in response to the adjustment operation on the shooting view direction comprises:
moving the auxiliary aiming control in response to the adjustment operation on the shooting view direction, wherein adjusting the shooting view direction causes the auxiliary aiming control to move in association with it;
and wherein controlling the release of the skill on the virtual object according to the azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located comprises:
controlling the release of the skill on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located.
5. The method of claim 4, wherein the skill comprises a first skill and a second skill, and wherein controlling the release of the skill on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located comprises:
releasing the first skill if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is less than or equal to a preset distance threshold;
and releasing the second skill if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is greater than the preset distance threshold.
6. The method of claim 5, wherein the skill corresponds to at least one skill attribute, and wherein controlling the release of the skill on the virtual object according to the azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located comprises:
determining a target skill attribute from the at least one skill attribute according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located;
and releasing the skill on the virtual object according to the target skill attribute.
7. The method of claim 1, wherein determining nodes on the surface of the real object comprises:
projecting structured light into the real scene, and collecting the signal reflected after the structured light strikes the real object, to obtain position information and depth information of the real object;
and determining corresponding nodes in the real scene according to the position information and the depth information.
8. The method of claim 1, wherein spatially mapping the nodes on the base plane to form a plurality of replication nodes comprises:
generating, at a node on the base plane, a vertical axis perpendicular to the base plane;
and copying the node along the vertical axis according to a preset distance parameter and a preset number of nodes to obtain the replication nodes.
9. The method of claim 1, wherein before displaying the virtual object on one of the position nodes in the three-dimensional space structure, the method further comprises:
receiving model parameters of the virtual object fed back by the server when the virtual object enters a designated area and the three-dimensional space structure has been formed successfully, wherein the model parameters are used to display the virtual object on one of the position nodes of the three-dimensional space structure.
10. An augmented reality interaction control method, characterized in that the method comprises:
when a terminal shoots a real scene, forming a base plane according to a real object in the real scene, and spatially mapping the base plane to form a three-dimensional space structure, wherein the three-dimensional space structure comprises a plurality of position nodes, and at least one of the position nodes is located on a different plane from the other position nodes;
displaying a virtual object on one of the position nodes in the three-dimensional space structure, wherein the virtual object moves among the plurality of position nodes of the three-dimensional space structure;
controlling the release of a skill on the virtual object according to an azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located;
wherein forming a base plane according to the real object in the real scene and spatially mapping the base plane to form the three-dimensional space structure comprises: identifying reference objects among the real objects of the real scene; connecting the reference objects to form the base plane, and spatially mapping the nodes on the base plane to form a plurality of replication nodes; and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form the three-dimensional space structure.
11. The method of claim 10, wherein after displaying the virtual object on one of the position nodes in the three-dimensional space structure, the method further comprises:
adjusting the shooting view direction in response to an adjustment operation on the shooting view direction.
12. The method according to claim 10 or 11, wherein controlling the release of a skill on the virtual object according to the azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located comprises:
when the shooting view direction faces the position node where the virtual object is located, controlling the release of the skill on the virtual object.
13. The method of claim 12, wherein an auxiliary aiming control corresponding to the virtual object is displayed on the terminal, the auxiliary aiming control being associated with the shooting view direction, and wherein adjusting the shooting view direction in response to the adjustment operation on the shooting view direction comprises:
moving the auxiliary aiming control in response to the adjustment operation on the shooting view direction, wherein adjusting the shooting view direction causes the auxiliary aiming control to move in association with it;
and wherein controlling the release of the skill on the virtual object according to the azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located comprises:
controlling the release of the skill on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located.
14. The method of claim 13, wherein the skill comprises a first skill and a second skill, and wherein controlling the release of the skill on the virtual object according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located comprises:
releasing the first skill if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is less than or equal to a preset distance threshold;
and releasing the second skill if the relative position distance between the auxiliary aiming control and the position node where the virtual object is located is greater than the preset distance threshold.
15. The method of claim 14, wherein the skill corresponds to at least one skill attribute, and wherein controlling the release of the skill on the virtual object according to the azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located comprises:
determining a target skill attribute from the at least one skill attribute according to the relative position distance between the auxiliary aiming control and the position node where the virtual object is located;
and releasing the skill on the virtual object according to the target skill attribute.
16. The method of claim 10, wherein spatially mapping the nodes on the base plane to form a plurality of replication nodes comprises:
generating, at a node on the base plane, a vertical axis perpendicular to the base plane;
and copying the node along the vertical axis according to a preset distance parameter and a preset number of nodes to obtain the replication nodes.
17. The method of claim 10, wherein before displaying the virtual object on one of the position nodes in the three-dimensional space structure, the method further comprises:
receiving model parameters of the virtual object fed back by the server when the virtual object enters a designated area and the three-dimensional space structure has been formed successfully, wherein the model parameters are used to display the virtual object on one of the position nodes of the three-dimensional space structure.
18. An augmented reality interaction control device, characterized in that the device comprises:
a three-dimensional space structure forming module, configured to form a base plane according to a real object in a real scene when a terminal shoots the real scene, and to spatially map the base plane to form a three-dimensional space structure, wherein the three-dimensional space structure comprises a plurality of position nodes, and at least one of the position nodes is located on a different plane from the other position nodes;
a virtual object display module, configured to display a virtual object on one of the position nodes in the three-dimensional space structure, wherein the virtual object moves among the plurality of position nodes of the three-dimensional space structure;
a skill releasing module, configured to control the release of a skill on the virtual object according to an azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located;
wherein forming a base plane according to the real object in the real scene and spatially mapping the base plane to form the three-dimensional space structure comprises: determining nodes on the surface of the real object; connecting the nodes to form the base plane; spatially mapping the nodes on the base plane to form a plurality of replication nodes; and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form the three-dimensional space structure.
19. An augmented reality interaction control device, characterized in that the device comprises:
a three-dimensional space structure forming module, configured to form a base plane according to a real object in a real scene when a terminal shoots the real scene, and to spatially map the base plane to form a three-dimensional space structure, wherein the three-dimensional space structure comprises a plurality of position nodes, and at least one of the position nodes is located on a different plane from the other position nodes;
a virtual object display module, configured to display a virtual object on one of the position nodes in the three-dimensional space structure, wherein the virtual object moves among the plurality of position nodes of the three-dimensional space structure;
a skill releasing module, configured to control the release of a skill on the virtual object according to an azimuth relation between the shooting view direction of the terminal and the position node where the virtual object is located;
wherein forming a base plane according to the real object in the real scene and spatially mapping the base plane to form the three-dimensional space structure comprises: identifying reference objects among the real objects of the real scene; connecting the reference objects to form the base plane, and spatially mapping the nodes on the base plane to form a plurality of replication nodes; and using the nodes and the replication nodes as position nodes, and connecting the position nodes to form the three-dimensional space structure.
20. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the augmented reality interaction control method of any one of claims 1 to 9.
21. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the augmented reality interaction control method of any one of claims 1 to 9.
CN202110390746.8A 2021-04-12 2021-04-12 Augmented reality interaction control method and device, electronic equipment and storage medium Active CN113117327B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110390746.8A CN113117327B (en) 2021-04-12 2021-04-12 Augmented reality interaction control method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113117327A CN113117327A (en) 2021-07-16
CN113117327B 2024-02-02

Family

ID=76776302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110390746.8A Active CN113117327B (en) 2021-04-12 2021-04-12 Augmented reality interaction control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113117327B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116414223A (en) * 2021-12-31 2023-07-11 中兴通讯股份有限公司 Interaction method and device in three-dimensional space, storage medium and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106095094A (en) * 2016-06-10 2016-11-09 北京行云时空科技有限公司 The method and apparatus that augmented reality projection is mutual with reality
CN106621324A (en) * 2016-12-30 2017-05-10 当家移动绿色互联网技术集团有限公司 Interactive operation method of VR game
CN107016704A (en) * 2017-03-09 2017-08-04 杭州电子科技大学 A kind of virtual reality implementation method based on augmented reality
CN109685884A (en) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 A kind of three-dimensional modeling method and system based on virtual reality
CN111127612A (en) * 2019-12-24 2020-05-08 北京像素软件科技股份有限公司 Game scene node updating method and device, storage medium and electronic equipment
CN111324334A (en) * 2019-11-12 2020-06-23 天津大学 Design method for developing virtual reality experience system based on narrative oil painting works

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0704319D0 (en) * 2007-03-06 2007-04-11 Areograph Ltd Image capture and playback
US10062209B2 (en) * 2013-05-02 2018-08-28 Nintendo Co., Ltd. Displaying an object in a panoramic image based upon a line-of-sight direction

Similar Documents

Publication Publication Date Title
US11948260B1 (en) Streaming mixed-reality environments between multiple devices
US9892563B2 (en) System and method for generating a mixed reality environment
US9299184B2 (en) Simulating performance of virtual camera
CN102884490B (en) On the stable Virtual Space of sharing, maintain many views
CN111744202B (en) Method and device for loading virtual game, storage medium and electronic device
CN107469343B (en) Virtual reality interaction method, device and system
US20110181601A1 (en) Capturing views and movements of actors performing within generated scenes
JP2022539289A (en) VIRTUAL OBJECT AIMING METHOD, APPARATUS AND PROGRAM
CN113181650A (en) Control method, device, equipment and storage medium for calling object in virtual scene
WO2017092432A1 (en) Method, device, and system for virtual reality interaction
US20230033530A1 (en) Method and apparatus for acquiring position in virtual scene, device, medium and program product
CN111744180A (en) Method and device for loading virtual game, storage medium and electronic device
CN113117327B (en) Augmented reality interaction control method and device, electronic equipment and storage medium
CN114470775A (en) Object processing method, device, equipment and storage medium in virtual scene
CN110545363B (en) Method and system for realizing multi-terminal networking synchronization and cloud server
US10819952B2 (en) Virtual reality telepresence
Yavuz et al. Desktop Artillery Simulation Using Augmented Reality
CN116531765B (en) Shooting result generation method and device for shooting training of shooting range and readable storage medium
CN213426345U (en) Digital sand table interactive item exhibition device based on oblique photography
CN112807698B (en) Shooting position determining method and device, electronic equipment and storage medium
TW201840200A (en) Interactive method for 3d image objects, a system, and method for post-production of 3d interactive video
CN117861200A (en) Information processing method and device in game, electronic equipment and storage medium
CN113633991A (en) Virtual skill control method, device, equipment and computer readable storage medium
TW202419142A (en) Augmented reality interaction system, augmented reality interaction method, server and mobile device
CN111063034A (en) Time domain interaction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant