CN112862935B - Game character movement processing method and device, storage medium and computer equipment


Info

Publication number
CN112862935B
Authority
CN
China
Prior art keywords
game
motion
dimensional
data
image frame
Prior art date
Legal status
Active
Application number
CN202110282546.0A
Other languages
Chinese (zh)
Other versions
CN112862935A (en)
Inventor
曾浩强
林栋国
余婉
陈星雨
Current Assignee
Tianjin Yake Interactive Technology Co ltd
Original Assignee
Tianjin Yake Interactive Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Yake Interactive Technology Co., Ltd.
Priority: CN202110282546.0A
Publication of CN112862935A
PCT application: PCT/CN2021/121092 (WO2022193612A1)
Application granted
Publication of CN112862935B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures

Abstract

The application discloses a game character movement processing method and apparatus, a storage medium and computer equipment, wherein the method comprises the following steps: responding to an AR mode interactive operation request in a game client, and calling a game engine in the game client to render a real-time three-dimensional game virtual scene; and driving the game character controlled by the game player to move without clipping in the three-dimensional game virtual scene according to real displacement data of the game player in the real world and the motion surface data file corresponding to the three-dimensional game virtual scene. The method and the device avoid the visual glitches caused by the game character rigidly following the player's real-world moving path, improving the game presentation effect.

Description

Game character movement processing method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a motion of a game character, a storage medium, and a computer device.
Background
With the rapid development of computer technology, AR (Augmented Reality) technology has begun to be applied in various industries, including military, medical, film and television, games, etc. In AR applications in the gaming field, a player may walk in the real world holding a mobile device, driving a virtual game character to walk a corresponding distance in a corresponding direction in the game world.
In order to prevent the virtual character from clipping through the virtual scene, existing approaches flatten the virtual scene, remove obstacles from it, and prohibit the character from jumping up or down; otherwise the virtual character clips through the virtual scene and visual glitches appear. As a result, AR play cannot be applied to all game scenes, and scenes must be specially built for AR play.
Disclosure of Invention
In view of this, the present application provides a movement processing method and apparatus for a game character, a storage medium, and a computer device, which do not require building a special game scene model for the AR mode, while avoiding the visual glitches caused by the game character rigidly following the player's real-world moving path, thereby improving the playability of the game, enriching the gameplay, and ensuring the game display effect.
According to an aspect of the present application, there is provided a motion processing method of a game character, including:
responding to an AR mode interactive operation request in a game client, and calling a game engine in the game client to render to obtain a real-time three-dimensional game virtual scene;
and driving the game character controlled by the game player to move without clipping in the three-dimensional game virtual scene according to real-time real displacement data of the game player in the real world.
Optionally, before the invoking of the game engine in the game client to render the real-time three-dimensional game virtual scene, the method further includes:
reading a motion surface data file corresponding to the three-dimensional game virtual scene, wherein the motion surface data file is used for indicating the movable position of the three-dimensional game virtual scene;
the driving, according to real-time real displacement data of a game player in the real world, a game character controlled by the game player to move without clipping in the three-dimensional game virtual scene specifically includes:
acquiring real displacement data corresponding to the game client according to a preset frame rate, and generating target positions of game characters in the three-dimensional game virtual scene frame by frame according to the real displacement data and initial positions of the game characters in the three-dimensional game virtual scene;
and rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion surface data file and the target position.
Optionally, the acquiring, at a preset frame rate, real displacement data corresponding to the game client, and determining, frame by frame, a target position of the game character in the three-dimensional game virtual scene according to the real displacement data and an initial position of the game character in the three-dimensional game virtual scene specifically include:
acquiring positioning data acquired by a positioning device of the game client according to the preset frame rate, and calculating real displacement data corresponding to the game client frame by frame;
and determining virtual displacement data of the game character corresponding to the real displacement data according to a preset scale factor, and determining a target position of the game character in the three-dimensional game virtual scene according to the virtual displacement data and the initial position.
Optionally, the obtaining of the positioning data acquired by the positioning device of the game client according to the preset frame rate, and calculating the real displacement data corresponding to the game client frame by frame specifically include:
acquiring first positioning data and second positioning data in the positioning data, wherein a difference between a first sampling frame corresponding to the first positioning data and a second sampling frame corresponding to the second positioning data is a preset number;
and performing interpolation based on the first positioning data and the second positioning data to obtain interpolation position data matched with the preset quantity, and calculating interpolation displacement data corresponding to the game client according to the interpolation position data, wherein the real displacement data comprises the interpolation displacement data.
Optionally, the motion surface data file comprises plane coordinates and height data corresponding to movable positions in the three-dimensional game virtual scene determined based on the landform data of the three-dimensional game virtual scene; the reading of the motion surface data file corresponding to the three-dimensional game virtual scene specifically includes:
and loading a motion surface corresponding to the motion surface data file in the three-dimensional game virtual scene, wherein the motion surface is hidden when the three-dimensional game virtual scene is displayed.
Optionally, the rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion surface data file and the target position specifically includes:
generating a dotting ray based on the target position, and emitting the dotting ray toward the motion surface for ray dotting, wherein the dotting ray is perpendicular to the plane of the target position;
if the dotting ray intersects the motion surface, rendering a first motion image frame of the game character moving to the target position, wherein the motion image frame comprises the first motion image frame;
and if the dotting ray does not intersect the motion surface, rendering a second motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the second motion image frame.
Optionally, if the dotting ray intersects the motion surface, generating a first motion image frame of the game character moving to the target position specifically includes:
acquiring the height of the intersection point of the dotting ray and the motion surface;
if the intersection height matches the AR mode riding state information of the three-dimensional game virtual scene, rendering a third motion image frame of the game character moving to the target position in the riding state corresponding to the intersection height, wherein the first motion image frame comprises the third motion image frame;
and if the intersection height does not match the AR mode riding state information, rendering a fourth motion image frame of the game character moving at the corresponding current position according to the preset collision motion rule, wherein the first motion image frame comprises the fourth motion image frame.
Optionally, the motion surface comprises a two-dimensional navigation grid motion surface; the rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion surface data file and the target position specifically includes:
if the target position matches the two-dimensional navigation grid motion surface, acquiring preset riding information of the target position on the two-dimensional navigation grid motion surface, determining a target riding state of the game character according to the preset riding information and the current riding state of the game character, and rendering a fifth motion image frame of the game character moving to the target position in the target riding state based on the two-dimensional navigation grid motion surface, wherein the motion image frame comprises the fifth motion image frame;
and if the target position does not match the two-dimensional navigation grid motion surface, rendering a sixth motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the sixth motion image frame.
Optionally, the motion surface comprises a three-dimensional voxel motion surface; the rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion surface data file and the target position specifically includes:
if the target position matches the three-dimensional voxel motion surface, acquiring preset riding information of the target position on the three-dimensional voxel motion surface, determining a target riding state of the game character according to the preset riding information and the current riding state of the game character, determining path-finding information for the game character to move from the current position to the target position based on the three-dimensional voxel motion surface, and rendering a seventh motion image frame of the game character moving to the target position in the target riding state according to the path-finding information, wherein the motion image frame comprises the seventh motion image frame;
and if the target position does not match the three-dimensional voxel motion surface, rendering an eighth motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the eighth motion image frame.
Optionally, the motion surface data file comprises plane coordinates and height data corresponding to movable positions in the three-dimensional game virtual scene determined based on the landform data of the three-dimensional game virtual scene, and the motion surface data file can be modified according to the real-time environment of the three-dimensional game scene;
the rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion surface data file and the target position specifically includes:
if the target position matches the plane coordinates of a movable position, generating a ninth motion image frame of the game character moving to the target position according to the height data corresponding to the target position, wherein the motion image frame comprises the ninth motion image frame;
and if the target position does not match the plane coordinates of any movable position, generating a tenth motion image frame of the game character moving at the corresponding current position according to the preset collision motion rule, wherein the motion image frame comprises the tenth motion image frame.
Optionally, the generating, frame by frame, the motion image frame corresponding to the game character in the three-dimensional game virtual scene according to the motion surface data file and the target position specifically includes:
acquiring, at the preset frame rate, perspective data obtained by an angular motion detection device of the game client;
and rendering, frame by frame, the motion image frame of the game character matched with the perspective data according to the motion surface data file and the target position, wherein the motion image frame is used for displaying the three-dimensional game virtual scene matched with the perspective data and the movement of the game character.
Optionally, the acquiring, at the preset frame rate, the perspective data obtained by the angular motion detection device of the game client specifically includes:
acquiring, at the preset frame rate, angular motion data collected by the angular motion detection device;
and generating the perspective data corresponding to the angular motion data according to the camera movement category corresponding to the three-dimensional game virtual scene.
Optionally, in a case that the camera movement category comprises first-person camera movement, the perspective data is a first-person perspective of the game character in the three-dimensional game virtual scene, and the perspective data changes correspondingly with the angular motion data based on a preset initial perspective of the game character;
and in a case that the camera movement category comprises third-person camera movement, the perspective data is a third-person perspective that views the game character from a preset position and a preset angle corresponding to the position and orientation of the game character, wherein the orientation of the game character changes correspondingly with the angular motion data based on a preset initial orientation of the game character.
According to another aspect of the present application, there is provided a motion processing apparatus of a game character, including:
the request response module is used for responding to an AR mode interactive operation request in a game client and calling a game engine in the game client to render to obtain a real-time three-dimensional game virtual scene;
and the motion driving module is used for driving the game character controlled by the game player to move without clipping in the three-dimensional game virtual scene according to real-time real displacement data of the game player in the real world.
Optionally, the apparatus further comprises: the file reading module is used for reading a motion surface data file corresponding to a real-time three-dimensional game virtual scene before calling a game engine in the game client to render the real-time three-dimensional game virtual scene, wherein the motion surface data file is used for indicating the movable position of the three-dimensional game virtual scene;
the motion driving module is specifically configured to: acquire, at a preset frame rate, real displacement data corresponding to the game client, and generate, frame by frame, target positions of the game character in the three-dimensional game virtual scene according to the real displacement data and an initial position of the game character in the three-dimensional game virtual scene; and render, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion surface data file and the target position.
Optionally, the motion driving module is further configured to: acquire, at the preset frame rate, positioning data collected by a positioning device of the game client, and calculate frame by frame the real displacement data corresponding to the game client; and determine virtual displacement data of the game character corresponding to the real displacement data according to a preset scale factor, and determine a target position of the game character in the three-dimensional game virtual scene according to the virtual displacement data and the initial position.
Optionally, the motion driving module is further configured to: acquiring first positioning data and second positioning data in the positioning data, wherein a difference between a first sampling frame corresponding to the first positioning data and a second sampling frame corresponding to the second positioning data is a preset number; and performing interpolation based on the first positioning data and the second positioning data to obtain interpolation position data matched with the preset quantity, and calculating interpolation displacement data corresponding to the game client according to the interpolation position data, wherein the real displacement data comprises the interpolation displacement data.
Optionally, the motion surface data file comprises plane coordinates and height data corresponding to movable positions in the three-dimensional game virtual scene determined based on the landform data of the three-dimensional game virtual scene;
the file reading module is specifically configured to: load a motion surface corresponding to the motion surface data file in the three-dimensional game virtual scene, wherein the motion surface is hidden when the three-dimensional game virtual scene is displayed.
Optionally, the motion driving module is further configured to: generate a dotting ray based on the target position, and emit the dotting ray toward the motion surface for ray dotting, wherein the dotting ray is perpendicular to the plane of the target position; if the dotting ray intersects the motion surface, render a first motion image frame of the game character moving to the target position, wherein the motion image frame comprises the first motion image frame; and if the dotting ray does not intersect the motion surface, render a second motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the second motion image frame.
Optionally, if the dotting ray intersects the motion surface, the motion driving module is further configured to: acquire the height of the intersection point of the dotting ray and the motion surface; if the intersection height matches the AR mode riding state information of the three-dimensional game virtual scene, render a third motion image frame of the game character moving to the target position in the riding state corresponding to the intersection height, wherein the first motion image frame comprises the third motion image frame; and if the intersection height does not match the AR mode riding state information, render a fourth motion image frame of the game character moving at the corresponding current position according to the preset collision motion rule, wherein the first motion image frame comprises the fourth motion image frame.
Optionally, the motion surface comprises a two-dimensional navigation grid motion surface; the motion driving module is further configured to: if the target position matches the two-dimensional navigation grid motion surface, acquire preset riding information of the target position on the two-dimensional navigation grid motion surface, determine a target riding state of the game character according to the preset riding information and the current riding state of the game character, and render a fifth motion image frame of the game character moving to the target position in the target riding state based on the two-dimensional navigation grid motion surface, wherein the motion image frame comprises the fifth motion image frame; and if the target position does not match the two-dimensional navigation grid motion surface, render a sixth motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the sixth motion image frame.
Optionally, the motion surface comprises a three-dimensional voxel motion surface; the motion driving module is further configured to: if the target position matches the three-dimensional voxel motion surface, acquire preset riding information of the target position on the three-dimensional voxel motion surface, determine a target riding state of the game character according to the preset riding information and the current riding state of the game character, determine path-finding information for the game character to move from the current position to the target position based on the three-dimensional voxel motion surface, and render a seventh motion image frame of the game character moving to the target position in the target riding state according to the path-finding information, wherein the motion image frame comprises the seventh motion image frame; and if the target position does not match the three-dimensional voxel motion surface, render an eighth motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the eighth motion image frame.
Optionally, the motion surface data file comprises plane coordinates and height data corresponding to movable positions in the three-dimensional game virtual scene determined based on landform data of the three-dimensional game virtual scene, and the motion surface data file can be modified according to the real-time environment of the three-dimensional game scene;
the motion driving module is further configured to: if the target position matches the plane coordinates of a movable position, generate a ninth motion image frame of the game character moving to the target position according to the height data corresponding to the target position, wherein the motion image frame comprises the ninth motion image frame; and if the target position does not match the plane coordinates of any movable position, generate a tenth motion image frame of the game character moving at the corresponding current position according to the preset collision motion rule, wherein the motion image frame comprises the tenth motion image frame.
Optionally, the motion driving module is further configured to: acquire, at the preset frame rate, perspective data obtained by an angular motion detection device of the game client; and render, frame by frame, the motion image frame of the game character matched with the perspective data according to the motion surface data file and the target position, wherein the motion image frame is used for displaying the three-dimensional game virtual scene matched with the perspective data and the movement of the game character.
Optionally, the motion driving module is further configured to: acquire, at the preset frame rate, angular motion data collected by the angular motion detection device; and generate the perspective data corresponding to the angular motion data according to the camera movement category corresponding to the three-dimensional game virtual scene.
Optionally, in a case that the camera movement category comprises first-person camera movement, the perspective data is a first-person perspective of the game character in the three-dimensional game virtual scene, and the perspective data changes correspondingly with the angular motion data based on a preset initial perspective of the game character; and in a case that the camera movement category comprises third-person camera movement, the perspective data refers to a third-person perspective that views the game character from a preset position and a preset angle corresponding to the position and orientation of the game character, wherein the orientation of the game character changes correspondingly with the angular motion data based on a preset initial orientation of the game character.
According to still another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described movement processing method of a game character.
According to still another aspect of the present application, there is provided a computer device including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the above-described movement processing method of a game character when executing the program.
By means of the above technical solution, the game character movement processing method and apparatus, the storage medium, and the computer device respond to an AR mode interactive operation request in the game client, render the three-dimensional game virtual scene in real time through the game engine, collect, through the game client, real displacement data of the game player in the real world in real time, and drive the game character to move in the game without clipping using the real displacement data together with the motion surface data file of the game scene. Compared with the prior art, which either runs the AR mode in an existing scene and suffers visual glitches in the motion performance, or builds a dedicated scene model for the AR mode to avoid such glitches, the present application needs no special game scene model for the AR mode: a pre-built motion surface data file is sufficient to analyze whether the game character can follow, in the three-dimensional game virtual scene, the player's real-world motion track, which is simple to implement with a low threshold. Rendering the motion image frames on this basis avoids the visual glitches caused by the game character rigidly following the player's real-world moving path, improves the playability of the game, enriches the gameplay, ensures the game display effect, and provides players with an extraordinary game experience combining the virtual and the real through the AR game mode.
The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical means of the present application more clearly understood so that they can be implemented according to the content of the specification, and to make the above and other objects, features, and advantages of the present application more comprehensible, the detailed description of the present application is given below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart illustrating a method for processing movement of a game character according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a dotting ray emission provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram illustrating a motion processing apparatus for a game character according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In this embodiment, a method for processing a motion of a game character is provided, as shown in fig. 1, the method including:
step 101, responding to an AR mode interactive operation request in a game client, and calling a game engine in the game client to render to obtain a real-time three-dimensional game virtual scene;
step 102, driving the game character controlled by the game player to move without clipping in the three-dimensional game virtual scene according to real displacement data of the game player in the real world and the motion surface data file corresponding to the three-dimensional game virtual scene.
The embodiment of the application can be applied to a game client, and the game client may run on intelligent electronic equipment such as smartphones and tablet computers. When the game client runs the game, it responds to an AR mode interactive operation request for the three-dimensional game virtual scene in the game and renders the corresponding three-dimensional game virtual scene in real time through the game engine.
Optionally, before performing image rendering, the method may further include: and 103, reading a motion surface data file corresponding to the three-dimensional game virtual scene, wherein the motion surface data file is used for indicating the movable position of the three-dimensional game virtual scene.
The three-dimensional game virtual scene may be a specific game world in the game, for example, a player's private homestead in the game. The motion surface data file is a set of data created in advance based on the topographic data of the three-dimensional game virtual scene and used to reflect the movable positions of the three-dimensional game virtual scene. The motion surface data file may contain data of different kinds; for example, it may include information on passable points, non-passable points, landform boundaries, landform heights, and the like in the three-dimensional game virtual scene, or it may include only the passable points. The feasibility of a movement of the game character in the three-dimensional game virtual scene can be judged through the motion surface data file, specifically whether the game character can move to a certain position.
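As an illustrative sketch only (not part of the patent text), the motion surface data file described above can be pictured as a mapping from plane coordinates to walkable heights; all field and function names below are hypothetical assumptions.

```python
# Hypothetical layout of a motion surface data file for one scene.
# The field names ("scene_id", "movable") are illustrative assumptions.
motion_surface = {
    "scene_id": "player_home_01",
    # Movable positions: plane coordinate -> list of walkable heights.
    # A coordinate may carry several heights, e.g. ground plus a bridge.
    "movable": {
        (12, 7): [0.0],
        (12, 8): [0.0, 4.5],   # ground level and a bridge above it
        (13, 8): [0.3],        # a step: slightly raised walkable height
    },
    # Any coordinate absent from "movable" is treated as impassable.
}

def is_movable(surface, plane_xy):
    """Judge movement feasibility: is the plane coordinate a movable position?"""
    return plane_xy in surface["movable"]
```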
In this embodiment of the present application, the real displacement data generated while the player moves holding the game client can be used to derive the target position that the game character in AR mode should reach if it simulates the player's real movement. That is, in the AR mode, the player controls the movement of the game character in the virtual world by moving while holding the game client, where the parameter controlling the character's movement may specifically be the real displacement data acquired by the game client at a preset frame rate.
Optionally, step 102 may specifically include:
102-1, acquiring, at a preset frame rate, real displacement data corresponding to the game client, and generating, frame by frame, a target position of the game character in the three-dimensional game virtual scene according to the real displacement data and an initial position of the game character in the three-dimensional game virtual scene;
and 102-2, rendering, by a game engine, a motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion surface data file and the target position.
In the embodiment of the application, the game character is controlled to move in the three-dimensional game virtual scene according to the real displacement data: taking the initial position of the game character in the three-dimensional game virtual scene as the starting control point, the game character is moved to the corresponding target position according to the moving direction and distance indicated by the real displacement data. The initial position is determined differently in different application scenarios. For example, if the game character is already in the three-dimensional game virtual scene before the game client responds to the AR mode interactive operation request, that is, the request indicates that the AR mode is to run in the current game scene where the character is located, the initial position may be the position of the game character when the game client responds to the request. If, instead, the game character is in another game scene before the client responds, that is, the request indicates that the character switches from the current game scene to the three-dimensional game virtual scene and runs the AR mode there, the initial position may be a transfer point position in the three-dimensional game virtual scene preset by the game developer.
In addition, because the player moves in the real world, whose walking environment differs from that of the game world, controlling the game character's movement purely according to the game client's real displacement data may lead to the character crossing barriers such as walls or leaving the map range. To solve such clipping problems and the poor game experience they cause, in the present embodiment the target position of the game character in the three-dimensional game virtual scene corresponding to each piece of real displacement data is calculated from the real displacement data of the game client and the initial position of the game character in the three-dimensional game virtual scene. Whether the game character can move to the target position is then analyzed against the movable positions of the three-dimensional game virtual scene indicated by the motion surface data file, that is, whether the target position belongs to the movable positions in the scene, and the motion image frame is rendered based on the analysis result. For example, when it is determined that the game character can move to the target position, an image frame of the character moving to the target position is rendered; when it cannot, an image frame of the character moving at its current position (for example, walking in place) is rendered instead. In the game animation the player sees on the display screen of the game client, the game character therefore never appears to cross barriers or move outside the map range, and the realism of the character's movement is improved. Thus, in the AR mode, the game character in the three-dimensional game virtual scene can follow the player's real-world movement, while the visual glitches caused by rigidly following the player's real-world moving path are avoided without developing a dedicated map for the AR mode, which improves the playability of the game, enriches the gameplay, and ensures the game display effect.
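The per-frame flow just described (displacement in, target out, feasibility check, render) might be sketched as follows, reusing is_movable from the earlier sketch; get_displacement, render_move, and render_collision are hypothetical stand-ins for engine facilities, not the patent's API.

```python
def drive_character_frame(client, character, surface, scale=1.0):
    """One AR-mode frame: real displacement -> target position -> check -> render.

    A minimal sketch under the assumptions stated above; `client` and
    `character` are placeholder objects, not an actual engine interface.
    """
    dx, dy = client.get_displacement()            # from the positioning device
    target = (character.x + dx * scale,           # target position = initial
              character.y + dy * scale)           # position plus displacement
    if is_movable(surface, (round(target[0]), round(target[1]))):
        character.render_move(target)             # frame: move to target position
    else:
        character.render_collision()              # frame: e.g. walk in place
```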
By applying the technical solution of this embodiment, in response to the AR mode interactive operation request, the motion surface data file corresponding to the three-dimensional game virtual scene indicated by the request is read; meanwhile, the real displacement data corresponding to the game client is acquired at a preset frame rate, the target position of the game character in the three-dimensional game virtual scene is determined frame by frame based on the real displacement data, whether the target position is movable is analyzed using the motion surface data file, and the motion image frame corresponding to the game character is rendered based on the analysis result. Compared with the prior art, which either runs the AR mode in an existing scene and suffers visual glitches in the motion performance, or performs dedicated scene modeling for the AR mode, the present application needs no special game scene model for the AR mode: it only needs the pre-built motion surface data file to analyze whether the game character can follow, in the three-dimensional game virtual scene, the player's real-world motion track, and renders motion image frames accordingly. This avoids the visual glitches caused by the game character rigidly following the player's real-world moving path, improves the playability of the game, enriches the gameplay, ensures the game display effect, and brings players a game experience combining the virtual and the real through the AR game mode.
In this embodiment of the present application, optionally, step 102-1 may specifically include:
102-1.1, acquiring positioning data acquired by a positioning device of the game client according to the preset frame rate, and calculating real displacement data corresponding to the game client frame by frame;
and 102-1.2, determining virtual displacement data of the game character corresponding to the real displacement data according to a preset scale factor, and determining a target position of the game character in the three-dimensional game virtual scene according to the virtual displacement data and the initial position.
In this embodiment, a positioning device (e.g., a GPS module) in the game client may collect positioning data at the preset frame rate, and the real displacement data of the game client is calculated frame by frame from the collected positioning data; the positioning data collected by the positioning device may be longitude and latitude data, and the real displacement data reflects the user's moving direction and moving distance. The game character then moves along with the player's real movement in the real world. Specifically, virtual displacement data corresponding to the real displacement data is calculated according to a preset scale factor. For example, if the preset scale factor is 1, the virtual displacement data equals the real displacement data: when the player moves 1 meter in some direction in the real world, the virtual displacement data also represents a 1-meter move in that direction. If the preset scale factor is 10, the player moving 1 meter in the real world moves the game character 10 meters in the game. The game character can thus proportionally follow the player's real-world movement track, bringing the player an immersive game experience. The difference between determining the target position from the virtual displacement data and the initial position, and generating it from the real displacement data and the initial position as in the above embodiment, is only that the real displacement data is first converted into the corresponding virtual displacement data before the target position is calculated; the specific calculation is not repeated here.
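A sketch of the scale-factor conversion described above; representing displacement as a 2D vector is an assumption for illustration.

```python
def to_virtual_displacement(real_disp, scale):
    """Scale a real-world displacement vector (meters) into the game world;
    with scale=10, walking 1 m in reality moves the character 10 m in-game."""
    dx, dy = real_disp
    return (dx * scale, dy * scale)

def target_position(initial_pos, real_disp, scale=1.0):
    """Target position = initial position + scaled (virtual) displacement."""
    vx, vy = to_virtual_displacement(real_disp, scale)
    return (initial_pos[0] + vx, initial_pos[1] + vy)

# Example: scale factor 10, player walks 1 m east from in-game point (5, 5).
print(target_position((5.0, 5.0), (1.0, 0.0), scale=10.0))  # -> (15.0, 5.0)
```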
In addition, when the preset scale factor is 1, the target position may be calculated directly from the real displacement data and the initial position; that is, step 102-1 may be replaced with: determining the target position of the game character in the three-dimensional game virtual scene according to the real displacement data and the initial position.
In the embodiment of the application, in order to improve the motion performance of the game character in the three-dimensional game virtual scene and make its movement smoother, interpolation may be performed on the positioning data and the target position determined using the interpolation result. Optionally, "acquiring the real displacement data corresponding to the game client at the preset frame rate" may be: acquiring first positioning data and second positioning data from the positioning data, wherein the difference between the first sampling frame corresponding to the first positioning data and the second sampling frame corresponding to the second positioning data is a preset number; and performing interpolation based on the first positioning data and the second positioning data to obtain interpolated position data matching the preset number, and calculating interpolated displacement data corresponding to the game client according to the interpolated position data, wherein the real displacement data comprises the interpolated displacement data.
In this embodiment, the first positioning data and the second positioning data can be obtained from the positioning data according to a preset number. For example, with a preset number of 10, the 1st sampled positioning data is used as the first positioning data and the 10th as the second positioning data; interpolation is then performed between them to calculate 8 interpolated data points, and the first positioning data, the 8 interpolated points, and the second positioning data together serve as 10 pieces of interpolated displacement data. When the virtual displacement data and target positions are subsequently calculated, the interpolated displacement data obtained through interpolation are used. When the game character's movement is then controlled according to the interpolated displacement data, its motion track becomes smoother and the motion performance improves.
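The interpolation step can be sketched as below, assuming positioning fixes are simple 2D coordinates; linear interpolation is one plausible choice, as the patent does not mandate a particular scheme.

```python
def interpolate_positions(p1, p2, preset_count=10):
    """Return `preset_count` positions from p1 to p2 inclusive: the first fix,
    preset_count - 2 interpolated points, and the second fix."""
    (x1, y1), (x2, y2) = p1, p2
    steps = preset_count - 1
    return [(x1 + (x2 - x1) * i / steps, y1 + (y2 - y1) * i / steps)
            for i in range(preset_count)]

# First fix = 1st sample, second fix = 10th sample; 8 points in between.
points = interpolate_positions((39.90420, 116.40740), (39.90430, 116.40750))
# Per-frame interpolated displacements used to drive the character smoothly.
deltas = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]
```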
It should be noted that the first positioning data and the second positioning data may also be acquired directly at fixed time intervals. For example, with a preset frame rate of 40 Hz, a preset number of 10, and a fixed time interval of 0.25 seconds, the first positioning data may be acquired at 0 seconds and the second positioning data at 0.25 seconds, reducing the data acquisition load of the positioning device.
In addition, to ensure the game character's performance across different motion phases, the first positioning data and the second positioning data may be acquired in different real-world motion phases of the player. For example, acquiring them during the player's start-up phase avoids jerky stop-and-go movement of the game character in the game world when the player sets off, and acquiring them during the player's stopping phase avoids the game character halting abruptly when the player suddenly stops, both of which would otherwise look poor.
In the embodiment of the present application, optionally, the motion surface data file includes plane coordinates and height data corresponding to the movable positions in the three-dimensional game virtual scene determined based on the landform data of the three-dimensional game virtual scene; step 103 may specifically include: loading a motion surface corresponding to the motion surface data file in the three-dimensional game virtual scene, wherein the motion surface is hidden when the three-dimensional game virtual scene is displayed.
In the above embodiment, the motion surface data file may be generated in advance from the landform data of the three-dimensional game virtual scene and may include the plane coordinates and height data of each movable position in the scene, where a plane coordinate and its corresponding height data indicate the walkable height of a movable position in the scene space. For example, a step in the three-dimensional game virtual scene is a movable position, and the plane coordinates and height data corresponding to the step reflect the position on which a player character stands when moving onto the step. In response to the AR mode interactive operation request, a motion surface corresponding to the motion surface data file can be loaded in the game based on the file. The motion surface may be hidden in the game, invisible to the player through the display screen of the game client and used only to analyze whether a target position is movable; alternatively, it may be displayed in the game so that the player can see it, helping the player avoid moving toward positions off the motion surface and ensuring smooth travel through the game world.
In this embodiment of the present application, whether the target position is movable may be determined by ray dotting against the motion surface. Optionally, step 102-2 may specifically include:
102-2-A1, generating a dotting ray based on the target position, and emitting the dotting ray toward the motion surface for ray dotting, wherein the dotting ray is perpendicular to the plane of the target position;
step 102-2-A2, if the dotting ray intersects the motion surface, rendering a first motion image frame of the game character moving to the target position, wherein the motion image frame comprises the first motion image frame;
and step 102-2-A3, if the dotting ray does not intersect the motion surface, rendering a second motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the second motion image frame.
In the above embodiment, a dotting ray is generated in the vertical direction from the target position (the ray's endpoint may be at the highest point of the three-dimensional game virtual scene, pointing vertically downward, or at the lowest point, pointing vertically upward) and emitted for dotting. If the dotting ray can hit the motion surface, that is, the ray intersects the motion surface, the game character can move to the target position. As shown in FIG. 2, assuming the initial position of the game character is point A, the target position is point B, and the motion surface is S, the dotting ray is emitted upward from point B and intersects S at intersection point B'; a first motion image frame of the game character moving to the target position (specifically point B') is then rendered, and the height data corresponding to the target position should be taken into account so that the character moves in accordance with the landform of the scene. If the dotting ray does not hit the motion surface, that is, the ray does not intersect it, the game character would clip through the scene if it moved to the target position; in that case the second motion image frame is rendered according to the preset collision motion rule, which may specify, for example, that the game character walks in place or stands still. This dotting-ray detection ensures clipping-free movement of the game character in the game scene with low system overhead and high efficiency.
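A minimal sketch of the dotting-ray test, assuming the motion surface can be queried for walkable heights at a plane coordinate (reusing the hypothetical motion_surface layout sketched earlier); a production engine would issue a physics raycast instead.

```python
def dotting_ray_test(surface, target_xy):
    """Cast a vertical dotting ray at the target's plane coordinate.

    Returns the height of the intersection point B' if the ray hits the
    motion surface, or None if there is no intersection (moving there
    would make the character clip through the scene)."""
    heights = surface["movable"].get(target_xy)
    if not heights:
        return None      # no intersection: apply the collision motion rule
    return max(heights)  # a downward ray meets the highest walkable height first

hit = dotting_ray_test(motion_surface, (12, 8))
if hit is not None:
    print(f"render first motion image frame at height {hit}")   # move to B'
else:
    print("render second motion image frame (walk in place)")   # collision rule
```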
In this embodiment of the application, in different game scenes there may be differences between the riding state information in the normal mode and that in the AR mode. Optionally, step 102-2-A2 may specifically include:
102-2-A2.1, acquiring the height of the intersection point of the dotting ray and the motion surface;
step 102-2-A2.2, if the intersection height matches the AR mode riding state information of the three-dimensional game virtual scene, rendering a third motion image frame of the game character moving to the target position in the riding state corresponding to the intersection height, wherein the first motion image frame comprises the third motion image frame;
and step 102-2-A2.3, if the intersection height does not match the AR mode riding state information, rendering a fourth motion image frame of the game character moving at the corresponding current position according to the preset collision motion rule, wherein the first motion image frame comprises the fourth motion image frame.
In this embodiment, when the dotting ray intersects the motion surface, the height information of the resulting intersection point can be further used to analyze whether the target position is movable. Specifically, the intersection height is obtained and checked against the AR mode riding state information corresponding to the three-dimensional game virtual scene; if it matches, a third motion image frame is rendered in which the game character moves to the target position in the riding state corresponding to the intersection height. For example, a three-dimensional game virtual scene such as an in-game homestead or city may, in the normal mode, support ground travel modes such as walking and riding as well as aerial travel modes such as qinggong-style gliding and flying vehicles, but in the AR mode support only ground travel modes such as walking and riding; if the intersection height falls within the height range supporting ground travel, an image frame is rendered in which the game character moves to the target position by a ground travel mode such as walking or riding. If the intersection height does not match the AR mode riding state information, the target position is movable in the normal mode but not reachable in the AR mode, for example because the intersection height falls within the height range supporting only aerial travel; in that case a corresponding fourth motion image frame should be rendered according to the preset collision motion rule.
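The height-matching step might look like the following sketch; the AR-mode riding state table and its height ranges are invented for illustration.

```python
# Hypothetical AR-mode riding state info: each ground travel mode is valid
# within a height range; aerial travel modes are deliberately absent in AR.
AR_RIDING_STATES = {
    "walk": (0.0, 2.0),
    "ride": (0.0, 2.5),
}

def match_riding_state(intersection_height, states=AR_RIDING_STATES):
    """Return a riding state whose height range contains the intersection
    height (render the third motion image frame in that state), or None
    if the height only suits modes unsupported in AR (fourth frame)."""
    for state, (lo, hi) in states.items():
        if lo <= intersection_height <= hi:
            return state
    return None
```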
In addition, the motion surface in the embodiment of the present application may also be loaded as a navigation motion surface. Specifically, the motion surface may include a two-dimensional navigation grid motion surface and a three-dimensional voxel motion surface. The two-dimensional navigation grid motion surface indicates where a game character in the three-dimensional game virtual scene can move by ground travel modes such as walking and riding, while the three-dimensional voxel motion surface reflects the connectivity of each movable voxel in the scene; that is, connected voxel grids can be traversed in various ways including both ground and aerial travel modes. Based on the two-dimensional navigation grid motion surface or the three-dimensional voxel motion surface, path finding for the game character in the game world can be realized.
Optionally, when the motion surface includes a two-dimensional navigation grid motion surface, step 102-2 may specifically include:
102-2-B1, if the target position matches the two-dimensional navigation grid motion surface, acquiring preset riding information of the target position on the two-dimensional navigation grid motion surface, determining a target riding state of the game character according to the preset riding information and the current riding state of the game character, and rendering a fifth motion image frame of the game character moving to the target position in the target riding state based on the two-dimensional navigation grid motion surface, wherein the motion image frame comprises the fifth motion image frame;
and step 102-2-B2, if the target position does not match the two-dimensional navigation grid motion surface, rendering a sixth motion image frame of the game character moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the sixth motion image frame.
In this embodiment, for a target position represented by a plane coordinate in the game, the target position is considered to match the two-dimensional navigation grid motion surface when the surface's projection onto the plane contains the target position, or when a dotting ray emitted from the target position intersects the surface; this indicates that the game character can move to the target position. When the match is confirmed, the preset riding information corresponding to the vertical projection of the target position point onto the two-dimensional navigation grid motion surface is obtained; the preset riding information may include ground travel modes such as walking, riding, and rowing. It is then checked whether the game character's current riding state belongs to the riding states indicated by the preset riding information: if so, the current riding state is taken as the target riding state; otherwise, the character is switched to a travel mode indicated by the preset riding information, for example from walking to rowing, as the target riding state. A fifth motion image frame is then rendered of the game character moving in the target riding state to the target position (here, the vertical projection of the target position onto the two-dimensional navigation grid motion surface). When the surface's plane projection does not contain the target position, or a dotting ray emitted from the target position has no intersection with the surface, the target position does not match the two-dimensional navigation grid motion surface and the game character cannot move there; a sixth motion image frame is then rendered according to the preset collision motion rule, in the same manner as the second motion image frame, which is not repeated here.
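A sketch of the two-dimensional navigation grid branch, with the navmesh reduced to a dictionary from plane coordinates to allowed riding states; all names are hypothetical.

```python
# Hypothetical navmesh: plane coordinate -> preset riding information.
navmesh = {
    (12, 8): {"allowed_rides": ["walk", "ride"]},
    (20, 3): {"allowed_rides": ["boat"]},        # e.g. a water cell
}

def move_on_navmesh(navmesh, character, target_xy):
    """Decide which motion image frame to render for a navmesh target."""
    cell = navmesh.get(target_xy)
    if cell is None:
        return "sixth frame: collision rule"     # target off the navmesh
    if character["riding_state"] not in cell["allowed_rides"]:
        # Switch travel mode, e.g. from walking to rowing, before moving.
        character["riding_state"] = cell["allowed_rides"][0]
    return "fifth frame: move to target"

character = {"riding_state": "walk"}
print(move_on_navmesh(navmesh, character, (20, 3)))  # switches to "boat"
```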
Optionally, when the motion plane includes a three-dimensional voxel motion plane, step 102-2 may specifically include:
102-2-C1, if the target position is matched with the three-dimensional voxel motion surface, acquiring preset ride information of the target position corresponding to the three-dimensional voxel motion surface, determining a target ride state of the game character according to the preset ride information and a current ride state of the game character, determining route seeking information of the game character moving from the current position to the target position based on the three-dimensional voxel motion surface, and rendering a seventh moving image frame of the game character moving to the target position in the target ride state according to the route seeking information, wherein the moving image frame comprises the seventh moving image frame;
step 102-2-C2, if the target position does not match the three-dimensional voxel motion surface, rendering an eighth motion image frame of the game character moving at the corresponding current position, wherein the motion image frame includes the eighth motion image frame.
In this embodiment, the three-dimensional voxel motion surface contains the connectable voxels in the three-dimensional game virtual scene. When the projection of the three-dimensional voxel motion surface on the plane includes the target position, or the dotting ray emitted based on the target position can hit the three-dimensional voxel motion surface, the target position is considered to match the three-dimensional voxel motion surface, which indicates that the game character can move to the target position. When the target position is determined to match the three-dimensional voxel motion surface, preset ride information corresponding to the vertical projection of the target position point on the three-dimensional voxel motion surface is acquired; for example, the preset ride information may include ground-surface traveling manners such as walking, riding, and rowing, and may also include aerial traveling manners such as lightness skills and aircraft. The target ride state in which the game character moves to the target position is then determined from the current ride state of the game character combined with the preset ride information. After the target ride state is determined, pathfinding information for the game character moving from the current position to the target position is generated based on the three-dimensional voxel motion surface, and a seventh motion image frame of the game character moving to the target position in the target ride state is rendered according to the determined pathfinding information (the target position here refers to the vertical projection voxel position of the target position on the three-dimensional voxel motion surface; if the target position corresponds to a plurality of vertical projection voxel positions, one of them may be selected as the target voxel position). Similarly to step 102-2-B2, when the target position does not match the three-dimensional voxel motion surface, the game character cannot move to the target position, and the eighth motion image frame is rendered according to the preset collision motion rule.
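For illustration, pathfinding over the connectivity of movable voxels could be sketched as a breadth-first search (an assumed algorithm; the patent does not specify one):

```python
from collections import deque

def find_voxel_path(connected, start, goal):
    """connected: {voxel: iterable of adjacent movable voxels},
    voxels as (x, y, z) tuples. Returns a voxel path from start to
    goal, or None when the goal voxel is not connected (no movement)."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        voxel = frontier.popleft()
        if voxel == goal:
            path = []
            while voxel is not None:       # walk back to the start
                path.append(voxel)
                voxel = came_from[voxel]
            return path[::-1]
        for nxt in connected.get(voxel, ()):
            if nxt not in came_from:
                came_from[nxt] = voxel
                frontier.append(nxt)
    return None
```

The returned voxel sequence would serve as the pathfinding information from which the seventh motion image frames are rendered.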
In this embodiment of the application, in order to enable the player to obtain a more immersive experience, optionally, the step 102-2 may further include:
102-2-D1, acquiring visual angle data obtained by an angular motion detection device of the game client according to the preset frame rate;
and 102-2-D2, rendering the motion image frame of the game role matched with the view angle data frame by frame according to the motion surface data file and the target position, wherein the motion image frame is used for displaying the three-dimensional game virtual scene matched with the view angle data and the motion action of the game role.
In this embodiment, the game client may not only obtain the real displacement data, but also collect angular motion data through an angular motion detection device (e.g., a gyroscope) and process it into corresponding view angle data, so as to render motion image frames matched with the view angle data. The angular motion of the game client is thus reflected in the motion image frames: the player may change the view angle from which the game world is observed by rotating the mobile phone (when the game client is a mobile phone), so that the motion image frames show the environment picture of the three-dimensional game virtual scene matched with the user's real-time view angle. The player can thereby browse the game world better, and the user experience is improved.
It should be noted that steps 102-2-D1 and 102-2-D2 may be combined with steps 102-2-A2.1 to 102-2-A2.3, steps 102-2-B1 to 102-2-B2, and steps 102-2-C1 to 102-2-C2 to render motion image frames matched with the target position (or the current position and the preset collision motion rule), the ride state, and the view angle data, so that the player can freely rotate the view angle while the game character moves without clipping through the scene.
In the above embodiment, optionally, step 102-2-D1 may specifically include: acquiring angular motion data collected by the angular motion detection device according to the preset frame rate; and generating the view angle data corresponding to the angular motion data according to the mirror-moving category corresponding to the three-dimensional game virtual scene. When the mirror-moving category is first-person mirror-moving, the view angle data is the first-person view angle of the game character in the three-dimensional game virtual scene, and changes correspondingly with the angular motion data on the basis of a preset initial view angle of the game character. When the mirror-moving category is third-person mirror-moving, the view angle data is a third-person view angle looking at the game character from a preset position and preset angle relative to the position and direction of the game character, and the direction of the game character changes correspondingly with the angular motion data on the basis of a preset initial direction of the game character.
In this embodiment, the view angle data is computed from the angular motion data collected by the angular motion detection device, based on the mirror-moving category corresponding to the three-dimensional game virtual scene (or selected by the user). When the mirror-moving category is first-person mirror-moving, the motion image frame shows the picture corresponding to the game character's own first-person view angle, that is, the game world environment as seen by the game character; the view angle data of the first motion image frame is the preset initial view angle of the game character, and subsequent view angle data changes correspondingly with the angular motion data on that basis. When the mirror-moving category is third-person mirror-moving, the motion image frame shows the picture corresponding to a third-person view angle "looking at" the game character, that is, the game character and its surrounding environment as "seen" by a virtual third person. This third-person view angle takes the position and direction of the game character as reference: the virtual third person is "located" at a preset position and preset angle relative to the game character, and that relative position and angle remain unchanged. In other words, the third-person view angle moves and rotates with the game character under the game-world coordinate system, but stays fixed in the coordinate system whose origin is the game character. The direction of the game character changes correspondingly with the angular motion data (which indicate how much and in which direction the game character rotates) on the basis of the preset initial direction, and the third-person view angle changes following the above rule. Richer AR-mode display effects and more game playing methods are thereby provided.
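A compact sketch of how such view angle data might be derived from gyroscope deltas (the yaw/pitch representation and the fixed third-person offset are illustrative assumptions):

```python
import math

def first_person_view(initial_view, gyro_delta):
    """First-person mirror-moving: the character's own view angle
    changes with the angular motion data from the preset initial view."""
    yaw, pitch = initial_view
    dyaw, dpitch = gyro_delta
    return (yaw + dyaw, max(-89.0, min(89.0, pitch + dpitch)))

def third_person_camera(char_pos, char_yaw_deg, offset=(0.0, 3.0, -6.0)):
    """Third-person mirror-moving: the virtual third person keeps a
    fixed offset in the character's local frame, so the camera follows
    the character's position and rotation but stays static relative to it."""
    yaw = math.radians(char_yaw_deg)
    ox, oy, oz = offset
    wx = ox * math.cos(yaw) + oz * math.sin(yaw)   # rotate offset by yaw
    wz = -ox * math.sin(yaw) + oz * math.cos(yaw)
    return (char_pos[0] + wx, char_pos[1] + oy, char_pos[2] + wz)
```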
Optionally, the motion surface data file comprises plane coordinates and height data corresponding to movable positions in the three-dimensional game virtual scene determined based on the landform data of the three-dimensional game virtual scene; the motion surface data file can be modified according to the real-time environment of the three-dimensional game scene;
step 102-2 may specifically include:
102-2-E1, if the target position matches the plane coordinates of the movable position, generating a ninth moving image frame of the game character moving to the target position according to height data corresponding to the target position, wherein the moving image frame includes the ninth moving image frame;
and 102-2-E2, if the plane coordinates of the target position and the movable position are not matched, generating a tenth motion image frame of the game role moving at the corresponding current position according to the preset collision motion rule, wherein the motion image frame comprises the tenth motion image frame.
In this embodiment, based on the data in the motion surface data file, the game character can be driven to move to the height corresponding to the target position without clipping. In addition, the plane coordinates and height data of the movable positions stored in the motion surface data file can be modified, added, and deleted according to real-time changes of the game scene. For example, if a table is placed in a previously movable area, so that the placement position becomes immovable, the related data in the motion surface data file can be modified accordingly; when the file is then used to drive the game character, non-clipping movement is guaranteed even though the environment of the game scene has changed. Computing the movement connectivity of the game character with a motion surface data file that can be modified in real time thus makes the movement of the game character in the virtual world more realistic.
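One plausible shape for such a file at runtime (the schema is an assumption; the patent fixes only that plane coordinates and height data are stored and modifiable) is a mapping that can be patched as the scene changes:

```python
class MotionSurfaceData:
    """Movable positions: (x, y) plane coordinate -> walkable height."""
    def __init__(self, entries):
        self.heights = dict(entries)

    def block(self, x, y):
        # e.g. a table was placed here, so the position is no longer movable
        self.heights.pop((x, y), None)

    def allow(self, x, y, z):
        # the scene changed back, so the position becomes movable again
        self.heights[(x, y)] = z

    def resolve(self, target_xy):
        """Height to move the character to, or None -> collision rule."""
        return self.heights.get(target_xy)

surface = MotionSurfaceData({(0, 0): 0.0, (0, 1): 0.2})
surface.block(0, 1)                      # real-time environment change
assert surface.resolve((0, 1)) is None   # ninth vs tenth frame decision
```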
Further, in the AR mode of the embodiment of the present application, besides having the game character follow the player's real displacement in the three-dimensional game virtual scene, a game playing method of entering the real world from the three-dimensional game virtual scene can be provided. Optionally, the method may further include:

S1, in response to a transfer gate opening request from the three-dimensional game virtual scene to the real world, acquiring a first real-time real image frame corresponding to the real world, storing the first real-time real image frame as a first map, and acquiring a first real-time virtual image frame (a motion image frame) corresponding to the three-dimensional game virtual scene, wherein the first map is used for rendering a preset transfer gate model corresponding to the transfer gate;

and S2, rendering a first real-time rendered image frame containing the transfer gate according to the first map, the preset transfer gate model and the first real-time virtual image frame.
Images collected in the real world are stored in the device memory in the form of a map (texture), and the game engine renders this map onto the preset transfer gate model, so that the inside of the gate displays the real-time environment of the real world while the outside of the gate displays the real-time environment of the three-dimensional game virtual scene. The picture displayed in the game is complete and has no sense of cracking: the real-world and virtual-world environments inside and outside the gate are both presented in real time, creating a more realistic transfer gate effect. The player can observe the world environment and position behind the transfer gate in real time through it, which improves the player's sense of experience, the display effect of the game picture, and the playability of the game; the AR game mode brings the player a casual game experience combining the virtual and the real, and provides technical support for adding game playing methods.
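A schematic sketch of the S1/S2 pipeline follows; every engine call here is a hypothetical placeholder for whatever rendering interface the game engine exposes, not a real API:

```python
def render_transfer_gate_frame(camera_frame, engine):
    """S1/S2 sketch: bake the real-world camera frame into a texture
    (the 'first map') and composite it onto the transfer gate model."""
    gate_texture = engine.create_texture(camera_frame)   # first map (S1)
    virtual_frame = engine.render_scene()                # first real-time virtual image frame
    gate_model = engine.get_model("transfer_gate")       # preset transfer gate model
    gate_model.set_texture(gate_texture)                 # real world shown inside the gate
    return engine.composite(virtual_frame, gate_model)   # first real-time rendered frame (S2)
```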
In addition, in this embodiment, based on the positions and directions of the game character and the transfer gate in the game world, the game screen can show both the case where the virtual transfer gate is included and the case where it is not: the gate is visible from some perspectives of the game character in the virtual world and invisible from others. The position and direction of the virtual transfer gate in the three-dimensional game virtual scene may be determined by the player's selection, and once they are determined, the shape and size of the virtual transfer gate displayed in the game change with the position and direction of the game character in the three-dimensional game virtual scene.
Further, as a specific implementation of the method in fig. 1, an embodiment of the present application provides a motion processing apparatus for a game character, as shown in fig. 3, the apparatus includes:
the request response module is used for responding to an AR mode interactive operation request in a game client and calling a game engine in the game client to render to obtain a real-time three-dimensional game virtual scene;
and the motion driving module is used for driving the game character controlled by the game player to perform non-penetration motion in the three-dimensional game virtual scene according to the real-time real displacement data of the game player in the real world.
Optionally, the apparatus further comprises: the file reading module is used for reading a motion surface data file corresponding to a real-time three-dimensional game virtual scene before calling a game engine in the game client to render the real-time three-dimensional game virtual scene, wherein the motion surface data file is used for indicating the movable position of the three-dimensional game virtual scene;
the motion driving module is specifically configured to: acquiring real displacement data corresponding to the game client according to a preset frame rate, and generating target positions of game characters in the three-dimensional game virtual scene frame by frame according to the real displacement data and initial positions of the game characters in the three-dimensional game virtual scene; and according to the motion surface data file and the target position, rendering the motion image frame corresponding to the game role in the three-dimensional game virtual scene frame by frame through a game engine.
Optionally, the motion driving module is further configured to: acquiring positioning data acquired by a positioning device of the game client according to the preset frame rate, and calculating real displacement data corresponding to the game client frame by frame; and determining virtual displacement data of the game role corresponding to the real displacement data according to a preset proportionality coefficient, and determining a target position of the game role in the three-dimensional game virtual scene according to the virtual displacement data and the initial position.
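For illustration, mapping real displacement to a target position via a preset proportionality coefficient might look like this sketch (names and the coefficient value are assumptions):

```python
def target_position(initial_pos, real_displacement_m, scale=10.0):
    """scale: preset proportionality coefficient, e.g. 1 m of real
    walking moves the character 10 units in the virtual scene."""
    dx, dy = real_displacement_m          # real displacement this frame
    x0, y0 = initial_pos                  # character's initial position
    return (x0 + dx * scale, y0 + dy * scale)
```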
Optionally, the motion driving module is further configured to: acquiring first positioning data and second positioning data in the positioning data, wherein a difference between a first sampling frame corresponding to the first positioning data and a second sampling frame corresponding to the second positioning data is a preset number; and performing interpolation based on the first positioning data and the second positioning data to obtain interpolation position data matched with the preset quantity, and calculating interpolation displacement data corresponding to the game client according to the interpolation position data, wherein the real displacement data comprises the interpolation displacement data.
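A sketch of interpolating between two positioning samples a preset number of frames apart (linear interpolation is assumed here; the patent does not mandate the interpolation kind):

```python
def interpolate_positions(p1, p2, preset_count):
    """p1, p2: positions of the first and second sampling frames,
    separated by preset_count frames. Yields one interpolated position
    per intermediate frame; the interpolation displacement per frame is
    the difference between consecutive yielded positions."""
    for i in range(1, preset_count + 1):
        t = i / preset_count
        yield (p1[0] + (p2[0] - p1[0]) * t,
               p1[1] + (p2[1] - p1[1]) * t)
```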
Optionally, the motion surface data file comprises plane coordinates and height data corresponding to movable positions in the three-dimensional game virtual scene determined based on the landform data of the three-dimensional game virtual scene;
the file reading module is specifically configured to: and loading a moving surface corresponding to the moving surface data file in the three-dimensional game virtual scene, wherein the moving surface is hidden when the three-dimensional game virtual scene is displayed.
Optionally, the motion driving module is further configured to: generating a dotting ray based on the target position, and emitting the dotting ray to the moving surface to perform ray dotting, wherein the dotting ray is perpendicular to the plane of the target position;
if the dotting ray intersects with the motion surface, rendering a first motion image frame of the game role moving to the target position, wherein the motion image frame comprises the first motion image frame;
and if the dotting ray does not intersect with the motion surface, rendering a second motion image frame of the game role moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the second motion image frame.
Optionally, if the dotting ray intersects with the motion surface, the motion driving module is further configured to: acquiring the height of an intersection point of the dotting ray and the motion surface;
if the intersection point height is matched with the AR mode riding state information of the three-dimensional game virtual scene, rendering a third motion image frame of the game role moving to the target position according to the riding state corresponding to the intersection point height, wherein the first motion image frame comprises the third motion image frame;
and if the intersection point height is not matched with the AR mode riding state information, rendering a fourth moving image frame of the game role moving at the corresponding current position according to the preset collision movement rule, wherein the first moving image frame comprises the fourth moving image frame.
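As a sketch of the intersection-height check (the height ranges and state names are invented for illustration; the real AR mode riding state information comes from the scene):

```python
def classify_hit(intersection_height, ride_state_info):
    """ride_state_info: {ride_state: (min_height, max_height)} from the
    scene's AR mode riding state information. Returns the ride state for
    the third motion image frame, or None -> fourth frame (collision)."""
    for state, (lo, hi) in ride_state_info.items():
        if lo <= intersection_height <= hi:
            return state
    return None

info = {"rowing": (-2.0, 0.0), "walking": (0.0, 5.0)}
assert classify_hit(-1.0, info) == "rowing"
assert classify_hit(9.0, info) is None
```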
Optionally, the motion surface comprises a two-dimensional navigation grid motion surface; the motion driving module is further configured to:
if the target position is matched with the two-dimensional navigation grid motion surface, acquiring preset riding information of the target position corresponding to the two-dimensional navigation grid motion surface, determining a target riding state of the game role according to the preset riding information and the current riding state of the game role, and rendering a fifth motion image frame of the game role moving to the target position in the target riding state based on the two-dimensional navigation grid motion surface, wherein the motion image frame comprises the fifth motion image frame;
if the target position is not matched with the two-dimensional navigation grid motion surface, rendering a sixth motion image frame of the game role moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the sixth motion image frame.
Optionally, the motion surface comprises a three-dimensional voxel motion surface; the motion driving module is further configured to:
if the target position is matched with the three-dimensional voxel motion surface, acquiring preset riding information of the target position corresponding to the three-dimensional voxel motion surface, determining a target riding state of the game role according to the preset riding information and the current riding state of the game role, determining path finding information of the game role moving from the current position to the target position based on the three-dimensional voxel motion surface, and rendering a seventh moving image frame of the game role moving to the target position in the target riding state according to the path finding information, wherein the moving image frame comprises the seventh moving image frame;
and if the target position is not matched with the three-dimensional voxel motion surface, rendering an eighth motion image frame of the game role moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the eighth motion image frame.
Optionally, the motion surface data file comprises plane coordinates and height data corresponding to movable positions in the three-dimensional game virtual scene determined based on the landform data of the three-dimensional game virtual scene; the motion surface data file can be modified according to the real-time environment of the three-dimensional game scene;
the motion driving module is further configured to:
if the target position is matched with the plane coordinates of the movable position, generating a ninth motion image frame of the game character moving to the target position according to height data corresponding to the target position, wherein the motion image frame comprises the ninth motion image frame;
if the plane coordinates of the target position and the movable position are not matched, generating a tenth motion image frame of the game role moving at the corresponding current position according to the preset collision motion rule, wherein the motion image frame comprises the tenth motion image frame.
Optionally, the motion driving module is further configured to: acquiring visual angle data obtained by an angular motion detection device of the game client according to the preset frame rate; and rendering the motion image frame of the game role matched with the visual angle data frame by frame according to the motion surface data file and the target position, wherein the motion image frame is used for displaying the three-dimensional game virtual scene matched with the visual angle data and the motion action of the game role.
Optionally, the motion driving module is further configured to: acquiring angular motion data acquired by the angular motion detection device according to the preset frame rate; and generating the visual angle data corresponding to the angular motion data according to the mirror moving type corresponding to the three-dimensional game virtual scene.
Optionally, in a case that the mirror movement category includes a first person mirror movement, the perspective data is a first person perspective of the game character in the three-dimensional game virtual scene, and the perspective data changes correspondingly with the angular movement data based on a preset initial perspective of the game character;
and under the condition that the mirror moving category comprises a third person called mirror moving, viewing the third person called view angle of the game role according to a preset position and a preset angle corresponding to the position and the direction of the game role, wherein the direction of the game role is correspondingly changed along with the angular movement data on the basis of a preset initial direction of the game role.
It should be noted that other corresponding descriptions of the functional units involved in the motion processing apparatus for a game character provided in the embodiment of the present application may refer to the corresponding descriptions in the methods in fig. 1 to fig. 2, and are not repeated herein.
Based on the methods shown in fig. 1 to 2, correspondingly, the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the motion processing method for the game character shown in fig. 1 to 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the implementation scenarios of the present application.
Based on the above methods shown in fig. 1 to fig. 2 and the virtual device embodiment shown in fig. 3, in order to achieve the above object, an embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, and the like, where the computer device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the motion processing method of the game character as shown in fig. 1 to 2.
Optionally, the computer device may also include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, sensors, audio circuitry, a WI-FI module, and so forth. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a Bluetooth interface, a WI-FI interface), etc.
It will be appreciated by those skilled in the art that the computer device structure provided in this embodiment does not constitute a limitation on the computer device, which may include more or fewer components, combine certain components, or adopt a different arrangement of components.
The storage medium can also comprise an operating system and a network communication module. An operating system is a program that manages and maintains the hardware and software resources of a computer device, supporting the operation of information handling programs, as well as other software and/or programs. The network communication module is used for realizing communication among the components in the storage medium and communication with other hardware and software in the entity device.
Through the description of the above embodiments, those skilled in the art can clearly understand that the present application may be implemented by software plus a necessary general hardware platform, or by hardware. In response to an AR mode interactive operation request, the motion surface data file corresponding to the three-dimensional game virtual scene indicated by the request is read; meanwhile, real displacement data corresponding to the game client is obtained at a preset frame rate, the target position of the game character in the three-dimensional game virtual scene is determined frame by frame based on the real displacement data, whether the target position is movable is analyzed by using the read motion surface data file, and the motion image frame corresponding to the game character is rendered based on the analysis result. Compared with the prior art, which either runs the AR mode on an existing scene at the cost of the motion performance effect, or builds a separate scene model for the AR mode to avoid clipping, the present application does not need to establish a special game scene model for the AR mode. It only needs the pre-established motion surface data file to analyze whether the game character can follow, in the three-dimensional game virtual scene, the player's motion track in the real world, and renders the motion image frames accordingly. This avoids the clipping caused by forcing the game character in the three-dimensional game virtual scene to follow the player's real-world movement path exactly, improves the playability of the game, adds game playing methods, guarantees the game display effect, and brings players a casual game experience combining the virtual and the real through the AR game mode.
Those skilled in the art can understand that the drawings are only schematic diagrams of a preferred implementation scenario, and the modules or flows in the drawings are not necessarily required for implementing the present application. Those skilled in the art can understand that the modules in the apparatuses of the implementation scenario may be distributed in the apparatuses of the implementation scenario according to the description of the implementation scenario, or may be located, with corresponding changes, in one or more apparatuses different from the present implementation scenario. The modules of the above implementation scenario may be combined into one module, or further split into a plurality of sub-modules.
The serial numbers above are merely for description and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure covers only a few specific implementation scenarios of the present application; the present application is not limited thereto, and any variation that can be conceived by those skilled in the art is intended to fall within the protection scope of the present application.

Claims (15)

1. A method for processing a motion of a game character, comprising:
in response to an AR mode interactive operation request in a game client, calling a game engine in the game client to render a real-time three-dimensional game virtual scene, loading a motion surface corresponding to a motion surface data file of the three-dimensional game virtual scene, wherein the motion surface data file is used for indicating a movable position of the three-dimensional game virtual scene, and the motion surface data file comprises a plane coordinate and height data which are determined based on landform data of the three-dimensional game virtual scene and correspond to the movable position in the three-dimensional game virtual scene;
acquiring real displacement data corresponding to the game client according to a preset frame rate, and generating target positions of game characters in the three-dimensional game virtual scene frame by frame according to the real displacement data and initial positions of the game characters in the three-dimensional game virtual scene;
according to the motion surface data file and the target position, rendering a motion image frame corresponding to the game role in the three-dimensional game virtual scene frame by frame through a game engine, wherein a dotting ray perpendicular to the plane where the target position is located is generated based on the target position, and the dotting ray is emitted to the motion surface for ray dotting; if the dotting ray intersects with the motion surface, rendering a first motion image frame of the game role moving to the target position; and if the dotting ray does not intersect with the motion surface, rendering a second motion image frame of the game role moving at the corresponding current position according to a preset collision motion rule.
2. The method of claim 1, wherein prior to invoking the game engine within the game client to render the real-time three-dimensional game virtual scene, the method further comprises:
and reading a moving surface data file corresponding to the three-dimensional game virtual scene.
3. The method according to claim 2, wherein the acquiring, at a preset frame rate, real displacement data corresponding to the game client, and determining, frame by frame, a target position of the game character in the three-dimensional game virtual scene according to the real displacement data and an initial position of the game character in the three-dimensional game virtual scene specifically includes:
acquiring positioning data acquired by a positioning device of the game client according to the preset frame rate, and calculating real displacement data corresponding to the game client frame by frame;
and determining virtual displacement data of the game role corresponding to the real displacement data according to a preset proportionality coefficient, and determining a target position of the game role in the three-dimensional game virtual scene according to the virtual displacement data and the initial position.
4. The method according to claim 3, wherein the acquiring the positioning data collected by the positioning device of the game client at the preset frame rate and calculating the real displacement data corresponding to the game client frame by frame specifically comprises:
acquiring first positioning data and second positioning data in the positioning data, wherein a difference between a first sampling frame corresponding to the first positioning data and a second sampling frame corresponding to the second positioning data is a preset number;
and performing interpolation based on the first positioning data and the second positioning data to obtain interpolation position data matched with the preset quantity, and calculating interpolation displacement data corresponding to the game client according to the interpolation position data, wherein the real displacement data comprises the interpolation displacement data.
5. The method of claim 2, wherein the motion surface is hidden when the three-dimensional game virtual scene is displayed.
6. The method according to claim 1, wherein the generating a first moving image frame of the game character moving to the target position if the dotting ray intersects with the moving surface includes:
acquiring the height of an intersection point of the dotting ray and the motion surface;
if the intersection point height is matched with the AR mode riding state information of the three-dimensional game virtual scene, rendering a third motion image frame of the game role moving to the target position according to the riding state corresponding to the intersection point height, wherein the first motion image frame comprises the third motion image frame;
and if the intersection point height is not matched with the AR mode riding state information, rendering a fourth moving image frame of the game role moving at the corresponding current position according to the preset collision motion rule, wherein the first moving image frame comprises the fourth moving image frame.
7. The method of claim 5, wherein the motion surface comprises a two-dimensional navigation grid motion surface; the rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion plane data file and the target position specifically includes:
if the target position is matched with the two-dimensional navigation grid motion surface, acquiring preset riding information of the target position corresponding to the two-dimensional navigation grid motion surface, determining the target riding state of the game role according to the preset riding information and the current riding state of the game role, rendering a fifth motion image frame of the game role moving to the target position in the target riding state based on the two-dimensional navigation grid motion surface, wherein the motion image frame comprises the fifth motion image frame, and when the projection of the two-dimensional navigation grid motion surface on a plane comprises the target position or when a dotting ray emitted based on the target position and the two-dimensional navigation grid motion surface have an intersection point, determining that the target position is matched with the two-dimensional navigation grid motion surface;
and if the target position is not matched with the two-dimensional navigation grid motion surface, rendering a sixth motion image frame of the game role moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the sixth motion image frame.
8. The method of claim 5, wherein the motion surface comprises a three-dimensional voxel motion surface; the rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion plane data file and the target position specifically includes:
if the target position is matched with the three-dimensional voxel moving surface, acquiring preset riding information of the target position corresponding to the three-dimensional voxel moving surface, determining a target riding state of the game role according to the preset riding information and the current riding state of the game role, determining path finding information of the game role moving from the current position to the target position based on the three-dimensional voxel moving surface, rendering a seventh motion image frame of the game role moving to the target position in the target riding state according to the path finding information, wherein the motion image frame comprises the seventh motion image frame, and when the projection of the three-dimensional voxel moving surface on a plane comprises the target position or a dotting ray emitted based on the target position can be shot on the three-dimensional voxel moving surface, determining that the target position is matched with the three-dimensional voxel moving surface;
and if the target position is not matched with the three-dimensional voxel motion surface, rendering an eighth motion image frame of the game role moving at the corresponding current position according to a preset collision motion rule, wherein the motion image frame comprises the eighth motion image frame.
9. The method of claim 1, wherein the athletic surface data file includes planar coordinate and elevation data corresponding to a movable location in the three-dimensional gaming virtual scene determined based on topographical data of the three-dimensional gaming virtual scene; the motion surface data file can be modified according to the real-time environment of the three-dimensional game scene;
the rendering, by a game engine, the motion image frame corresponding to the game character in the three-dimensional game virtual scene frame by frame according to the motion plane data file and the target position specifically includes:
if the target position is matched with the plane coordinates of the movable position, generating a ninth motion image frame of the game character moving to the target position according to height data corresponding to the target position, wherein the motion image frame comprises the ninth motion image frame;
if the plane coordinates of the target position and the movable position are not matched, generating a tenth motion image frame of the game role moving at the corresponding current position according to the preset collision motion rule, wherein the motion image frame comprises the tenth motion image frame.
10. The method according to claim 1, wherein the generating, frame by frame, a motion image frame corresponding to the game character in the three-dimensional game virtual scene according to the motion plane data file and the target position specifically includes:
acquiring visual angle data obtained by an angular motion detection device of the game client according to the preset frame rate;
and rendering the motion image frame of the game role matched with the visual angle data frame by frame according to the motion surface data file and the target position, wherein the motion image frame is used for displaying the three-dimensional game virtual scene matched with the visual angle data and the motion action of the game role.
11. The method according to claim 10, wherein the obtaining of the perspective data obtained by the angular motion detection device of the game client at the preset frame rate specifically includes:
acquiring angular motion data acquired by the angular motion detection device according to the preset frame rate;
and generating the visual angle data corresponding to the angular motion data according to the mirror motion category corresponding to the three-dimensional game virtual scene.
12. The method of claim 11,
under the condition that the mirror movement category comprises a first person mirror movement, the view angle data is a first person view angle of the game role in the three-dimensional game virtual scene, and the view angle data generates corresponding change along with the angular movement data on the basis of a preset initial view angle of the game role;
and under the condition that the mirror moving category comprises a third person called mirror moving, viewing the third person called view angle of the game role according to a preset position and a preset angle corresponding to the position and the direction of the game role, wherein the direction of the game role is correspondingly changed along with the angular movement data on the basis of a preset initial direction of the game role.
13. A game character motion processing apparatus, comprising:
the request response module is used for responding to an AR mode interactive operation request in a game client and calling a game engine in the game client to render to obtain a real-time three-dimensional game virtual scene;
the file reading module is used for loading a moving surface corresponding to a moving surface data file of the three-dimensional game virtual scene, the moving surface data file is used for indicating a movable position of the three-dimensional game virtual scene, and the moving surface data file comprises plane coordinates and height data corresponding to the movable position in the three-dimensional game virtual scene determined based on landform data of the three-dimensional game virtual scene;
the motion driving module is used for acquiring real displacement data corresponding to the game client according to a preset frame rate, and generating a target position of a game role in the three-dimensional game virtual scene frame by frame according to the real displacement data and an initial position of the game role in the three-dimensional game virtual scene; according to the motion surface data file and the target position, rendering a motion image frame corresponding to the game role in the three-dimensional game virtual scene frame by frame through a game engine, wherein a dotting ray perpendicular to the plane where the target position is located is generated based on the target position, and the dotting ray is emitted to the motion surface for ray dotting; if the dotting ray intersects with the motion surface, rendering a first motion image frame of the game role moving to the target position; and if the dotting ray does not intersect with the motion surface, rendering a second motion image frame of the game role moving at the corresponding current position according to a preset collision motion rule.
14. A storage medium on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 12.
15. A computer device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 12 when executing the computer program.
CN202110282546.0A 2021-03-16 2021-03-16 Game role movement processing method and device, storage medium and computer equipment Active CN112862935B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110282546.0A CN112862935B (en) 2021-03-16 2021-03-16 Game role movement processing method and device, storage medium and computer equipment
PCT/CN2021/121092 WO2022193612A1 (en) 2021-03-16 2021-09-27 Motion processing method and apparatus for game character, and storage medium and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110282546.0A CN112862935B (en) 2021-03-16 2021-03-16 Game role movement processing method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN112862935A CN112862935A (en) 2021-05-28
CN112862935B true CN112862935B (en) 2023-03-17

Family

ID=75994795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110282546.0A Active CN112862935B (en) 2021-03-16 2021-03-16 Game role movement processing method and device, storage medium and computer equipment

Country Status (2)

Country Link
CN (1) CN112862935B (en)
WO (1) WO2022193612A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862935B (en) * 2021-03-16 2023-03-17 天津亚克互动科技有限公司 Game role movement processing method and device, storage medium and computer equipment
CN113821345B (en) * 2021-09-24 2023-06-30 网易(杭州)网络有限公司 Method and device for rendering moving track in game and electronic equipment
CN114125552A (en) * 2021-11-30 2022-03-01 完美世界(北京)软件科技发展有限公司 Video data generation method and device, storage medium and electronic device
TWI799195B (en) * 2021-12-10 2023-04-11 宅妝股份有限公司 Method and system for implementing third-person perspective with a virtual object
CN116414223A (en) * 2021-12-31 2023-07-11 中兴通讯股份有限公司 Interaction method and device in three-dimensional space, storage medium and electronic device
CN116036601B (en) * 2023-01-28 2023-06-09 腾讯科技(深圳)有限公司 Game processing method and device, computer equipment and storage medium
CN116328309B (en) * 2023-03-27 2023-10-13 广州美术学院 High-order demand game interaction method aiming at visual impairment crowd Wen Nong travel virtual scene
CN117357894B (en) * 2023-11-01 2024-03-29 北京畅游天下网络技术集团有限公司 Three-dimensional scene generation method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107441714A (en) * 2017-06-01 2017-12-08 杨玉苹 A kind of image processing method and its device, shooting game fighting system and its method of work for realizing AR first person shooting games
CN111744202A (en) * 2020-06-29 2020-10-09 完美世界(重庆)互动科技有限公司 Method and device for loading virtual game, storage medium and electronic device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101780321B (en) * 2009-12-30 2012-01-25 永春至善体育用品有限公司 Method for making high-presence virtual reality of exercise fitness equipment, and interactive system and method based on virtual reality
CN104658038B (en) * 2015-03-12 2019-01-18 南京梦宇三维技术有限公司 3-dimensional digital content intelligence production method and manufacturing system based on motion capture
US10022628B1 (en) * 2015-03-31 2018-07-17 Electronic Arts Inc. System for feature-based motion adaptation
JP6719308B2 (en) * 2016-07-13 2020-07-08 株式会社バンダイナムコエンターテインメント Simulation system and program
CN107479699A (en) * 2017-07-28 2017-12-15 深圳市瑞立视多媒体科技有限公司 Virtual reality exchange method, apparatus and system
CN108303719A (en) * 2018-01-30 2018-07-20 上海电力学院 A method of judging whether monitoring client dynamic position exceeds virtual fence
CN108427501B (en) * 2018-03-19 2022-03-22 网易(杭州)网络有限公司 Method and device for controlling movement in virtual reality
EP3644322B1 (en) * 2018-10-25 2023-12-27 Tata Consultancy Services Limited Method and system for interpreting neural interplay involving proprioceptive adaptation during a dual task paradigm
CN110280014B (en) * 2019-05-21 2022-09-13 西交利物浦大学 Method for reducing dizziness in virtual reality environment
CN110665219A (en) * 2019-10-14 2020-01-10 网易(杭州)网络有限公司 Operation control method and device for virtual reality game
CN110772791B (en) * 2019-11-05 2023-07-21 网易(杭州)网络有限公司 Route generation method, device and storage medium of three-dimensional game scene
CN111167120A (en) * 2019-12-31 2020-05-19 网易(杭州)网络有限公司 Method and device for processing virtual model in game
CN111249729B (en) * 2020-02-18 2023-10-20 网易(杭州)网络有限公司 Game character display method and device, electronic equipment and storage medium
CN111318022B (en) * 2020-03-19 2023-04-14 网易(杭州)网络有限公司 Game scene generation method and device in game, electronic device and storage medium
CN112316424B (en) * 2021-01-06 2021-03-26 腾讯科技(深圳)有限公司 Game data processing method, device and storage medium
CN112862935B (en) * 2021-03-16 2023-03-17 天津亚克互动科技有限公司 Game role movement processing method and device, storage medium and computer equipment


Also Published As

Publication number Publication date
WO2022193612A1 (en) 2022-09-22
CN112862935A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112862935B (en) Game role movement processing method and device, storage medium and computer equipment
US10188949B2 (en) Game object control system and program
US20210252398A1 (en) Method and system for directing user attention to a location based game play companion application
US11887258B2 (en) Dynamic integration of a virtual environment with a physical environment
US11250617B1 (en) Virtual camera controlled by a camera control device
CN110665230B (en) Virtual role control method, device, equipment and medium in virtual world
CN112933606B (en) Game scene conversion method and device, storage medium and computer equipment
CN109314802B (en) Game play companion application based on in-game location
Vidal Jr et al. MAGIS: mobile augmented-reality games for instructional support
CN109314800B (en) Method and system for directing user attention to location-based game play companion application
CN113209618B (en) Virtual character control method, device, equipment and medium
Kasapakis et al. Occlusion handling in outdoors augmented reality games
EP3995190A1 (en) Virtual environment image display method and apparatus, device and medium
CN110812841B (en) Method, device, equipment and medium for judging virtual surface in virtual world
CN112891940B (en) Image data processing method and device, storage medium and computer equipment
CN112699208B (en) Map way finding method, device, equipment and medium
CN109417651B (en) Generating challenges using location-based gaming companion applications
KR102317103B1 (en) Battlefield online game implementing augmented reality using iot device
CN111973984A (en) Coordinate control method and device for virtual scene, electronic equipment and storage medium
Garcia et al. Modifying a game interface to take advantage of advanced I/O devices
Chung Metaverse XR Components
CN114247132B (en) Control processing method, device, equipment, medium and program product for virtual object
US20230277943A1 (en) Mapping traversable space in a scene using a three-dimensional mesh
CN110314377B (en) Method and device for randomly generating object moving path in three-dimensional space
CN116764215A (en) Virtual object control method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant