CN111905365B - Method and device for dragging game scene and electronic equipment
- Publication number: CN111905365B (application CN202010840505.4A)
- Authority: CN (China)
- Prior art keywords: camera, dragging, game scene, determining, plane
- Legal status: Active
Classifications
- A63F13/525 Changing parameters of virtual cameras (A63F13/00 Video games; A63F13/50 Controlling the output signals based on the game progress; A63F13/52 involving aspects of the displayed game scene)
- A63F13/42 Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle (A63F13/00 Video games; A63F13/40 Processing input control signals of video game devices)
Abstract
The invention provides a method, an apparatus, and an electronic device for dragging a game scene. In response to a drag operation on the game scene, the starting position and termination position of the drag operation and the camera position of the virtual camera when the drag operation is triggered are acquired; a first position is determined on a designated plane of the three-dimensional game space along the direction determined by the camera position and the starting position; a second position is determined on the designated plane along the direction determined by the camera position and the termination position; and the virtual camera is controlled to move based on the first position and the second position to obtain the game scene triggered by the drag operation. In this method, a simple geometric relationship is derived from the starting and termination positions of the drag operation and the regularity of the camera positions before and after the drag, so that the camera position after the drag operation is determined and the corresponding game scene is obtained.
Description
Technical Field
The present invention relates to the field of game technologies, and in particular, to a method, an apparatus, and an electronic device for dragging a game scene.
Background
A virtual camera is generally provided in the three-dimensional space of a game. The virtual camera is located at a specific position in the three-dimensional space and shoots the space from a specific angle, so that a corresponding game scene is captured and presented to the user. During a game, the user often drags the game scene on a touch screen so that the scene shown on the screen follows the drag, allowing the user to observe the surroundings of the scene. The position of the virtual camera is calculated in real time according to the user's drag operation, and a game scene matching the drag is presented by changing the camera position. However, in the related art, the algorithm for calculating the virtual camera position from the drag operation is complex, computationally heavy, and memory-intensive, so the change of the game scene is prone to stuttering, which degrades the user's game experience.
Disclosure of Invention
Accordingly, the present invention is directed to a method, an apparatus, and an electronic device for dragging a game scene that reduce algorithm complexity, computation, and memory usage, thereby improving the fluency of game-scene changes and the user's game experience.
In a first aspect, an embodiment of the present invention provides a method for dragging a game scene, where a graphical user interface is provided by a terminal device and includes at least the game scene, the game scene being obtained by a virtual camera shooting the three-dimensional game space in which it is located. The method comprises the following steps: in response to a drag operation on the game scene, acquiring the starting position and termination position of the drag operation and the camera position of the virtual camera when the drag operation is triggered; determining a first position on a designated plane of the three-dimensional game space along the direction determined by the camera position and the starting position; determining a second position on the designated plane along the direction determined by the camera position and the termination position; and controlling the virtual camera to move based on the first position and the second position to obtain the game scene triggered by the drag operation.
Further, when the drag operation terminates, the touch point of the drag operation points to the same position in the game scene as it did when the drag operation started.
Further, the step of acquiring the start position and the end position of the drag operation includes: acquiring a starting position and an ending position of a touch point of a dragging operation; the touch control point is positioned in the graphical user interface; determining a starting position of a drag operation in the three-dimensional game space based on the starting position of the touch point; based on the termination position of the touch point, a termination position of the drag operation is determined in the three-dimensional game space.
Further, the specified plane of the three-dimensional game space includes: in a coordinate system corresponding to the three-dimensional game space, a plane is formed by coordinate points with zero values in the height direction.
Further, the step of determining the first position in the specified plane of the three-dimensional game space along the direction determined by the camera position and the starting position includes: and calculating a first intersection point of a straight line determined by the position of the camera and the initial position and a designated plane, and determining the first intersection point as a first position.
Further, the step of calculating a first intersection of the straight line determined by the camera position and the start position and the designated plane includes: determining a first target vector taking the camera position as a starting point and taking a first intersection point as an end point based on the direction of a vector formed by the camera position and the starting position and the height of the camera position from a designated plane; a first intersection point is determined based on the first target vector and the camera position.
Further, the step of determining a first intersection point based on the first target vector and the camera position includes: the first intersection point is determined by the following formula: P1 = V + cur_camera_pos, where P1 is the first intersection point; V is the first target vector, V = forward * (cur_camera_pos.y / dot); forward is the direction of the vector formed by the camera position and the starting position; cur_camera_pos.y is the height of the camera position from the designated plane; dot is the dot product of forward and the height direction from the camera position toward the designated plane, i.e., the cosine of the angle between them; and cur_camera_pos is the camera position.
Further, the step of determining the second position in the designated plane along the direction determined by the camera position and the termination position includes: and calculating a second intersection point of the straight line determined by the position of the camera and the termination position and the designated plane, and determining the second intersection point as a second position.
Further, the step of calculating a second intersection of the straight line determined by the camera position and the termination position with the designated plane includes: determining a second target vector with the camera position as a starting point and a second intersection point as an end point based on the direction of the vector formed by the camera position and the end position and the height of the camera position from the designated plane; a second intersection point is determined based on the second target vector and the camera position.
Further, the step of determining a second intersection point based on the second target vector and the camera position includes: the second intersection point is determined by the following formula: P2 = M + cur_camera_pos, where P2 is the second intersection point; M is the second target vector, M = forward' * (cur_camera_pos.y / dot'); forward' is the direction of the vector formed by the camera position and the termination position; cur_camera_pos.y is the height of the camera position from the designated plane; dot' is the dot product of forward' and the height direction from the camera position toward the designated plane, i.e., the cosine of the angle between them; and cur_camera_pos is the camera position.
Further, the step of controlling the movement of the virtual camera based on the first position and the second position includes: calculating an offset vector taking the second position as a starting point and taking the first position as an end point; the virtual camera is controlled to move from the camera position to a new position along the offset vector.
In a second aspect, an embodiment of the present invention provides an apparatus for dragging a game scene, where a graphical user interface is provided by a terminal device, where the graphical user interface includes at least the game scene; the game scene is obtained by shooting a three-dimensional game space by a virtual camera in the three-dimensional game space; the device comprises: the acquisition module is used for responding to the dragging operation aiming at the game scene, acquiring the starting position and the ending position of the dragging operation and the camera position of the virtual camera when the dragging operation is triggered; a determining module, configured to determine a first position on a designated plane of the three-dimensional game space along a direction determined by the camera position and the starting position; determining a second position at the designated plane along the direction determined by the camera position and the termination position; and the moving module is used for controlling the virtual camera to move based on the first position and the second position so as to obtain a game scene triggered by the dragging operation.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the method of dragging a game scene of any one of the first aspects.
In a fourth aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method of dragging a game scene of any one of the first aspects.
The embodiment of the invention has the following beneficial effects:
The embodiments of the present invention provide a method, an apparatus, and an electronic device for dragging a game scene: in response to a drag operation on the game scene, the starting position and termination position of the drag operation and the camera position of the virtual camera when the drag operation is triggered are acquired; a first position is determined on a designated plane of the three-dimensional game space along the direction determined by the camera position and the starting position; a second position is determined on the designated plane along the direction determined by the camera position and the termination position; and the virtual camera is controlled to move based on the first position and the second position to obtain the game scene triggered by the drag operation. In this manner, a simple geometric relationship is derived from the starting and termination positions of the drag operation and the regularity of the camera positions before and after the drag, so that the camera position after the drag operation is determined and the corresponding game scene is obtained.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for dragging a game scene according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for dragging a game scene according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a method for dragging a game scene according to an embodiment of the present invention;
FIG. 4 is a flowchart of another method for dragging a game scene according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus for dragging a game scene according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In a current three-dimensional game scene, when a user touches the game screen and moves, the scene needs to follow the finger; that is, before and after the camera moves, the point of the game scene under the finger remains unchanged. The method, apparatus, and electronic device for dragging a game scene provided by the embodiments of the present invention can be applied to various game scenes, in particular to game scenes with a drag-operation function.
The method of dragging a game scene in one embodiment of the present disclosure may be run on a terminal device or a server. The terminal device may be a local terminal device. When the method for dragging the game scene runs on the server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications may run under the cloud interaction system, for example cloud games. A cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the running body of the game program is separated from the body that presents the game screen: the storage and running of the method for dragging a game scene are completed on a cloud game server, while the client device only receives and sends data and presents the game screen. For example, the client device may be a display device close to the user side with a data transmission function, such as a mobile terminal, a television, a computer, or a palmtop computer, while the terminal device that performs the information processing is the cloud game server. When playing, the player operates the client device to send operation instructions to the cloud game server; the server runs the game according to the instructions, encodes and compresses data such as the game screen, and returns it to the client device through the network; finally the client device decodes the data and outputs the game screen.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used to present the game screen. The local terminal device interacts with the player through the graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways; for example, it may be rendered and displayed on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes game visuals, and a processor for running the game, generating the graphical user interface, and controlling its display on the display screen.
In a possible implementation manner, the embodiment of the present invention provides a method for dragging a game scene, and a graphical user interface is provided through a first terminal device, where the first terminal device may be the aforementioned local terminal device or the aforementioned client device in the cloud interaction system.
For ease of understanding, a method for dragging a game scene disclosed in an embodiment of the present invention is first described in detail. The method may be applied to a terminal device, through which a graphical user interface is provided; the graphical user interface includes at least the game scene, which is obtained by a virtual camera shooting the three-dimensional game space in which it is located. The terminal device may be a mobile phone, a computer, a notebook, a tablet, or similar equipment. The graphical user interface may be the interface currently displayed in the game, containing the three-dimensional space scene, which may specifically include game characters, the game scene, and so on. During the game, the graphical user interface can be dragged and the virtual camera controlled to move accordingly, so that the corresponding game scene is obtained.
The execution main body of the method is a terminal device, as shown in fig. 1, and the method comprises the following steps:
step S102, responding to the dragging operation aiming at the game scene, and acquiring the starting position and the ending position of the dragging operation and the camera position of the virtual camera when the dragging operation is triggered;
The game scene is obtained by shooting a three-dimensional game space by a virtual camera in the three-dimensional game space; the drag operation is generally a movement operation of a finger on a graphical user interface provided by the terminal device, during which the finger is in contact with the screen until the drag operation ends; for example, when a user clicks a screen of the terminal device and moves on the screen, a game scene moves along with a finger; the initial position of the drag operation refers to the initial position of the drag operation of the finger in the three-dimensional game space before dragging the game scene; similarly, the end position of the drag operation refers to the end position of the drag operation in the three-dimensional game space after the drag game scene is completed; the virtual camera can present the current game scene on the screen of the terminal equipment, so that a user can observe the current game scene.
Specifically, when a user's finger performs a drag operation on a screen of the terminal device, the terminal device may trigger the drag operation of the game scene, and in response to the trigger operation, the start position and the end position of the finger drag may be obtained through a movement track of the finger drag, and the start position and the end position of the drag operation in the corresponding three-dimensional game space may be obtained; in addition, when the drag operation is triggered, the camera position of the virtual camera in the three-dimensional game space can be obtained; the above-mentioned position is usually a world coordinate in a three-dimensional game space, and may include coordinate values of an X-axis, a Y-axis, and a Z-axis, for example.
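By way of illustration only, the data acquired at this step can be grouped as in the following Python sketch; the names pre_pos, cur_pos, and cur_camera_pos follow the coordinate names used later in this description, while the DragContext structure itself is a hypothetical convenience, not part of the claimed method:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DragContext:
    """Data captured when a drag operation on the game scene is triggered."""
    pre_pos: np.ndarray         # 2D screen coordinates of the starting touch point
    cur_pos: np.ndarray         # 2D screen coordinates of the terminating touch point
    cur_camera_pos: np.ndarray  # 3D world coordinates (X, Y, Z) of the virtual camera
```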
Step S104, determining a first position on a designated plane of the three-dimensional game space along the direction determined by the position of the camera and the initial position; determining a second position at the designated plane along the direction determined by the camera position and the termination position;
The designated plane of the three-dimensional game space may be the ground at which the finger points. The first position is the position on the ground pointed to by the finger at the starting position of the drag operation, before the virtual camera moves; the second position is the position on the ground pointed to by the finger at the termination position of the drag operation, before the virtual camera moves. The ground generally contains the game scene, and the first and second positions are world coordinates in the three-dimensional game space. Specifically, the direction in which the camera position points to the starting position can be determined from the coordinates of the camera position and the starting position, and the first position coordinates on the designated plane of the three-dimensional game space are determined from that direction; likewise, the direction in which the camera position points to the termination position can be determined from the coordinates of the camera position and the termination position, and the second position coordinates on the designated plane are determined from that direction.
Wherein the designated plane of the three-dimensional game space includes: in a coordinate system corresponding to the three-dimensional game space, a plane is formed by coordinate points with zero values in the height direction.
The coordinate system corresponding to the three-dimensional game space can be understood as world coordinates of the game scene; for example, the coordinate system corresponding to the three-dimensional game space may include an X axis, a Y axis, and a Z axis, where the Y axis may be a height direction, a plane formed by the X axis and the Z axis may be a game scene plane, and a plane formed by a coordinate point with a value of zero in the height direction may be understood as a plane with y=0 in the coordinate system.
And step S106, controlling the virtual camera to move based on the first position and the second position so as to obtain a game scene triggered by the dragging operation.
The game scene after the triggering of the drag operation can be the same as the game scene before the drag operation or different from the game scene before the drag operation; specifically, the position coordinates of the virtual camera after the triggering of the drag operation can be determined according to the coordinates of the first position and the second position in the three-dimensional game space; and controlling the virtual camera to move to the camera position triggered by the drag operation according to the camera position coordinates of the virtual camera and the camera position coordinates triggered by the drag operation, and shooting a game scene triggered by the drag operation in the three-dimensional game space at the position.
The embodiment of the invention provides a method for dragging a game scene: in response to a drag operation on the game scene, the starting position and termination position of the drag operation and the camera position of the virtual camera when the drag operation is triggered are acquired; a first position is determined on a designated plane of the three-dimensional game space along the direction determined by the camera position and the starting position; a second position is determined on the designated plane along the direction determined by the camera position and the termination position; and the virtual camera is controlled to move based on the first position and the second position to obtain the game scene triggered by the drag operation. In this manner, a simple geometric relationship is derived from the starting and termination positions of the drag operation and the regularity of the camera positions before and after the drag, so that the game scene after the drag operation is triggered is determined; this reduces the complexity of the algorithm, reduces the amount of computation, and occupies fewer memory resources, thereby improving the fluency of game-scene changes and the user's game experience.
When the drag operation is terminated, the touch point of the drag operation is positioned at the same position in the game scene as the position of the touch point in the game scene when the drag operation is started.
The touch point of the drag operation may be on the game screen of the terminal device. It can be understood that when the user's finger touches the game screen and moves, the game scene needs to move along with the drag of the finger; that is, the position of the game scene pointed to by the finger on the screen is unchanged before and after the virtual camera moves.
The embodiment provides another method for dragging a game scene, which is implemented on the basis of the above embodiment, and mainly describes a specific implementation process of the step of acquiring the start position and the end position of the dragging operation; as shown in fig. 2, the method comprises the steps of:
step S202, acquiring a starting position and an ending position of a touch point of a dragging operation; the touch control point is positioned in the graphical user interface;
The touch point of the drag operation is located in the graphical user interface, which can be understood as the screen of the terminal device. The starting position and termination position of the touch point of the drag operation may be positions in screen coordinates or in the coordinates of the graphical user interface; the coordinates of the starting and termination positions are two-dimensional. Referring to the schematic diagram of the drag game scene shown in fig. 3, S1 is the starting position of the touch point of the drag operation, and S2 is the termination position of the touch point of the drag operation; the plane where S1 and S2 are located is the plane of the touch points, i.e., the graphical user interface, which can also be understood as the screen of the terminal device.
Specifically, screen coordinates of a start position and an end position of a touch point of a drag operation may be directly obtained according to a graphical user interface provided by the terminal device.
Step S204, determining the starting position of a dragging operation in a three-dimensional game space based on the starting position of the touch point; determining a termination position of the drag operation in the three-dimensional game space based on the termination position of the touch point;
Specifically, the starting position of the drag operation can be determined in the three-dimensional game space based on the starting position of the touch point and the camera position of the virtual camera; the termination position of the drag operation is determined in the three-dimensional game space based on the termination position of the touch point and the camera position of the virtual camera. Referring to the schematic diagram shown in fig. 3, the world coordinate pre_world_pos of the starting position of the drag operation is calculated from the two-dimensional coordinate pre_pos of the starting position S1 of the touch point and the world coordinate cur_camera_pos of the camera position C of the virtual camera; the world coordinate cur_world_pos of the termination position of the drag operation is calculated from the two-dimensional coordinate cur_pos of the termination position S2 of the touch point and the world coordinate cur_camera_pos of the camera position C. Each game engine may provide its own algorithm for this conversion; an existing method of converting screen coordinates into three-dimensional space coordinates may be used.
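The conversion itself is engine-specific, so the following Python sketch is only one possible unprojection, assuming a pinhole camera whose inverse view-projection matrix is available; it is not the conversion prescribed by this embodiment:

```python
import numpy as np

def screen_to_world(screen_pos, screen_size, inv_view_proj, depth_ndc=0.0):
    """Unproject a 2D screen point into 3D world coordinates.

    inv_view_proj is assumed to be the inverse of the camera's combined
    view-projection matrix; depth_ndc selects a depth along the view ray.
    """
    x, y = screen_pos
    w, h = screen_size
    # Map pixel coordinates to normalized device coordinates in [-1, 1]
    # (the y-flip assumes a top-left screen origin).
    ndc = np.array([2.0 * x / w - 1.0, 1.0 - 2.0 * y / h, depth_ndc, 1.0])
    world = inv_view_proj @ ndc
    return world[:3] / world[3]  # perspective divide

# e.g. pre_world_pos = screen_to_world(pre_pos, (w, h), inv_view_proj)
```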
Step S206, determining a first position on a designated plane of the three-dimensional game space along the direction determined by the position of the camera and the initial position; determining a second position at the designated plane along the direction determined by the camera position and the termination position;
step S208, based on the first position and the second position, the virtual camera is controlled to move so as to obtain a game scene triggered by the dragging operation.
In the embodiment of the invention, a starting position and an ending position of a touch point of a drag operation are obtained in response to the drag operation aiming at a game scene; determining a starting position of a drag operation in the three-dimensional game space based on the starting position of the touch point; determining the termination position of the dragging operation in the three-dimensional game space based on the termination position of the touch point, and acquiring the camera position of the virtual camera when the dragging operation is triggered; determining a first position in a designated plane of the three-dimensional game space along a direction determined by the camera position and the starting position; determining a second position at the designated plane along the direction determined by the camera position and the termination position; and controlling the virtual camera to move based on the first position and the second position so as to obtain a game scene triggered by the drag operation. According to the method, a simple geometric relation is obtained according to the starting position and the ending position of the dragging operation and the position rules of the cameras before and after dragging, so that the positions of the cameras after the dragging operation are determined, and further, corresponding game scenes are obtained.
This embodiment provides another method for dragging a game scene, implemented on the basis of the above embodiments; it focuses on a specific implementation of the step of determining the first position on the designated plane of the three-dimensional game space along the direction determined by the camera position and the starting position, and of the step of determining the second position on the designated plane along the direction determined by the camera position and the termination position. As shown in fig. 4, the method comprises the following steps:
step S402, acquiring a starting position and an ending position of a touch point of a dragging operation; the touch control point is positioned in the graphical user interface;
step S404, determining the starting position of the dragging operation in the three-dimensional game space based on the starting position of the touch point; determining a termination position of the drag operation in the three-dimensional game space based on the termination position of the touch point;
step S406, calculating a straight line determined by the position and the initial position of the camera, and determining a first intersection point of the straight line and a designated plane as a first position;
The first position is located on the designated plane. The straight line formed by the camera position coordinates and the starting position coordinates can be calculated in the world coordinate system corresponding to the three-dimensional game space; the straight line may be expressed as a line equation or as a direction vector in the rectangular coordinate system, and can be extended infinitely. Its intersection with the designated plane, i.e., the plane formed by coordinate points whose value in the height direction of the world coordinate system is zero, is the intersection point P1 with the designated plane shown in fig. 3, namely the first position.
For the above step S406, in one embodiment:
a1, determining a first target vector which takes the camera position as a starting point and takes a first intersection point as an end point based on the direction of a vector formed by the camera position and the starting position and the height of the camera position from a designated plane;
referring to fig. 3, the direction of the vector formed by the camera position C and the start position S1 is the same as the direction from C to P1, so that the direction of the first target vector can be calculated by subtracting the world coordinate pre_world_pos of the start position S1 from the world coordinate cur_camera_pos of the camera position C; the height of the camera position from the designated plane can be obtained from the value of the height coordinates of the world coordinates cur_camera_pos of the camera position C; specifically, the angle between the straight line CS1 and the direction perpendicular to the specified plane can be calculated by the direction in which the camera position C points to the starting position S1 and the direction vector perpendicular to the specified plane; by the angle, the height of the camera position from the designated plane, and the direction of the vector composed of the camera position and the start position, a first target vector with the camera position as the start point and the first intersection point as the end point can be obtained.
Step A2, determining a first intersection point based on the first target vector and the camera position.
Since the first target vector is a vector having the camera position as a start point and the first intersection point as an end point, the world coordinate of the first intersection point can be obtained by adding the world coordinate of the camera position and the first target vector.
For step A2, in one embodiment, the first intersection point is determined by the following formula: P1 = V + cur_camera_pos, where P1 is the first intersection point; V is the first target vector, V = forward * (cur_camera_pos.y / dot); forward is the direction of the vector formed by the camera position and the starting position; cur_camera_pos.y is the height of the camera position from the designated plane; dot is the dot product of forward and the height direction from the camera position toward the designated plane, i.e., the cosine of the angle between them; and cur_camera_pos is the camera position.

Here cur_camera_pos.y refers to the y value in the world coordinates of the camera position C, i.e., the height of the camera position from the designated plane; the height direction from the camera position toward the designated plane may be set to h_forward = (0, -1, 0); forward can be obtained by subtracting the world coordinates of the camera position C from the world coordinates of the starting position S1; and dot can be obtained as the dot product of h_forward and forward. Specifically, the distance from C to P1 is given by cur_camera_pos.y / dot, and the first target vector from C to P1 is obtained through V = forward * (cur_camera_pos.y / dot); since the starting point of this vector is the camera position, the coordinates of the first intersection point are P1 = V + cur_camera_pos.
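The formula translates directly into code. The following Python sketch (function and variable names are illustrative, and numpy stands in for an engine's vector math) computes the intersection of the ray from the camera through an unprojected drag position with the designated y = 0 plane:

```python
import numpy as np

H_FORWARD = np.array([0.0, -1.0, 0.0])  # height direction toward the y = 0 plane

def intersect_designated_plane(cur_camera_pos, world_pos):
    """Return the intersection of the ray from the camera through world_pos
    with the designated plane y = 0, i.e. P = V + cur_camera_pos with
    V = forward * (cur_camera_pos.y / dot)."""
    forward = world_pos - cur_camera_pos         # direction from camera to drag position
    forward = forward / np.linalg.norm(forward)  # normalize so cur_camera_pos.y / dot is a distance
    dot = float(np.dot(H_FORWARD, forward))      # cosine of the angle to the downward direction
    if dot <= 1e-6:
        raise ValueError("ray does not point toward the designated plane")
    v = forward * (cur_camera_pos[1] / dot)      # target vector from the camera to the plane
    return cur_camera_pos + v
```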
Step S408, calculating a straight line determined by the position of the camera and the termination position, and determining a second intersection point of the straight line and the designated plane as a second position;
The second position is located on the designated plane. The straight line formed by the camera position coordinates and the termination position coordinates can be calculated in the world coordinate system corresponding to the three-dimensional game space; the straight line may be expressed as a line equation or as a direction vector in the world coordinate system, and can be extended infinitely. Its intersection with the designated plane, i.e., the plane formed by coordinate points whose value in the height direction of the world coordinate system is zero, is the intersection point P2 with the designated plane shown in fig. 3, namely the second position.
For step S408, in one embodiment:
step B1, determining a second target vector which takes the camera position as a starting point and takes a second intersection point as an end point based on the direction of a vector formed by the camera position and the end position and the height of the camera position from a designated plane;
Referring to fig. 3, the direction of the vector formed by the camera position C and the termination position S2 is the same as the direction from C to P2, so the direction of the second target vector can be calculated by subtracting the world coordinate cur_camera_pos of the camera position C from the world coordinate cur_world_pos of the termination position S2. The height of the camera position from the designated plane can be obtained from the height component of the world coordinate cur_camera_pos of the camera position C. Specifically, the angle between the straight line CS2 and the direction perpendicular to the designated plane can be calculated from the direction in which the camera position C points to the termination position S2 and the direction vector perpendicular to the designated plane; from this angle, the height of the camera position from the designated plane, and the direction of the vector formed by the camera position and the termination position, the second target vector with the camera position as starting point and the second intersection point as end point can be obtained.
And step B2, determining a second intersection point based on the second target vector and the camera position.
Since the second target vector is a vector having the camera position as a start point and the second intersection point as an end point, the world coordinate of the second intersection point can be obtained by adding the world coordinate of the camera position and the second target vector.
For step B2 above, in one embodiment, the second intersection point is determined by the following formula: P2 = M + cur_camera_pos, where P2 is the second intersection point; M is the second target vector, M = forward' * (cur_camera_pos.y / dot'); forward' is the direction of the vector formed by the camera position and the termination position; cur_camera_pos.y is the height of the camera position from the designated plane; dot' is the dot product of forward' and the height direction from the camera position toward the designated plane, i.e., the cosine of the angle between them; and cur_camera_pos is the camera position.

Here cur_camera_pos.y refers to the y value in the world coordinates of the camera position C, i.e., the height of the camera position from the designated plane; the height direction from the camera position toward the designated plane may be set to h_forward = (0, -1, 0); forward' can be obtained by subtracting the world coordinates of the camera position C from the world coordinates of the termination position S2; and dot' can be obtained as the dot product of h_forward and forward'. Specifically, the distance from C to P2 is given by cur_camera_pos.y / dot', and the second target vector from C to P2 is obtained through M = forward' * (cur_camera_pos.y / dot'); since the starting point of this vector is the camera position, the coordinates of the second intersection point are P2 = M + cur_camera_pos.
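Continuing the sketch above, both intersection points come from the same helper; the numeric inputs below are made up purely for illustration:

```python
# Hypothetical inputs: camera 10 units above the plane, two unprojected drag points.
cur_camera_pos = np.array([0.0, 10.0, 0.0])
pre_world_pos  = np.array([1.0,  9.0, 2.0])   # from the starting position S1
cur_world_pos  = np.array([2.0,  9.0, 2.5])   # from the termination position S2

p1 = intersect_designated_plane(cur_camera_pos, pre_world_pos)  # first position P1
p2 = intersect_designated_plane(cur_camera_pos, cur_world_pos)  # second position P2
```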
Step S410, calculating an offset vector taking the second position as a starting point and taking the first position as an end point; the virtual camera is controlled to move from the camera position to a new position along the offset vector.
The offset vector may include an offset angle and an offset distance. Referring to fig. 3, since the offset vector from the second position to the first position equals the offset vector of the virtual camera before and after its movement, the offset vector from the second position P2 to the first position P1 is calculated as offset = P1 - P2. According to this offset vector, the virtual camera can be controlled to move from the camera position to a new position along the offset vector; the coordinates of the new position of the virtual camera are new_camera_pos = cur_camera_pos + offset. Reassigning the calculated new position coordinates to the virtual camera yields the correct display of the game scene.
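In code, step S410 reduces to two vector operations (continuing the sketch above):

```python
offset = p1 - p2                           # offset vector: second position -> first position
new_camera_pos = cur_camera_pos + offset   # move the virtual camera along the offset vector
```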
In this embodiment, the starting position and termination position of the drag operation are first acquired; the straight line determined by the camera position and the starting position is calculated, and its first intersection point with the designated plane is determined as the first position; the straight line determined by the camera position and the termination position is calculated, and its second intersection point with the designated plane is determined as the second position; the offset vector with the second position as starting point and the first position as end point is calculated; and the virtual camera is controlled to move from the camera position to the new position along the offset vector. In this manner, the geometric regularity of the problem is obtained from the starting and termination positions of the drag operation and the camera positions before and after the drag, and a simple geometric formula is derived mathematically, so that the game scene after the drag operation is triggered is determined.
Referring to the schematic diagram shown in fig. 3, C is the position of the virtual camera before the movement, S1 is the starting position of the drag operation, i.e., the screen position before the finger moves, and S2 is the termination position of the drag operation, i.e., the screen position after the finger moves; P1 and P2 are the positions in the game scene pointed to by the fingers at S1 and S2 before the virtual camera moves. In order to keep the finger pointing at P1 in the game scene, the virtual camera must be moved to position C', where the finger at S2' points to P1 in the game scene; that is, in the screen coordinate system S2' = S2, and P2' = P1. Since the vector CP2 is equal to the vector C'P2', the vector CC' is equal to the vector P2P2', i.e., the vector from P2 to P1. Therefore, according to the geometric relationship, the vector CC' can be obtained simply by calculating the vector from P2 to P1, and the position of C' is finally obtained from the coordinates of position C; the position of C' is the final position of the virtual camera.
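This geometric argument can be checked numerically under the sketch's assumptions: translating the camera by offset also translates the unprojected screen point rigidly with it, so the ray through S2 from the new position C' hits the designated plane exactly at P1:

```python
# The unprojected point attached to the screen moves with the camera.
moved_world_pos = cur_world_pos + offset
p2_prime = intersect_designated_plane(new_camera_pos, moved_world_pos)
assert np.allclose(p2_prime, p1)  # P2' = P1: the scene point stays under the finger
```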
Corresponding to the above method embodiment, the present embodiment provides a device for dragging a game scene, where the device is set in a terminal device, and a graphical user interface is provided by the terminal device, where the graphical user interface includes at least the game scene; the game scene is obtained by shooting a three-dimensional game space by a virtual camera in the three-dimensional game space; as shown in fig. 5, the apparatus includes:
An obtaining module 51, configured to obtain a start position and an end position of a drag operation, and a camera position of a virtual camera when the drag operation is triggered, in response to the drag operation for a game scene;
a determining module 52 for determining a first position in a specified plane of the three-dimensional game space along a direction determined by the camera position and the starting position; determining a second position at the designated plane along the direction determined by the camera position and the termination position;
and the moving module 53 is configured to control the virtual camera to move based on the first position and the second position, so as to obtain a game scene triggered by the drag operation.
The embodiment of the invention provides a device for dragging a game scene: in response to a drag operation on the game scene, the starting position and termination position of the drag operation and the camera position of the virtual camera when the drag operation is triggered are acquired; a first position is determined on a designated plane of the three-dimensional game space along the direction determined by the camera position and the starting position; a second position is determined on the designated plane along the direction determined by the camera position and the termination position; and the virtual camera is controlled to move based on the first position and the second position to obtain the game scene triggered by the drag operation. In this manner, a simple geometric relationship is derived from the starting and termination positions of the drag operation and the regularity of the camera positions before and after the drag, so that the game scene after the drag operation is triggered is determined; this reduces the complexity of the algorithm, reduces the amount of computation, and occupies fewer memory resources, thereby improving the fluency of game-scene changes and the user's game experience.
Further, in the above device, when the drag operation terminates, the touch point of the drag operation points to the same position in the game scene as it did when the drag operation started.
Further, the above-mentioned acquisition module is further configured to: acquiring a starting position and an ending position of a touch point of a dragging operation; the touch control point is positioned in the graphical user interface; determining a starting position of a drag operation in the three-dimensional game space based on the starting position of the touch point; based on the termination position of the touch point, a termination position of the drag operation is determined in the three-dimensional game space.
Further, the specified plane of the three-dimensional game space includes: in a coordinate system corresponding to the three-dimensional game space, a plane is formed by coordinate points with zero values in the height direction.
Further, the determining module is further configured to: and calculating a first intersection point of a straight line determined by the position of the camera and the initial position and a designated plane, and determining the first intersection point as a first position.
Further, the determining module is further configured to: determining a first target vector taking the camera position as a starting point and taking a first intersection point as an end point based on the direction of a vector formed by the camera position and the starting position and the height of the camera position from a designated plane; a first intersection point is determined based on the first target vector and the camera position.
Further, the determining module is further configured to determine the first intersection point by the following formula: P1 = V + cur_camera_pos, where P1 is the first intersection point; V is the first target vector, V = forward * (cur_camera_pos.y / dot); forward is the direction of the vector formed by the camera position and the starting position; cur_camera_pos.y is the height of the camera position from the designated plane; dot is the dot product of forward and the height direction from the camera position toward the designated plane, i.e., the cosine of the angle between them; and cur_camera_pos is the camera position.
Further, the determining module is further configured to: and calculating a second intersection point of the straight line determined by the position of the camera and the termination position and the designated plane, and determining the second intersection point as a second position.
Further, the determining module is further configured to: determining a second target vector with the camera position as a starting point and a second intersection point as an end point based on the direction of the vector formed by the camera position and the end position and the height of the camera position from the designated plane; a second intersection point is determined based on the second target vector and the camera position.
Further, the determining module is further configured to determine the second intersection point by the following formula: P2 = M + cur_camera_pos, where P2 is the second intersection point; M is the second target vector, M = forward' * (cur_camera_pos.y / dot'); forward' is the direction of the vector formed by the camera position and the termination position; cur_camera_pos.y is the height of the camera position from the designated plane; dot' is the dot product of forward' and the height direction from the camera position toward the designated plane, i.e., the cosine of the angle between them; and cur_camera_pos is the camera position.
Further, the mobile module is further configured to: calculating an offset vector taking the second position as a starting point and taking the first position as an end point; the virtual camera is controlled to move from the camera position to a new position along the offset vector.
The device for dragging the game scene provided by the embodiment of the invention has the same technical characteristics as the method for dragging the game scene provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment also provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor executes the machine executable instructions to implement the method for dragging a game scene.
Referring to fig. 6, the electronic device includes a processor 100 and a memory 101, the memory 101 storing machine executable instructions that can be executed by the processor 100, the processor 100 executing the machine executable instructions to implement the above-described method of dragging a game scene.
Further, the electronic device shown in fig. 6 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM, random Access Memory), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the internet, a wide area network, a local network, a metropolitan area network, etc. Bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 6, but not only one bus or type of bus.
The processor 100 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the method of the previous embodiment.
The present embodiments also provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above-described method of dragging a game scene.
The computer program product of the method, the apparatus, and the electronic device for dragging a game scene provided in the embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to execute the method described in the foregoing method embodiment. For the specific implementation, reference may be made to the method embodiment, and details are not repeated herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly; for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that directions or positional relationships indicated by the terms "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," "outer," and the like are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are merely specific embodiments of the present invention used to illustrate its technical solution, not to limit its scope of protection. Although the present invention has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that any person familiar with the technical field may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope disclosed by the present invention; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (12)
1. A method for dragging a game scene, characterized in that a graphical user interface is provided through a terminal device, wherein the graphical user interface at least comprises the game scene, and the game scene is obtained by shooting a three-dimensional game space with a virtual camera located in the three-dimensional game space; the method comprises the following steps:
responding to a dragging operation for the game scene, acquiring a starting position and a termination position of the dragging operation, and a camera position of the virtual camera when the dragging operation is triggered;
determining a first position in a designated plane of the three-dimensional game space along a direction determined by the camera position and the starting position; determining a second position in the designated plane along a direction determined by the camera position and the termination position;
controlling the virtual camera to move based on the first position and the second position so as to obtain a game scene triggered by the dragging operation;
the designated plane of the three-dimensional game space comprises: a plane formed, in a coordinate system corresponding to the three-dimensional game space, by coordinate points whose values in the height direction are zero;
the step of controlling the virtual camera to move based on the first position and the second position comprises:
calculating an offset vector with the second position as a starting point and the first position as an ending point;
the virtual camera is controlled to move from the camera position to a new position along the offset vector.
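By way of non-limiting illustration, the steps of claim 1 can be sketched in code. The following minimal Python sketch assumes a designated plane at y = 0 and simple tuple-based vector math; the helper names (intersect_ground, drag_camera) are illustrative and do not appear in the patent, and the sketch is one interpretation of the claims rather than a definitive implementation.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def scale(a: Vec3, s: float) -> Vec3:
    return (a[0] * s, a[1] * s, a[2] * s)

def normalize(a: Vec3) -> Vec3:
    n = math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2])
    return (a[0] / n, a[1] / n, a[2] / n)

def intersect_ground(camera_pos: Vec3, world_point: Vec3) -> Vec3:
    """Intersect the ray from camera_pos through world_point with the y = 0 plane,
    following the pattern of claims 5-6: V = forward * (cur_camera_pos.y / dot)."""
    forward = normalize(sub(world_point, camera_pos))
    dot = -forward[1]  # cosine between forward and the straight-down direction (0, -1, 0)
    if dot <= 1e-6:
        raise ValueError("ray is parallel to or points away from the designated plane")
    v = scale(forward, camera_pos[1] / dot)  # target vector from the camera to the plane
    return add(camera_pos, v)                # intersection point on the plane

def drag_camera(camera_pos: Vec3, start_world: Vec3, end_world: Vec3) -> Vec3:
    """Return the new camera position for a drag from start_world to end_world."""
    p1 = intersect_ground(camera_pos, start_world)  # first position (starting-position ray)
    p2 = intersect_ground(camera_pos, end_world)    # second position (termination-position ray)
    offset = sub(p1, p2)            # offset vector: second position as start, first as end
    return add(camera_pos, offset)  # move the camera along the offset vector
```

Because the camera is translated by the offset from the second position back to the first, the scene point that was under the touch point when the drag began ends up under it again, which is the behavior recited in claim 2 below.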
2. The method of claim 1, wherein, when the dragging operation is terminated, the touch point of the dragging operation is located at the same position in the game scene as when the dragging operation was initiated.
3. The method of claim 1, wherein the step of obtaining a start position and an end position of the drag operation comprises:
acquiring a starting position and a termination position of a touch point of the dragging operation; wherein the touch point is located in the graphical user interface;
determining a starting position of the dragging operation in the three-dimensional game space based on the starting position of the touch point; and determining the termination position of the dragging operation in the three-dimensional game space based on the termination position of the touch point.
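Claim 3 leaves open how a touch point in the graphical user interface is mapped to a position in the three-dimensional game space. One common approach, sketched below under the assumption of a pinhole camera with a known vertical field of view and orthonormal basis vectors (the names touch_to_world_point, cam_right, cam_up, and cam_forward are illustrative, not from the patent), is to unproject the touch point onto the view plane one unit in front of the camera; the resulting world point can then be passed, together with the camera position, to a ray-plane intersection such as the intersect_ground sketch above.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def touch_to_world_point(touch_x: float, touch_y: float,
                         screen_w: float, screen_h: float,
                         fov_y_deg: float, cam_pos: Vec3,
                         cam_right: Vec3, cam_up: Vec3, cam_forward: Vec3) -> Vec3:
    """Map a touch point in the GUI to a point in the three-dimensional game
    space on the view plane one unit in front of the camera."""
    # Touch coordinates -> normalized device coordinates in [-1, 1].
    ndc_x = 2.0 * touch_x / screen_w - 1.0
    ndc_y = 1.0 - 2.0 * touch_y / screen_h  # screen y usually grows downward
    # Half-extents of the view plane at unit distance from the camera.
    half_h = math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = half_h * (screen_w / screen_h)
    # Offset the camera position along the camera's basis vectors.
    return tuple(cam_pos[i] + cam_forward[i]
                 + ndc_x * half_w * cam_right[i]
                 + ndc_y * half_h * cam_up[i] for i in range(3))
```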
4. The method of claim 1, wherein the step of determining a first position in the designated plane of the three-dimensional game space along the direction determined by the camera position and the starting position comprises:
calculating a first intersection point of the straight line determined by the camera position and the starting position with the designated plane, and determining the first intersection point as the first position.
5. The method of claim 4, wherein the step of calculating the first intersection point of the straight line determined by the camera position and the starting position with the designated plane comprises:
determining a first target vector taking the camera position as a starting point and the first intersection point as an end point, based on the direction of the vector formed by the camera position and the starting position and the height of the camera position from the designated plane;
The first intersection point is determined based on the first target vector and the camera position.
6. The method of claim 5, wherein determining the first intersection point based on the first target vector and the camera position comprises:
the first intersection point is determined by the following formula:
P1 = V + cur_camera_pos;
wherein P1 is the first intersection point; V is the first target vector, and V = forward × (cur_camera_pos.y / dot); forward is the direction of the vector formed by the camera position and the starting position; cur_camera_pos.y is the height of the camera position from the designated plane; dot is the cosine of the included angle (i.e., the dot product of the unit vectors) between the direction of the vector formed by the camera position and the starting position and the height direction from the camera position to the designated plane; and cur_camera_pos is the camera position.
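A worked example with hypothetical numbers may make the formula concrete: with the camera 10 units above the designated plane and a unit forward vector of (0, -0.8, 0.6), the dot product with the straight-down direction is 0.8, so the ray travels 10 / 0.8 = 12.5 units before reaching the plane.

```python
# Hypothetical numbers illustrating the formula of claim 6.
cur_camera_pos = (0.0, 10.0, 0.0)          # camera 10 units above the y = 0 plane
forward = (0.0, -0.8, 0.6)                 # unit vector toward the starting position
dot = 0.8                                  # dot product of forward with (0, -1, 0)
V = tuple(c * (cur_camera_pos[1] / dot) for c in forward)   # (0.0, -10.0, 7.5)
P1 = tuple(v + p for v, p in zip(V, cur_camera_pos))        # (0.0, 0.0, 7.5)
assert P1[1] == 0.0  # the first intersection point lies on the designated plane
```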
7. The method of claim 1, wherein the step of determining a second position in the designated plane along the direction determined by the camera position and the termination position comprises:
calculating a second intersection point of the straight line determined by the camera position and the termination position with the designated plane, and determining the second intersection point as the second position.
8. The method of claim 7, wherein the step of calculating the second intersection point of the straight line determined by the camera position and the termination position with the designated plane comprises:
determining a second target vector taking the camera position as a starting point and the second intersection point as an end point, based on the direction of the vector formed by the camera position and the termination position and the height of the camera position from the designated plane;
the second intersection point is determined based on the second target vector and the camera position.
9. The method of claim 8, wherein the step of determining the second intersection point based on the second target vector and the camera position comprises:
the second intersection point is determined by the following formula:
P2 = M + cur_camera_pos;
wherein P2 is the second intersection point; M is the second target vector, and M = forward′ × (cur_camera_pos.y / dot′); forward′ is the direction of the vector formed by the camera position and the termination position; cur_camera_pos.y is the height of the camera position from the designated plane; dot′ is the cosine of the included angle (i.e., the dot product of the unit vectors) between the direction of the vector formed by the camera position and the termination position and the height direction from the camera position to the designated plane; and cur_camera_pos is the camera position.
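Continuing the hypothetical example from claim 6 with a second ray toward the termination position shows how the two intersection points combine into the offset vector and camera movement of claim 1.

```python
# Second ray of the hypothetical example (values continue the snippet above).
forward_p = (0.6, -0.8, 0.0)               # unit vector toward the termination position
dot_p = 0.8                                # dot product of forward' with (0, -1, 0)
M = tuple(c * (cur_camera_pos[1] / dot_p) for c in forward_p)  # (7.5, -10.0, 0.0)
P2 = tuple(m + p for m, p in zip(M, cur_camera_pos))           # (7.5, 0.0, 0.0)
# Offset vector with the second position as start and the first as end (claim 1):
offset = tuple(a - b for a, b in zip(P1, P2))                  # (-7.5, 0.0, 7.5)
new_camera_pos = tuple(c + o for c, o in zip(cur_camera_pos, offset))
# new_camera_pos == (-7.5, 10.0, 7.5): the height component of the offset is zero,
# so the camera slides parallel to the designated plane.
```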
10. A device for dragging a game scene, characterized in that a graphical user interface is provided through a terminal device, wherein the graphical user interface at least comprises the game scene, and the game scene is obtained by shooting a three-dimensional game space with a virtual camera located in the three-dimensional game space; the device comprises:
an acquisition module, configured to respond to a dragging operation for the game scene and acquire a starting position and a termination position of the dragging operation, and a camera position of the virtual camera when the dragging operation is triggered;
a determining module, configured to determine a first position in a designated plane of the three-dimensional game space along a direction determined by the camera position and the starting position, and determine a second position in the designated plane along a direction determined by the camera position and the termination position;
a moving module, configured to control the virtual camera to move based on the first position and the second position, so as to obtain the game scene triggered by the dragging operation;
the designated plane of the three-dimensional game space comprises: a plane formed, in a coordinate system corresponding to the three-dimensional game space, by coordinate points whose values in the height direction are zero;
the moving module is further configured to:
calculating an offset vector with the second position as a starting point and the first position as an ending point;
the virtual camera is controlled to move from the camera position to a new position along the offset vector.
11. An electronic device comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the method of dragging a game scene of any of claims 1-9.
12. A machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method of dragging a game scene of any one of claims 1-9.