CN116099195A - Game display control method and device, electronic equipment and storage medium - Google Patents

Game display control method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN116099195A
CN116099195A (application CN202310127723.7A / CN202310127723A)
Authority
CN
China
Prior art keywords
virtual
area
game
user interface
graphical user
Prior art date
Legal status
Pending
Application number
CN202310127723.7A
Other languages
Chinese (zh)
Inventor
王艺辉
范勋
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202310127723.7A
Publication of CN116099195A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/822: Strategy games; Role-playing games
    • A63F13/828: Managing virtual sport teams
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30: Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303: Features characterized by output arrangements for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/80: Features specially adapted for executing a specific type of game
    • A63F2300/807: Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a game display control method and device, an electronic device, and a storage medium. A graphical user interface includes a first game screen obtained by observing part of a virtual scene from a first viewing angle during a virtual combat task. The method includes: while controlling a first virtual character to perform the virtual combat task, in response to a rotation operation on the terminal device, acquiring rotation information of the terminal device and displaying a viewing-angle expansion area on the graphical user interface; determining a target scene area from the virtual scene according to the rotation information of the terminal device; and controlling the viewing-angle expansion area to display a second game screen obtained by observing the target scene area from a second viewing angle. In this way, game screens obtained by observing the virtual scene from different viewing angles can be displayed in overlay without interrupting the user's current interactive operation, which preserves the continuity of the user's in-game operations, improves game operation efficiency, and improves the user's game experience.

Description

Game display control method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of game technologies, and in particular to a game display control method and device, an electronic device, and a storage medium.
Background
As the pace of game combat increases, in a multiplayer online battle arena (MOBA) game a user-controlled virtual character must, in addition to routine operations such as clearing monsters, pay attention to the surrounding game situation in order to provide support or secure kills.
In the prior art, if a user wants to view the game situation of surrounding enemies or teammates, the user generally long-presses and drags the thumbnail map with a finger to change the viewing angle of the virtual scene and switch the game screen. However, when the user is in combat or performing other complex operations, this mode of operation often fails to meet expectations. On the one hand, it breaks the continuity of the virtual character's game behavior in the virtual scene, disrupting the flow of the game and lowering the user's game operation efficiency. On the other hand, because the game screen corresponding to the virtual character is switched to another game screen, the user may feel detached from the current game action, resulting in a poor user experience.
Disclosure of Invention
Accordingly, an object of the present application is to provide a game display control method, device, electronic device, and storage medium that can display, in overlay, virtual scenes observed from different viewing angles without interrupting the user's current interactive operation, thereby preserving the continuity of the user's in-game operations, improving game operation efficiency, and improving the user's game experience.
In a first aspect, an embodiment of the present application provides a game display control method, in which a graphical user interface is provided through a terminal device, the graphical user interface including a first game screen obtained by observing part of a virtual scene from a first viewing angle in a virtual combat task. The method includes:
while controlling a first virtual character to perform the virtual combat task, in response to a rotation operation on the terminal device, acquiring rotation information of the terminal device and displaying a viewing-angle expansion area on the graphical user interface;
determining a target scene area from the virtual scene according to the rotation information of the terminal device; and
controlling the viewing-angle expansion area to display a second game screen obtained by observing the target scene area from a second viewing angle.
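The three steps of the claimed method (show the expansion area on rotation, derive a target scene area from the rotation information, aim a second camera at it) can be sketched as follows. This is a minimal illustration under assumed conventions: the names `target_scene_area`, `on_device_rotated`, and the `probe_distance` parameter are hypothetical, and the patent does not specify how rotation information maps to scene coordinates; here the device's yaw simply picks a direction around the first virtual character.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec2:
    x: float
    y: float

def target_scene_area(character_pos: Vec2, yaw_degrees: float,
                      probe_distance: float = 20.0) -> Vec2:
    """Derive the target scene area from the device's rotation information.

    Hypothetical mapping: the device's yaw picks a direction around the
    first virtual character, and the target area lies probe_distance
    scene units away in that direction.
    """
    rad = math.radians(yaw_degrees)
    return Vec2(character_pos.x + probe_distance * math.cos(rad),
                character_pos.y + probe_distance * math.sin(rad))

def on_device_rotated(character_pos: Vec2, yaw_degrees: float):
    """The three claimed steps, in order."""
    show_expansion_area = True                              # step 1: overlay shown
    target = target_scene_area(character_pos, yaw_degrees)  # step 2: target area
    second_camera_focus = target                            # step 3: aim 2nd camera
    return show_expansion_area, second_camera_focus
```

The first game screen is untouched throughout; only the overlay's camera focus changes as the device keeps rotating.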
In a second aspect, an embodiment of the present application further provides a game display control device, in which a graphical user interface is provided through a terminal device, the graphical user interface including a first game screen obtained by observing part of a virtual scene from a first viewing angle in a virtual combat task. The device includes:
a rotation display module, configured to, while the first virtual character performs the virtual combat task, respond to a rotation operation on the terminal device by acquiring rotation information of the terminal device and displaying the viewing-angle expansion area on the graphical user interface;
a direction determining module, configured to determine a target scene area from the virtual scene according to the rotation information of the terminal device; and
a viewing-angle moving module, configured to control the viewing-angle expansion area to display a second game screen obtained by observing the target scene area from a second viewing angle.
In a third aspect, embodiments of the present application further provide an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the game display control method described above.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the game display control method described above.
The scheme of the application has the following beneficial effects:
Compared with the prior-art approach of changing the viewing angle of the virtual scene by long-pressing and dragging the thumbnail map with a finger to switch game screens, the present application determines the target scene area by rotating the terminal device and displays, in real time in a viewing-angle expansion area on the graphical user interface, a second game screen obtained by observing that target scene area from a second viewing angle. On the one hand, while the first game screen (observing part of the virtual scene from the first viewing angle) remains unchanged, the user controlling the first virtual character can watch the second game screen in the expansion area to keep track of the game situation around the character in time and decide whether to support or pursue a kill, all without interrupting the continuity of the first virtual character's game behavior in the virtual scene; this preserves the flow of the game. On the other hand, the graphical user interface simultaneously contains both the first game screen and the second game screen, so while the user observes the second game screen, the first game screen in which the first virtual character is located is always displayed on the graphical user interface; the user therefore does not feel detached from the current game action, further improving the user's game experience.
In summary, the present application can display, in overlay, game screens obtained by observing the virtual scene from different viewing angles without interrupting the user's current interactive operation (e.g., while the user is in combat or performing other complex operations). This not only preserves the continuity of the user's in-game operations and improves game operation efficiency, but also improves the user's game experience.
To make the above objects, features, and advantages of the present application more comprehensible, preferred embodiments are described in detail below with reference to the accompanying figures.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be regarded as limiting the scope; a person skilled in the art may derive other related drawings from them without inventive effort.
FIG. 1 is a schematic diagram of interaction in a prior-art game;
FIG. 2 is a flowchart of a game display control method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of rotation of a mobile terminal according to an embodiment of the present application;
FIG. 4 is a schematic diagram of dividing a large map in a game according to an embodiment of the present application;
FIG. 5 is a schematic diagram of rotation-axis definition of a mobile terminal according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a map of another game according to an embodiment of the present application;
FIG. 7 is a schematic diagram of region division of a graphical user interface according to an embodiment of the present application;
FIG. 8 is a schematic diagram of partitioning a virtual scene in a game according to an embodiment of the present application;
FIG. 9 is a schematic diagram of interaction according to an embodiment of the present application;
FIG. 10 is a flowchart of another game display control method according to an embodiment of the present application;
FIG. 11 is another schematic diagram of interaction according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a game display control device according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of another game display control device according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the claimed scope of the application, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
First, the terms used in the embodiments of the present application are briefly explained:
(1) Terminal equipment
The terminal device in the embodiments of the present application mainly refers to a terminal that provides a graphical user interface and through which a virtual character can be controlled. It may be the local terminal device mentioned below, or a client device in a cloud interaction system. The terminal device may include, but is not limited to, any of the following: a smartphone, a tablet computer, a palmtop computer, a personal digital assistant (PDA), and the like. An application supporting a game, such as a three-dimensional or two-dimensional game, is installed and runs on the terminal device. In the embodiments of the present application, the application is described as a game application; optionally, it may be a stand-alone application, such as a stand-alone 3D game program, or a networked online application.
(2) Graphical user interface
The graphical user interface comprises a user interface (UI) and a game screen. The UI is the medium for human-computer interaction and information exchange between the system and the user: it presents system information in a form acceptable to humans, so that the user can operate the computer conveniently and effectively for bidirectional human-computer interaction. The user operation interface may be composed of visual elements such as controls, text, graphics, images, icons, and input boxes. In optional embodiments, the UI may include a minimap, a skill release control, a movement control, a scoring panel, and the like. In an optional embodiment, the game screen is the display corresponding to the virtual scene shown by the terminal device, and may include virtual characters executing game logic in the virtual scene, such as player characters, NPC characters, and AI characters.
(3) Virtual combat task
The virtual combat task involves a virtual scene, a first virtual character controlled in that scene by the user, and second and third virtual characters controlled by other users. The first virtual character is the game character controlled by the user of the first terminal device; the second and third virtual characters are game characters controlled by other users who participate in the same virtual combat task. In the embodiments of the present application, the first and second virtual characters belong to the same camp, while the first and third virtual characters belong to different camps.
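The camp relationships above (first and second characters allied, first and third opposed) can be captured in a small sketch; the `Character` type and the camp labels are illustrative, not part of the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Character:
    name: str
    camp: str  # camp labels ("blue"/"red") are illustrative

def same_camp(a: Character, b: Character) -> bool:
    """Two characters are allies iff they belong to the same camp."""
    return a.camp == b.camp

first = Character("first virtual character", "blue")    # user-controlled
second = Character("second virtual character", "blue")  # teammate, same camp
third = Character("third virtual character", "red")     # opponent camp
```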
(4) Virtual scene
The virtual scene is the game scene that an application displays (or provides) when running on a terminal or server, i.e., the scene used during normal play. In other words, the virtual scene is the virtual game space that carries the virtual characters during play; within it, a virtual character can be controlled through operation instructions issued by the user (i.e., the player) to the terminal device to perform actions such as moving and releasing skills. Optionally, the game scene may be a simulation of the real world, a semi-simulated, semi-fictional environment, or a purely fictional one, and may be any of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene. The game scene is the scene in which a user-controlled virtual character carries out complete game logic. Optionally, it may also host combat between at least two virtual characters, with virtual resources available for their use. By way of example, a game scene may include any one or more of the following elements: game background elements, game character elements, game prop elements, game material elements, and the like.
(5) Viewing angle
The viewing angle refers to the shooting angle preset for a virtual camera in the game; the virtual scene is observed through the virtual camera's angle of view. In the embodiments of the present application, the first viewing angle is the shooting angle of a first virtual camera bound to the first virtual character; the second viewing angle is the shooting angle of a second virtual camera that shoots the target scene area; and the third viewing angle is the shooting angle of a third virtual camera that shoots the scene globally. The position of the first virtual camera changes as the first virtual character moves, the position of the second virtual camera changes as the target scene area moves, and the third virtual camera, since it shoots the entire virtual scene, does not need to move. Here, a virtual camera is a three-dimensional model around the virtual character in the virtual scene: when a first-person perspective is adopted, it is located near or at the head of the virtual character; when a third-person perspective is adopted, it is located behind the virtual character. In one embodiment, the display may use the first-person perspective, with the displayed virtual scene including only the virtual character's hands, arms, or hand-held props, simulating the effect of observing the virtual scene through the character's own eyes. In another embodiment, a third-person perspective may be used instead; it works like the first-person perspective except that it also displays, facing away from the terminal screen, the virtual character itself, so that the user can see the actions and surroundings of the character they control in the virtual scene.
The shooting direction of the virtual camera is the observation direction when the virtual scene is observed from the first-person or third-person perspective of the virtual character.
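The camera conventions just described (first person: near the character's head; third person: behind the character; each camera following what it is bound to) can be sketched as follows. The offset values and all names are assumptions for illustration, not values from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec3:
    x: float
    y: float
    z: float

# Illustrative offsets only; the patent does not specify camera placement.
HEAD_OFFSET = Vec3(0.0, 1.7, 0.0)      # first person: at the character's head
BEHIND_OFFSET = Vec3(0.0, 2.2, -3.0)   # third person: behind and above

def camera_position(character_pos: Vec3, first_person: bool) -> Vec3:
    """Place the bound virtual camera relative to its character: near the
    head for a first-person view, behind the character for third person."""
    off = HEAD_OFFSET if first_person else BEHIND_OFFSET
    return Vec3(character_pos.x + off.x,
                character_pos.y + off.y,
                character_pos.z + off.z)
```

Because the camera position is recomputed from the character's position each frame, the first virtual camera automatically follows the first virtual character, as the definition above requires.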
(6) Virtual character
A virtual character is a dynamic object that can be controlled in the virtual scene. Optionally, the dynamic object may be a virtual person, a cartoon figure, or the like. A virtual object is either a character controlled by the user through an input device, or an artificial intelligence (AI) set up, through training, to fight in the virtual environment. Optionally, the virtual character is an avatar competing in the virtual scene. Optionally, the number of virtual characters in a battle is preset, or determined dynamically from the number of clients joining the battle; this is not limited in the embodiments of the present application. In one possible implementation, the user can control the virtual character to move in the virtual scene, e.g., to run, jump, or crawl, and can also control it to fight virtual characters of other camps using the skills and virtual props provided by the application. Optionally, when the virtual environment is three-dimensional, the virtual characters are three-dimensional models, each with its own shape and volume, occupying part of the three-dimensional space. Optionally, a virtual character is a three-dimensional figure built on three-dimensional human-skeleton technology, taking on different appearances by wearing different skins. In some implementations, a virtual character may instead use a 2.5-dimensional or two-dimensional model; the embodiments of the present application do not limit this.
The game display control method provided by the embodiments of the present application may run on a local terminal device or on a server. When it runs on a server, it can be implemented and executed on a cloud interaction system, which comprises the server and a client device.
In an optional embodiment, various cloud applications can run on the cloud interaction system, for example, cloud games. Taking a cloud game as an example: a cloud game is a game mode based on cloud computing. In this mode, the entity that runs the game program is separated from the entity that presents the game screen. The storage and execution of the display control method are completed on the cloud game server, while the client device only sends and receives data and presents game screens. For example, the client device may be a display device with data-transmission capability close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer; the device performing the display control of the game, however, is the cloud game server. During play, the user operates the client device to send operation instructions to the cloud game server; the server runs the game according to the instructions, encodes and compresses data such as game screens, and returns them to the client device over the network; finally, the client device decodes the data and outputs the game screens.
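The cloud-game round trip described above (client sends an operation instruction, the server runs the game and encodes the resulting screen, the client decodes and presents it) can be sketched as a single function. All names are illustrative, and the encode/decode steps are passed in as stand-ins for a real video codec:

```python
def cloud_game_frame(client_input, run_game, encode, decode):
    """One round trip of the cloud-game mode: the server is the game's
    running body; the client only transports data and presents frames."""
    server_frame = run_game(client_input)  # server executes game logic
    payload = encode(server_frame)         # server compresses the frame
    return decode(payload)                 # client decodes for display

# Usage with trivial stand-ins for the server and codec:
frame = cloud_game_frame(
    "move_left",
    run_game=lambda op: "frame:" + op,
    encode=lambda f: f.encode("utf-8"),
    decode=lambda b: b.decode("utf-8"),
)
# frame == "frame:move_left"
```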
In an optional embodiment, taking a game as an example, the local terminal device stores the game program and presents the game screen. It interacts with the user through the graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the user in various ways: for example, it may render the interface on the terminal's display screen, or provide it to the user via holographic projection. For instance, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game screens, and a processor for running the game, generating the graphical user interface, and controlling its display on the screen.
Next, the application scenarios to which the present application applies are described. As the pace of game combat increases, the combat situation changes from moment to moment. In a multiplayer online battle arena (MOBA) game, a user-controlled virtual character must, in addition to routine operations such as clearing monsters, pay attention to the surrounding game situation in order to provide support or secure kills; it must also watch at all times whether monsters or soldier lines in the game have been harvested.
In the prior art, the user's field of view in a virtual combat task has a fixed size. If the user wants to view the surroundings, such as the combat situation of nearby enemies or teammates, the following operation is typically used: long-press and drag the thumbnail map (minimap) with a finger to change the viewing angle of the virtual scene and switch game screens, so as to check the situation of nearby enemies or teammates and decide whether to support or pursue a kill. To keep that information up to date, this operation must be performed frequently.
However, when the user is in combat or performing other complex operations, this operation often fails to meet expectations. On the one hand, it breaks the continuity of the game behavior of the user-controlled virtual character in the virtual scene, disrupting the flow of the game and lowering the user's game operation efficiency. On the other hand, switching the current game screen to another screen makes the user feel detached from the current game action, resulting in a poor experience. Specifically, when the surroundings are viewed, the game screen corresponding to the virtual character is replaced by the screen of the surrounding view, so the user feels detached from the battle, and the game experience suffers.
Illustratively, as shown in fig. 1, in the related game, the thumbnail map 101 is disposed at the upper left corner of the graphical user interface 100, and the character identifications of the virtual characters, such as the character identification of the first virtual character 103 currently operated by the user, the character identification of the second virtual character 104 currently operated by one of teammates, the character identification of the third virtual character 105 currently operated by one of enemy, the second object identification of the monster (106, 107), the identification of the defensive tower 108, the first object identification of the soldier line 109, and the like, may be displayed on the thumbnail map 101; when a teammate (the character identification corresponding to the second virtual character 104) fights against an enemy (the character identification corresponding to the third virtual character 105), in order to observe a game screen when they fight, the user can switch to the game screen when they fight by long-pressing the character identification of the second virtual character 104 of the teammate on the thumbnail map 101, and continue to observe a surrounding screen when they fight by dragging the thumbnail map. Or, the user slides in the blank area 130 of the graphic user interface 100 to switch to a game screen when observing a teammate battle. The user may also switch to a game screen for viewing monster 106 or 107 by long pressing the second object identification of monster 106 or 107, to a game screen for viewing defensive tower 108 by long pressing the identification of defensive tower 108, and to a game screen for viewing defensive tower 108 by long pressing the first object identification of soldier line 109. 
Further, a movement control 110 is disposed at the lower left corner of the graphical user interface 100, and the user can control the virtual character 103 to move in the virtual scene by dragging the movement control 110; a skill release control 120 is disposed at the lower right corner of the graphical user interface 100, and the user can control the virtual character 103 to release a skill by clicking the skill release control 120.
However, in an actual virtual combat task, if the user wants to view the surrounding field of view, in one embodiment the user can change the view angle of the virtual scene to switch game pictures by long-pressing and dragging the thumbnail map with the left hand. For example, the user manipulating the first virtual character 103 can only observe, through the thumbnail map 101, the position of the third virtual character 105 manipulated by the enemy, but not its blood volume state; if the user wants to view the blood volume state of the third virtual character 105, so as to judge whether to move up to support or to withdraw, the user needs to long-press the thumbnail map 101 to switch the current game picture to the game picture of the third virtual character 105. However, such an operation interrupts the flow of controlling the first virtual character to move in the virtual scene, so that the movement of the first virtual character is interrupted, which affects the continuity of the user's control over the first virtual character and effectively forces the first virtual character to stay in place in the virtual scene while the user views the surrounding field of view. This is unfriendly to highly mobile virtual characters in the game, such as a jungle hero who needs to continuously clear jungle camps and roam: this mode of viewing the surrounding field of view interrupts the hero's movement in the virtual scene, which not only reduces the operation efficiency of the game but also leads to a poor user experience.
In another embodiment, the user can change the view angle of the virtual scene to switch the game picture by long-pressing and dragging the thumbnail map with the right hand, but this operation easily blocks the user's line of sight, so that the user cannot clearly observe the situation around the first virtual character and may consequently be killed by the enemy, leading to a poor user experience. In addition, long-pressing and dragging the thumbnail map with the right hand inevitably interrupts the user's action of controlling the first virtual character to release a skill by clicking the skill release control with the right hand, so that the user's game operation efficiency is low.
It should be added that, in an actual game scene, the view angle of the virtual scene can also be changed by sliding in a blank area of the graphical user interface to switch the game screen, where the blank area generally refers to the area to the right of the vertical midline of the graphical user interface. In one embodiment, the user drags the movement control with the left hand to control the first virtual character to move in the virtual scene, and slides in the blank area with the right hand to change the view angle for observing the virtual scene. Although this operation does not interrupt the movement of the first virtual character, the blank area available for switching game pictures is limited, and so are the switchable game pictures: once the finger's sliding distance reaches the boundary of the graphical user interface, the current game picture can no longer be moved to the position the user wants to view, and the user has to slide on the blank area again. This operation is cumbersome and reduces the efficiency of the user's interactive operations; moreover, since the current game picture corresponding to the first virtual character is still switched to another game picture, the user still feels detached from the current game behavior, so that the game experience is poor. In addition, the right hand's sliding in the blank area inevitably interrupts the user's action of controlling the virtual character to release a skill by clicking the skill release control with the right hand, so that the user's game operation efficiency is low.
Based on the above, the embodiments of the present application provide a game display control method, apparatus, electronic device, and storage medium, which can superimpose and display virtual scenes observed at different viewing angles under the condition of not interrupting the current interactive operation of a user, thereby ensuring the continuity of the operation of the user in the game, improving the operation efficiency of the game, and improving the game experience of the user.
The game display control method provided by the embodiments of the present application provides a graphical user interface through a terminal device, where the graphical user interface includes a first game picture obtained by observing part of the virtual scene at a first view angle in a virtual combat task. Here, the terminal device is a mobile terminal, which may include, but is not limited to, any of the following: a smart phone, a tablet computer, a palmtop computer, a personal digital assistant (PDA), and the like. The first view angle refers to the shooting view angle of a first virtual camera bound to the first virtual character. For example, the first virtual camera may be located at an associated position of the first virtual character, i.e., a position that moves as the first virtual character moves, such as above the head or above the back of the first virtual character. The first virtual character in the first game picture is the game character controlled by the user through the mobile terminal. In this embodiment, the mobile terminal is preconfigured with an angle sensing module configured to obtain the rotation direction or rotation angle of the mobile terminal when the mobile terminal rotates.
In one embodiment of the present application, the angle sensing module may be a gyroscope for detecting azimuth data of the mobile terminal. A gyroscope, i.e., an angular velocity sensor, differs from an accelerometer in that the physical quantity it measures is the rotational angular velocity during deflection and tilting. The azimuth of the mobile terminal, including its position and direction, can be determined through the gyroscope: the gyroscope can detect changes in both the position data and the direction data of the mobile terminal, and the rotation direction or rotation angle of the mobile terminal is determined from these detected changes.
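As a sketch of how a gyroscope's angular-velocity readings can yield the rotation angle described above, the per-axis angular velocity can be integrated over time. The function name, sample format, and sampling interval below are illustrative assumptions, not part of the patent:

```python
def integrate_rotation(angular_velocities, dt):
    """Accumulate per-axis rotation angles (degrees) from gyroscope
    angular-velocity samples (degrees/second) taken every dt seconds."""
    angle_x = angle_y = 0.0
    for wx, wy in angular_velocities:
        angle_x += wx * dt
        angle_y += wy * dt
    return angle_x, angle_y

# e.g. ten samples of 30 deg/s about the X axis at 100 Hz accumulate
# to roughly 3 degrees of rotation about X
angles = integrate_rotation([(30.0, 0.0)] * 10, 0.01)
```

In practice the accumulated angles would be compared against a small threshold before triggering a view change, so that sensor noise does not move the second virtual camera.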
In another embodiment of the present application, the angle sensing module may be a gravity sensor, which converts changes in gravity into an electrical signal to obtain gravity sensing information. Currently, most smart phones and tablet computers, such as iOS series products and Android series products, have a built-in gravity sensor. Specifically, the change in the rotation direction or rotation angle of the mobile terminal, such as landscape, portrait, or tilted, can be determined through the gravity sensing information.
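As an illustrative sketch of classifying a gravity-sensor reading into the orientations just mentioned, one can compare the gravity components along the device axes (the patent's convention of X along the long side and Y along the short side is followed; the threshold factor and function name are assumptions):

```python
def classify_orientation(gx, gy):
    """Classify device orientation from the gravity components along the
    device's X axis (long side) and Y axis (short side), in m/s^2.
    With this axis convention, gravity mostly along the long side means
    the device is held upright (portrait), and gravity mostly along the
    short side means it is held sideways (landscape)."""
    if abs(gx) > 2 * abs(gy):
        return "portrait"
    if abs(gy) > 2 * abs(gx):
        return "landscape"
    return "tilted"
```

The "tilted" case, where neither axis clearly dominates, is the one the method below uses to drive the second virtual camera.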
For example, the game in the embodiments of the present application may be a MOBA game, in which a plurality of strongholds are provided in the virtual world, and users in different camps control virtual characters to fight in the virtual world and occupy strongholds or destroy the strongholds of the hostile camp. For example, a MOBA game may divide users into two hostile camps and disperse the user-controlled virtual characters in the virtual world to compete with each other, with destroying or occupying all strongholds of the hostile camp as the winning condition. The MOBA game proceeds in rounds, and the duration of a round runs from the moment the game starts to the moment the winning condition is achieved. Optionally, a round of a MOBA game to which the method of the embodiments of the present application is applied is referred to as a virtual combat task.
The following describes in detail a display control method of a game provided in an embodiment of the present application.
Referring to fig. 2, fig. 2 is a flowchart of a game display control method according to an embodiment of the present application. As shown in fig. 2, a method provided in an embodiment of the present application includes:
S201, in the process of controlling the first virtual character to perform a virtual combat task, in response to a rotation operation on the terminal device, acquiring rotation information of the terminal device and displaying a view angle expansion area on the graphical user interface.
S202, determining a target scene area from the virtual scene according to the rotation information of the terminal equipment.
S203, controlling the view angle expansion area to display a second game picture obtained by observing the target scene area at a second view angle.
Taking the example that the game display control method is executed on a local terminal device (hereinafter referred to as a terminal device), the steps of the foregoing examples provided in the embodiments of the present application are respectively described as follows:
In step S201, in the process of controlling the first virtual character to perform the virtual combat task, rotation information of the terminal device is acquired in response to a rotation operation on the terminal device, and a view angle expansion area is displayed on the graphical user interface. Here, the rotation information may include at least one of a rotation direction and a rotation angle, and the view angle expansion area is used to present a second game picture in which part of the virtual scene is observed at a second view angle.
In the above step, the virtual combat task includes one of the following cases: (1) The virtual fight task is in a state of controlling the first virtual character to move in the virtual scene, and specifically comprises the following steps: responding to touch operation acted on the character movement control, and controlling the first virtual character to move in the virtual scene according to the touch position of the touch operation; (2) The virtual fight task is in a state that the first virtual character is stationary in the virtual scene; (3) The virtual fight task is in a state that a first virtual role fights in a virtual scene; the method specifically comprises the following steps: responding to a first touch operation acting on a character movement control, and controlling a first virtual character to move in a virtual scene according to the touch position of the first touch operation; responding to a second touch operation acting on the skill release control, and controlling the first virtual character to release game skills in the moving process; (4) The virtual fight task is in the following complex state, specifically including: responding to a first touch operation acting on a character movement control, and controlling a first virtual character to move in a virtual scene according to the touch position of the first touch operation; and responding to a third touch operation acting on the store control, and controlling the first virtual character to select the virtual prop in the moving process.
In the process of controlling the first virtual character to perform the virtual combat task, in response to a rotation operation issued by the user for the terminal device, rotation information of the terminal device is acquired through the angle sensing module preconfigured on the terminal device, so that a target scene area can be determined from the virtual scene according to the rotation information; and while the terminal device is being rotated, a view angle expansion area is displayed on the graphical user interface, where the view angle expansion area is used to display a second game picture of part of the virtual scene observed at a second view angle different from the first view angle. Here, the second view angle refers to the view angle of a second virtual camera for shooting the target scene area; the second virtual camera does not move with the movement of the first virtual character but moves with the rotation of the terminal device, i.e., the second virtual camera moves as the position of the target scene area changes so as to shoot the target scene area.
In step S202, a target scene area is determined from the virtual scene according to the rotation information of the terminal device.
In one embodiment, step S202 specifically includes: and determining the moving direction of the second virtual camera according to the rotation information of the terminal equipment, controlling the second virtual camera to move according to the moving direction, and determining the area shot by the moved second virtual camera from the virtual scene as a target scene area. Here, the rotation information may include at least one of a rotation direction and a rotation angle.
Illustratively, the rotation information includes a direction of rotation. In one embodiment, the rotation direction includes a direction of rotating around an X-direction rotation axis of the terminal device and a direction of rotating around a Y-direction rotation axis of the terminal device, where the X-direction is a direction in which a long side of an outer contour of the terminal device is located, and the Y-direction is a direction in which a short side of the outer contour of the terminal device is located, specifically, step S202 includes:
if the terminal equipment rotates around the X-direction rotating shaft of the terminal equipment, determining the moving direction of the second virtual camera as the Y-direction, controlling the second virtual camera to move in the Y-direction, and determining the area shot by the moved second virtual camera from the virtual scene as a target scene area;
if the terminal equipment rotates around the Y-direction rotation axis of the terminal equipment, determining the moving direction of the second virtual camera as the X-direction, controlling the second virtual camera to move in the X-direction, and determining the area shot by the moved second virtual camera from the virtual scene as the target scene area.
For example, current mobile terminals are provided with an angle sensing module by default, and any action of the user on the mobile terminal can be detected by the angle sensing module based on the three-dimensional space formed by the three coordinate axes (X/Y/Z). In practical application, the rotation angle of the mobile terminal is calculated from the angle formed by the three axes or any two of them. As shown in fig. 3 and fig. 4, in the embodiment of the present application, the rotation angle of the mobile terminal is calculated from the angle formed by the X axis and the Y axis, and the whole in-game map is divided into four map areas A, B, C, and D, to which the rotation angle of the mobile terminal corresponds. When the mobile terminal rotates clockwise around the Y-direction rotation axis (the mobile terminal tilts leftward), the X-axis value decreases as gravity is applied in the negative X direction, and the second virtual camera moves toward the areas corresponding to A and B; when the mobile terminal rotates counterclockwise around the Y-direction rotation axis (the mobile terminal tilts rightward), the X-axis value increases as gravity is applied in the positive X direction, and the second virtual camera moves toward the areas corresponding to C and D. Similarly, when the mobile terminal rotates around the X-direction rotation axis toward the user (the mobile terminal tilts inward), the Y-axis value decreases as gravity is applied in the negative Y direction, and the second virtual camera moves toward the areas corresponding to B and D; when the mobile terminal rotates around the X-direction rotation axis away from the user (the mobile terminal tilts outward), the Y-axis value increases as gravity is applied in the positive Y direction, and the second virtual camera moves toward the areas corresponding to A and C.
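The four-area mapping can be expressed as a small lookup in which the sign of the gravity change along each axis selects the map areas the second virtual camera moves toward. This is an illustrative sketch; the function name and the use of a set as the return type are assumptions:

```python
def regions_for_tilt(gx_delta, gy_delta):
    """Return the map areas (among A, B, C, D) the second virtual camera
    moves toward, given the change in gravity along the device's X and Y
    axes. Negative X change = left tilt, positive = right tilt;
    negative Y change = tilt toward the user, positive = away."""
    regions = set()
    if gx_delta < 0:
        regions |= {"A", "B"}      # left tilt: move toward A and B
    elif gx_delta > 0:
        regions |= {"C", "D"}      # right tilt: move toward C and D
    if gy_delta < 0:
        regions |= {"B", "D"}      # inward tilt: move toward B and D
    elif gy_delta > 0:
        regions |= {"A", "C"}      # outward tilt: move toward A and C
    return regions
```

A compound tilt naturally combines the two cases, e.g. a left-and-outward tilt yields the union of {A, B} and {A, C}.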
In another embodiment, the direction of rotation includes a direction of rotation about an X-direction rotation axis of the terminal device, a direction of rotation about a Y-direction rotation axis of the terminal device, a direction of rotation about a first diagonal rotation axis of the terminal device, and a direction of rotation about a second diagonal rotation axis of the terminal device; the X direction is the direction of the long side of the outer contour of the terminal equipment, the Y direction is the direction of the short side of the outer contour of the terminal equipment, the first diagonal direction is the direction of the connecting line of two diagonal vertexes of the outer contour of the terminal equipment, and the second diagonal direction is the direction of the connecting line of the other two diagonal vertexes of the outer contour of the terminal equipment. The step S202 specifically includes:
if the terminal equipment rotates around the X-direction rotating shaft of the terminal equipment, determining the moving direction of the second virtual camera as the Y-direction, controlling the second virtual camera to move in the Y-direction, and determining the area shot by the moved second virtual camera from the virtual scene as a target scene area;
if the terminal equipment rotates around a Y-direction rotating shaft of the terminal equipment, determining the moving direction of the second virtual camera as an X-direction, controlling the second virtual camera to move in the X-direction, and determining the area shot by the moved second virtual camera from the virtual scene as a target scene area;
If the terminal equipment rotates around a first diagonal rotation axis of the terminal equipment, determining that the moving direction of the second virtual camera is a second diagonal direction, controlling the second virtual camera to move towards the second diagonal direction, and determining that the area shot by the moved second virtual camera is a target scene area from the virtual scene;
if the terminal equipment rotates around a second diagonal rotation axis of the terminal equipment, determining the moving direction of the second virtual camera to be a first diagonal direction, controlling the second virtual camera to move towards the first diagonal direction, and determining the area shot by the moved second virtual camera from the virtual scene to be a target scene area.
For example, as shown in fig. 5 and fig. 6, the first diagonal rotation axis is defined as the W1 axis and the second diagonal rotation axis as the W2 axis. In this embodiment, the rotation angle of the mobile terminal is calculated from the angle formed by the X, Y, W1, and W2 axes, and the whole in-game map is divided into eight map areas A1, B1, C1, D1, E1, F1, G1, and H1, to which the rotation angle of the mobile terminal corresponds. When the mobile terminal rotates clockwise around the Y-direction rotation axis (the mobile terminal tilts leftward), the X-axis value decreases as gravity is applied in the negative X direction, and the second virtual camera moves toward the areas corresponding to A1, B1, H1, and G1; when the mobile terminal rotates counterclockwise around the Y-direction rotation axis (the mobile terminal tilts rightward), the X-axis value increases as gravity is applied in the positive X direction, and the second virtual camera moves toward the areas corresponding to C1, D1, E1, and F1. Similarly, when the mobile terminal rotates around the X-direction rotation axis toward the user (the mobile terminal tilts inward), the Y-axis value decreases as gravity is applied in the negative Y direction, and the second virtual camera moves toward the areas corresponding to H1, G1, F1, and E1; when the mobile terminal rotates around the X-direction rotation axis away from the user (the mobile terminal tilts outward), the Y-axis value increases as gravity is applied in the positive Y direction, and the second virtual camera moves toward the areas corresponding to A1, B1, C1, and D1. When the mobile terminal rotates clockwise around the W1 axis (the mobile terminal tilts down to the left), the W2-axis value decreases as gravity is applied in the negative W2 direction, and the second virtual camera moves toward the areas corresponding to A1, H1, G1, and F1; when the mobile terminal rotates counterclockwise around the W1 axis (the mobile terminal tilts up to the right), the W2-axis value increases as gravity is applied in the positive W2 direction, and the second virtual camera moves toward the areas corresponding to B1, C1, D1, and E1. Similarly, when the mobile terminal rotates around the W2 axis toward the user (the mobile terminal tilts down to the right), the W1-axis value decreases as gravity is applied in the negative W1 direction, and the second virtual camera moves toward the areas corresponding to G1, F1, E1, and D1; when the mobile terminal rotates around the W2 axis away from the user (the mobile terminal tilts up to the left), the W1-axis value increases as gravity is applied in the positive W1 direction, and the second virtual camera moves toward the areas corresponding to A1, B1, C1, and H1.
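The eight-area variant can be sketched the same way, keyed by the diagonal axes as well: the table below follows the eight cases described for fig. 5 and fig. 6, while picking the dominant axis (the one with the largest gravity change) is an illustrative assumption about how the cases would be disambiguated:

```python
# Mapping from (axis whose value changes, sign of change) to the four
# target areas, following the cases described for fig. 5 and fig. 6.
EIGHT_REGION_MAP = {
    ("X", -1): ["A1", "B1", "H1", "G1"],   # left tilt (X decreases)
    ("X", +1): ["C1", "D1", "E1", "F1"],   # right tilt (X increases)
    ("Y", -1): ["H1", "G1", "F1", "E1"],   # inward tilt (Y decreases)
    ("Y", +1): ["A1", "B1", "C1", "D1"],   # outward tilt (Y increases)
    ("W2", -1): ["A1", "H1", "G1", "F1"],  # tilt down-left (W2 decreases)
    ("W2", +1): ["B1", "C1", "D1", "E1"],  # tilt up-right (W2 increases)
    ("W1", -1): ["G1", "F1", "E1", "D1"],  # tilt down-right (W1 decreases)
    ("W1", +1): ["A1", "B1", "C1", "H1"],  # tilt up-left (W1 increases)
}

def target_regions(deltas):
    """deltas: dict mapping axis name ('X', 'Y', 'W1', 'W2') to the
    change in gravity along that axis. The axis with the largest
    absolute change is treated as dominant."""
    axis = max(deltas, key=lambda a: abs(deltas[a]))
    sign = 1 if deltas[axis] > 0 else -1
    return EIGHT_REGION_MAP[(axis, sign)]
```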
In the above manner, the rotation direction of the terminal device is determined by using the angle sensing module preset by the terminal device, the rotation angle of the mobile terminal corresponds to the plurality of map areas, and then the target scene area to be shot by the second virtual camera is accurately determined according to the rotation information of the terminal device, and the second virtual camera is controlled to move towards the target scene area, so that the second game picture obtained by observing the target scene area at the second viewing angle is accurately displayed in the viewing angle expansion area. The corresponding relation between the rotation information of the terminal equipment and the position change of the second virtual camera is convenient for the user to understand, the accuracy in the direction control process can be indirectly improved, and the user interaction experience is improved.
In the related scheme, due to the limited size of the thumbnail map, the scene objects displayed in the map are relatively small, so that the user easily positions inaccurately when dragging the thumbnail map and cannot accurately reach the expected game picture; the adjustment of the picture content can only be completed through multiple drags, and the repeated drag operations are cumbersome, affecting the game experience. Based on this, the embodiments of the present application may divide the virtual scene into a plurality of field-of-view observation areas in advance, where the graphical user interface includes a plurality of sub-interface areas whose position distribution in the graphical user interface corresponds one-to-one to the position distribution of the field-of-view observation areas in the virtual scene.
For example, the virtual scene may be divided into a plurality of field-of-view observation areas according to the combat frequency counted from historical game data, such as the field-of-view observation area where a monster is located or the field-of-view observation area where a defensive tower is located; this division helps users quickly observe game pictures where combat is likely to occur, reduces the user's search time, and indirectly improves the operation efficiency of the game. Alternatively, the whole virtual scene may be divided evenly into a plurality of field-of-view observation areas; this division is simple, makes the game operation mechanism easy for users to understand, and improves the user's game experience.
In this embodiment, step S202 includes: determining a target sub-interface area corresponding to the rotation information from a plurality of sub-interface areas of the graphical user interface; and determining the visual field observation area corresponding to the target sub-interface area as a target scene area according to the position distribution of the visual field observation areas in the virtual scene.
Here, a plurality of sub-interface areas are displayed on the graphical user interface, each corresponding to a field-of-view observation area. Rotation information input through the terminal device is received, a direction instruction is determined from the rotation information, the area dashed box is moved among the sub-interface areas according to the direction instruction to select a target sub-interface area from the plurality of sub-interface areas, and the area dashed box of the target sub-interface area is highlighted. After the target sub-interface area is selected, according to the one-to-one correspondence between the position distribution of the sub-interface areas in the graphical user interface and the position distribution of the field-of-view observation areas in the virtual scene, the field-of-view observation area corresponding to the target sub-interface area is determined as the target scene area, so as to control the view angle expansion area to display a second game picture obtained by observing the target scene area at the second view angle.
Specifically, determining a direction instruction according to the rotation information, sequentially starting from a regional dashed box at a default position according to the direction instruction, moving the regional dashed box until the regional dashed box is positioned in the target sub-interface area, and highlighting the regional dashed box corresponding to the target sub-interface area.
Wherein the default location comprises: a centered position, a position where a first region located in the upper left corner of the plurality of sub-interface regions is located, or a user-defined position.
Further, the area dashed box corresponding to the target sub-interface area is highlighted in at least one of the following ways: thickening the lines of the regional dashed boxes; changing the line of the regional dashed box into a solid line; the background color or line color of the regional dashed box corresponding to the target sub-interface region is different from the background color or line color of the regional dashed box corresponding to other sub-interface regions.
For example, eight direction instructions of up, down, left, right, up left, down left, up right, down right may be determined according to rotation information of the terminal device. As shown in fig. 7, the graphical user interface 100 of the terminal device is divided into a plurality of sub-interface areas, as shown in fig. 8, the virtual scene is divided into a plurality of view observation areas, each sub-interface area corresponds to one view observation area, for example, the sub-interface area 202 corresponds to one view observation area 302, the sub-interface area 203 corresponds to one view observation area 303, and the sub-interface area 204 corresponds to one view observation area 304. The dashed boxes of areas may be displayed on the graphical user interface, each dashed box of areas corresponding to a sub-interface area.
For example, when the default terminal device faces the user, the target sub-interface area is the sub-interface area 213 located at the central position of the gui, and at this time, the area dashed box corresponding to the target sub-interface area may be bolded and displayed, and at the same time, it is determined that the field of view observation area 313 corresponding to the target sub-interface area is the target scene area. As shown in fig. 7 and fig. 8, if the control terminal device rotates upward, the target sub-interface area becomes the sub-interface area 208, and a region dotted line frame (not shown in the figure) of the sub-interface area 208 is thickened and displayed, and at the same time, it is determined that the field of view observation area 308 corresponding to the target sub-interface area is the target scene area; if the terminal device continues to be controlled to rotate leftwards, the target sub-interface area becomes a sub-interface area 207, and an area dotted line frame (not shown in the figure) of the sub-interface area 207 is thickened and displayed, and meanwhile, a visual field observation area 307 corresponding to the target sub-interface area is determined to be a target scene area; if the terminal device continues to be controlled to rotate leftwards and downwards, the target sub-interface area becomes a sub-interface area 211, and an area dotted line frame (not shown in the figure) of the sub-interface area 211 is thickened and displayed, and meanwhile, it is determined that the field of view observation area 311 corresponding to the target sub-interface area is the target scene area.
In the process of issuing a rotation operation to the terminal device, if the pause time is greater than a first preset pause threshold and not greater than a second preset pause threshold, the area where the current target sub-interface area is located is taken as the starting area for the next movement; for example, if the terminal device is controlled to rotate upward, the target sub-interface area becomes sub-interface area 208, and if the terminal device is then controlled to rotate leftward, the target sub-interface area becomes sub-interface area 207. The first preset pause threshold is smaller than the second preset pause threshold, and a target sub-interface area is determined to be selected only when the pause time reaches the first preset pause threshold during rotation of the terminal device. If the pause time exceeds the second preset pause threshold, the current rotation operation is ended, and the target sub-interface area is restored to the sub-interface area 213 located at the default position of the graphical user interface.
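The dashed-box navigation described above can be modelled as stepping through a grid of sub-interface areas with the eight direction instructions, with the two pause thresholds deciding when a selection is committed as the next starting point or reset to the default. The grid size, threshold values, and class name below are illustrative assumptions:

```python
DIRECTIONS = {
    "up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1),
    "up-left": (-1, -1), "up-right": (-1, 1),
    "down-left": (1, -1), "down-right": (1, 1),
}

class RegionSelector:
    """Tracks the area dashed box over a rows x cols grid of
    sub-interface areas, starting at the centered default position."""
    def __init__(self, rows=5, cols=5):
        self.rows, self.cols = rows, cols
        self.default = (rows // 2, cols // 2)
        self.pos = self.default

    def move(self, direction):
        # Step one cell in the given direction, clamped to the grid.
        dr, dc = DIRECTIONS[direction]
        r = min(max(self.pos[0] + dr, 0), self.rows - 1)
        c = min(max(self.pos[1] + dc, 0), self.cols - 1)
        self.pos = (r, c)
        return self.pos

    def on_pause(self, dwell, t1=0.5, t2=2.0):
        """A dwell between the two thresholds commits the current area
        as the start for the next move; beyond t2 the selection resets
        to the default position."""
        if dwell > t2:
            self.pos = self.default
            return "reset"
        if dwell > t1:
            return "committed"
        return "ignored"
```

If the twenty-five cells of a 5x5 grid are numbered row-major from 201 (an assumption consistent with the reference numerals in the figures), the centered default (2, 2) corresponds to area 213, and the sequence up, left, down-left visits (1, 2), (1, 1), and (2, 0), i.e., areas 208, 207, and 211, matching the example above.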
Through the above manner, the virtual scene is divided into a plurality of field-of-view observation areas in advance, and the user can directly locate a pre-divided field-of-view observation area by controlling the rotation of the terminal device, accurately viewing the expected game picture without multiple drag operations, which improves positioning accuracy and makes game operation convenient.
In step S203, the view-angle expansion area is controlled to display a second game screen obtained by observing the target scene area at a second view angle.
Here, the second game screen obtained by observing the target scene area at the second view angle is displayed in the view-angle expansion area in real time. The user controlling the first virtual character can thus follow, in real time, the game situation around the first virtual character while the first game screen, observed at the first view angle and displayed on the graphical user interface, remains unchanged. The continuity of the first virtual character's game behaviour in the virtual scene is not interrupted, which improves human-computer interaction efficiency.
In an alternative embodiment, while the terminal device is being controlled to execute the rotation operation, in response to a pause operation on the terminal device, the area acted on by the pause operation is determined to be the target scene area, and the view-angle expansion area is controlled to display a second game screen obtained by observing the target scene area at the second view angle; the dwell time of the pause operation exceeds a preset time threshold.
In this way, during the rotation operation, the user can independently select the field-of-view area to be viewed through the pause operation, which makes the game interaction richer, improves the playability of the game, and improves the user experience. In addition, the pause operation lets the user precisely locate the field of view to be viewed; the operation is simple and feasible, reduces the complexity of the player's operation, and improves human-computer interaction efficiency.
In this embodiment of the present application, the graphical user interface further displays a thumbnail map, which shows a third game screen obtained by observing the entire virtual scene at a third view angle; the third game screen includes the azimuth information of each virtual character located in the virtual scene. Here, the third view angle is the view angle of a third virtual camera used for global shooting. Because it shoots the entire virtual scene, this camera does not change position: it moves neither with the movement of the first virtual character nor with the rotation of the terminal device.
In addition, the second game screen includes attribute information of each virtual character located in the target scene area. Exemplary attribute information includes at least one of a life value, a combat power value, economy, combat information, and equipped props. The life value includes a blood bar characterizing the vitality of the virtual object. The combat power value is a composite numerical index computed by the game system's own algorithm from the equipment carried by the virtual character and the like. Economy is the number of gold coins the virtual character holds. The combat information includes the number of enemies killed by the virtual character, the number of deaths, the number of assists, and the like.
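The attribute information enumerated above could be modelled with a simple data structure such as the following; all class and field names are hypothetical, introduced only for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CombatStats:
    kills: int = 0    # enemies defeated by the character
    deaths: int = 0   # times the character has died
    assists: int = 0  # kills the character helped with

@dataclass
class CharacterAttributes:
    health: int        # life value, shown as a blood bar
    combat_power: int  # composite index computed by the game system
    gold: int          # economy: gold coins held
    combat: CombatStats = field(default_factory=CombatStats)
    equipped_props: list = field(default_factory=list)
```

Such a record would be populated per character in the target scene area and rendered beside its avatar in the expansion area.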
Further, the display form of the view-angle expansion area on the graphical user interface includes one of the following:
(1) And displaying the visual angle expansion area in a superimposed manner on the thumbnail map.
Here, the view angle expansion area is displayed superimposed on the thumbnail map, wherein the area of the view angle expansion area is the same as the area of the thumbnail map, and the view angle expansion area covers the thumbnail map entirely.
In this way, the view-angle expansion area newly added to the graphical user interface neither blocks the functional areas on the graphical user interface nor visually interferes with the user's observation of the virtual scene in the first game screen. This solves the problem of field-of-view interference, helps the user seize combat opportunities in time to win, and thereby improves the user experience.
However, in the above scheme the view-angle expansion area covers the thumbnail map entirely, so while the expansion area is displayed the user can no longer observe the thumbnail map and therefore cannot know the positions of teammates or enemies relative to the user. Because the rhythm of game combat is fast, this makes it inconvenient for the user to control the movement of the first virtual character. On this basis, the following scheme is proposed:
The view-angle expansion area is displayed superimposed on the thumbnail map at a preset transparency, and the area of the view-angle expansion area is the same as that of the thumbnail map. The preset transparency is a transparency at which the thumbnail map covered by the view-angle expansion area can still be clearly displayed. Here, the user can observe not only the relative positions, shown on the thumbnail map, between the first virtual character, the second virtual characters in the same camp, and the third virtual characters in different camps, but also the second game screen shown in the view-angle expansion area, obtained by observing the target scene area at the second view angle.
In this way, the problem of field-of-view interference is solved while the user can still follow, in real time, the positions of the second virtual characters in the same camp and the third virtual characters in different camps relative to the first virtual character under the user's control. The user can then adjust, in time, the rotation information controlling the rotation of the terminal device according to this changing positional relationship, so that the moving direction of the second virtual camera is adjusted in real time to update the second game screen displayed in the view-angle expansion area, which improves game operation efficiency.
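The preset-transparency overlay amounts to ordinary alpha compositing of the expansion area over the thumbnail map. A minimal per-pixel sketch follows; the 0.6 opacity value is an assumption, not a value from this application.

```python
# 'Over' compositing of one expansion-area pixel onto a thumbnail-map pixel.
# PRESET_ALPHA is an assumed opacity at which the map stays clearly visible.
PRESET_ALPHA = 0.6

def blend(expansion_rgb, map_rgb, alpha=PRESET_ALPHA):
    """Blend an expansion-area RGB pixel over a map RGB pixel."""
    return tuple(
        round(alpha * e + (1.0 - alpha) * m)
        for e, m in zip(expansion_rgb, map_rgb)
    )
```

At alpha 1.0 the map is fully hidden (the scheme of item (1)); at the preset value both layers remain legible.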
(2) The view-angle expansion area is displayed, at a specified transparency, at an arbitrary position on the graphical user interface other than the thumbnail map.
Here, the view-angle expansion area is displayed, at a specified transparency, at any position on the graphical user interface other than the thumbnail map. The specified transparency is a transparency at which the first game screen covered by the view-angle expansion area remains visible, so that the user can observe both the second game screen displayed in the expansion area, obtained by observing the target scene area at the second view angle, and the first game screen displayed on the graphical user interface, obtained by observing part of the virtual scene at the first view angle. With neither the functional areas nor the first game screen blocked, the user can adjust, in time, the rotation information controlling the rotation of the terminal device according to the relative positions, shown on the thumbnail map, between the first virtual character, the second virtual characters in the same camp, and the third virtual characters in different camps, and thereby adjust the moving direction of the second virtual camera. The user can thus follow the situation around the first virtual character in time and, by combining the thumbnail map with the view-angle expansion area, accurately control the moving direction of the first virtual character, which improves game operation efficiency.
It should be noted that, whether the view-angle expansion area is displayed superimposed on the thumbnail map or displayed at a specified transparency at another position on the graphical user interface, if the view-angle expansion area includes a virtual object, the attribute information of that virtual object is highlighted. Here, a virtual object includes one or more of a virtual character, a non-neutral active combat unit, and a neutral passive combat unit, and the virtual character may include a second virtual character and/or a third virtual character. For example, a non-neutral active combat unit may be a minion line in the game, and a neutral passive combat unit may be a monster in the game.
Preferably, the attribute information is a life value. Thus, if the virtual object is a virtual character, the life value of that character is highlighted in the view-angle expansion area, which helps the user focus in time on the life values of the second and/or third virtual characters. If the life value of a third virtual character is low, the user can control the first virtual character to go and kill that character to improve the user's own economy; if the life value of a second virtual character is low, the user can decide whether to control the first virtual character to go and support it. Likewise, if the virtual object is a non-neutral active combat unit or a neutral passive combat unit, its life value is highlighted in the view-angle expansion area, and the user can decide, according to the displayed life value, whether to control the first virtual character to go and kill it. Highlighting the life values of the virtual objects in the view-angle expansion area therefore helps the user adjust the moving direction of the first virtual character in time, which improves human-computer interaction efficiency.
The other contents of the attribute information, such as the combat power value, economy, combat information, and equipped props, produce technical effects similar to those of the life value: they allow the user to decide in time whether to control the first virtual character to go and support, and thus to adjust the character's moving direction in time, which improves human-computer interaction efficiency; details are not repeated here.
In the related scheme, if the user has high definition requirements on the game screen during fierce combat, the view-angle expansion area displayed at the specified transparency may still cause some line-of-sight interference. To avoid this, the embodiment of the present application provides the following scheme: the movement of the view-angle expansion area on the graphical user interface is controlled in response to a move operation on the view-angle expansion area.
Specifically, in response to the move operation on the view-angle expansion area, the expansion area is controlled to move to the position of the end point of the move operation. The expansion area can thus be placed at any position of the graphical user interface according to the user's actual needs, which prevents interference with the game and indirectly improves human-computer interaction efficiency.
In addition, to let the user observe more clearly the second game screen of the target scene area displayed in the view-angle expansion area, the embodiment of the present application may adjust the area of the expansion region in response to an expansion operation on it. Specifically, the area is adjusted in response to a move operation on the boundary of the view-angle expansion area, so that the expansion area displays more of the second game screen obtained by observing the target scene area at the second view angle. The user can then observe the combat situation of enemies or teammates more clearly and quickly decide whether to go and kill or support, which shortens the time spent viewing the surrounding field of view and improves game operation efficiency.
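A minimal sketch of the move and boundary-resize behaviour described above; the rectangle model, minimum width, and method names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExpansionArea:
    x: float       # top-left position on the graphical user interface
    y: float
    width: float
    height: float

    def move_to(self, end_x, end_y):
        """Place the area at the end point of a move operation."""
        self.x, self.y = end_x, end_y

    def drag_right_edge(self, dx, min_width=80.0):
        """Resize by dragging the right boundary; clamp to a minimum width."""
        self.width = max(min_width, self.width + dx)
```

Clamping to a minimum width keeps the second game screen legible even if the user drags the boundary too far inwards.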
In the related scheme, when the moved or enlarged view-angle expansion area overlaps the touch area of a functional control at the bottom of the graphical user interface, the functional control can no longer be used, which reduces game operation efficiency. To solve this technical problem, the embodiment of the present application provides the following scheme:
When the position area of the moved view-angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, or the resized view-angle expansion area partially overlaps that touch area, the position of the view-angle expansion area on the graphical user interface is locked in response to a locking operation on the expansion area, and the transparency of the view-angle expansion area is controlled to change, so that the user controlling the first virtual character can operate the functional control through the view-angle expansion area. The functional controls include a movement control, a skill release control, and the like.
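The overlap check between the expansion area and a control's touch region can be sketched as an axis-aligned rectangle test. The lock state and alpha values below are assumptions, and the direction of the transparency change (here: more see-through after locking) is an interpretation of the text.

```python
# Axis-aligned overlap test between the expansion area and a functional
# control's touch region, both given as (x, y, width, height) tuples.
def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

class ExpansionAreaLock:
    def __init__(self):
        self.locked = False
        self.alpha = 0.6  # assumed normal display opacity

    def on_lock_operation(self, area_rect, control_rect):
        """Lock only when the area overlaps the control's touch region."""
        if rects_overlap(area_rect, control_rect):
            self.locked = True
            # Assumed: lower opacity so the control stays visible and usable.
            self.alpha = 0.2
        return self.locked
```

A real implementation would route touch events to the control underneath while the lock is active; only the geometry test is shown here.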
In an alternative embodiment, the step of locking the position of the viewing angle extension region on the graphical user interface in response to a locking operation for the viewing angle extension region comprises any one of:
(1) And responding to a preset gesture instruction, and locking the position of the visual angle expansion area on the graphical user interface.
Illustratively, in response to a sliding operation acting on the graphical user interface, the sliding track of the operation is obtained; if the track exhibits a V shape, the position of the view-angle expansion area on the graphical user interface is locked. The sliding track corresponding to the preset gesture instruction may also be zigzag-shaped, W-shaped, X-shaped, or the like, which is not particularly limited here.
In this way, the position of the view-angle expansion area on the graphical user interface can be locked through a preset gesture instruction. The operation process is intuitive and quick, reduces operation time and cost, and lets the user lock the position conveniently without affecting subsequent play, which improves game operation efficiency.
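A naive way to classify a slide track as V-shaped is to check that the vertical coordinate first grows (the finger moving down the screen) and then shrinks. Real gesture recognisers tolerate noise and sampling jitter; this sketch is purely illustrative.

```python
def is_v_shape(track):
    """track: list of (x, y) points, with y increasing downwards on screen."""
    if len(track) < 3:
        return False
    ys = [p[1] for p in track]
    turn = ys.index(max(ys))  # the lowest point of the stroke
    going_down = all(ys[i] < ys[i + 1] for i in range(turn))
    going_up = all(ys[i] > ys[i + 1] for i in range(turn, len(ys) - 1))
    # The turning point must lie strictly inside the stroke.
    return 0 < turn < len(ys) - 1 and going_down and going_up
```

The zigzag, W, and X variants mentioned above would need their own classifiers built on the same track data.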
(2) After the position area of the moved view-angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, or the resized view-angle expansion area partially overlaps that touch area, the position of the view-angle expansion area on the graphical user interface is locked in response to a first touch operation on an area locking control, and the transparency of the expansion area is controlled to change so that the user controlling the first virtual character can operate the functional control through the view-angle expansion area.
The first touch operation includes, but is not limited to, a click operation, a drag operation, a long press operation, and a double click operation. The region locking control herein has a function of locking the position of the view angle expansion region on the graphical user interface.
Illustratively, in response to a click operation for the region locking control, the position of the view expansion region on the graphical user interface is locked and the transparency of the view expansion region is controlled to decrease, such that a user controlling the first virtual character operates the functionality control through the view expansion region.
In this way, after the moved or resized view-angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, the expansion area can be locked at its current position simply by touching the area locking control. The operation process is simple and efficient, reduces operation time and cost, and makes locking convenient for the user. In addition, because the transparency of the expansion area changes after locking, the user can accurately determine the position of the functional control and operate it, which reduces the impact on game play and further improves game operation efficiency.
Further, in response to a second touch operation on the area locking control, the view-angle expansion area in the locked state is controlled to cancel its display.
Here, the second touch operation includes, but is not limited to, a click operation, a drag operation, a long press operation, a double click operation. However, the second touch operation is a different operation from the first touch operation, for example, the first touch operation is a click operation and the second touch operation is a drag operation.
According to the above technical scheme, different touch operations on the area locking control can either lock the position of the view-angle expansion area on the graphical user interface or cancel the display of the expansion area in the locked state. The operation process is simple and efficient, which further improves game operation efficiency.
In a preferred embodiment, while the view-angle expansion area is controlled to present the second game screen obtained by observing the target scene area at the second view angle, an expansion area identifier is displayed in the thumbnail map shown on the graphical user interface, at the map position corresponding to the actual position, in the virtual scene, of the target scene area presented by the view-angle expansion area.
Here, object identifiers corresponding to a plurality of virtual objects are displayed in the thumbnail map, such as a character identifier for a virtual character, a first object identifier for a non-neutral active combat unit (a minion line), and a second object identifier for a neutral passive combat unit (a monster). The character identifier, the first object identifier, and the second object identifier may each include any one or more of a number, a nickname, an avatar, and a symbol. Illustratively, the character identifier takes the form of a character avatar. Avatar frames of different colours, or of different line styles, can be chosen as character identifiers to distinguish virtual characters of different camps; for example, avatar frames drawn with thick solid lines mark the virtual characters of one camp, while avatar frames drawn with thin dashed lines mark those of the other. The first object identifier takes the form of a circular icon, and the second object identifier takes the form of a symbol or a square icon, so that the object identifiers of the several virtual objects displayed in the thumbnail map are clearly distinguished.
Further, according to the embodiment of the present application, the expansion area identifier is displayed in the thumbnail map at the map position corresponding to the actual position, in the virtual scene, of the target scene area presented by the view-angle expansion area. From the relative positions of the expansion area identifier and the object identifiers displayed in the thumbnail map, the user can adjust the rotation information controlling the terminal device, and thereby the position change of the second virtual camera, so as to move the second virtual camera quickly to a target position, i.e. the position of the field-of-view area the user wants to view. This helps the user observe a battle scene in the virtual combat task in time and prepare to support or kill; a battle here means that virtual characters of two camps are fighting.
In the above process, the expansion area identifier and each object identifier are displayed in the thumbnail map. From their relative positions, the time the first virtual character would spend searching for the battle scene in the virtual combat task can be saved, shortening the time needed for the first virtual character to move to the area of the battle scene, improving human-computer interaction efficiency, and speeding up the game.
In addition, the position of the expansion area identifier displayed on the thumbnail map changes in real time with the movement of the second virtual camera. The thumbnail map can therefore update the map position of the expansion area identifier in real time according to the actual position, in the virtual scene, of the target scene area presented by the view-angle expansion area. This provides the user with an accurate directional reference for controlling the movement of the first virtual character or of the second virtual camera, and helps improve human-computer interaction efficiency.
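Keeping the expansion area identifier in step with the second virtual camera reduces to mapping a world position into thumbnail-map coordinates. A sketch follows; the scene and map extents are assumptions.

```python
SCENE_SIZE = (10000.0, 10000.0)  # assumed virtual-scene extent, world units
MAP_SIZE = (200.0, 200.0)        # assumed thumbnail-map size, pixels

def world_to_map(world_x, world_y):
    """Scale a world position to its marker position on the thumbnail map."""
    return (world_x / SCENE_SIZE[0] * MAP_SIZE[0],
            world_y / SCENE_SIZE[1] * MAP_SIZE[1])
```

Calling this each frame with the camera's current target position keeps the identifier's map position current.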
For example, as shown in fig. 9, the graphical user interface 100 includes a first game screen of a virtual combat task and a thumbnail map 101, where the first game screen includes the first virtual character 103. The thumbnail map 101 displays the character identifiers of the second virtual characters currently operated by teammates and of the third virtual characters currently operated by enemies, such as the character identifier of a second virtual character 104 operated by one teammate and the character identifier of a third virtual character 105 operated by one enemy. The thumbnail map 101 also displays the first object identifiers of non-neutral active combat units, such as the minion line 109, and the second object identifiers of neutral passive combat units, such as the monsters 106 and 107.
While controlling the first virtual character 103 to move in the virtual scene by dragging the movement control 110, or additionally controlling the first virtual character 103 to release a skill by tapping the skill release control 120, suppose the user wants to observe the blood volume of a surrounding enemy, such as the third virtual character 105. The user can control the terminal device to rotate anticlockwise around the first diagonal rotation axis, so that the view-angle expansion area 140 is displayed on the graphical user interface 100 while the second virtual camera is controlled to move towards the position of the third virtual character 105, and the expansion area identifier (the black box) of the view-angle expansion area 140 is displayed on the thumbnail map 101. When the second virtual camera reaches the position of the third virtual character 105, the area where that character is located is the target scene area; the view-angle expansion area 140 then shows the second game screen obtained by observing the target scene area at the second view angle, which includes the third virtual character 105 and one of the minions in the line 109, and the expansion area identifier (the black box) in the thumbnail map 101 moves to the position of the third virtual character 105. The position of the view-angle expansion area 140 on the graphical user interface 100 may be chosen according to actual requirements.
Here, highlighting the blood bar 150 of the third virtual character 105 helps the user notice that character's life value in time. If the life value of the third virtual character 105 is low, the user can control the first virtual character 103 to go and kill it to improve the user's own economy, and can adjust the moving direction of the first virtual character 103 in time, which improves human-computer interaction efficiency.
In the above process, the second virtual camera is moved to the position of the third virtual character 105, so as to determine the target scene area, by controlling the rotation of the terminal device, and the second game screen obtained by observing the target scene area at the second view angle is displayed in real time in the view-angle expansion area 140 on the graphical user interface 100. The continuity of the user's game behaviour in controlling the first virtual character 103 in the virtual scene is thus not interrupted, which improves game operation efficiency. Moreover, the graphical user interface 100 shows both the first game screen, obtained by observing part of the virtual scene at the first view angle (with the first virtual character 103 in it), and the second game screen, obtained by observing the target scene area at the second view angle. The first game screen therefore always remains displayed on the graphical user interface 100 while the user observes the second game screen, so the user does not feel detached from the current game behaviour, which further improves the user's game experience.
In the related scheme, whenever a user wants to check the combat situation of surrounding enemies or teammates, the following steps must be executed: control the terminal device to rotate so that the second virtual camera moves in the direction corresponding to the rotation direction; determine the area shot by the moved second virtual camera from the virtual scene as the target scene area; and display, in real time in the view-angle expansion area on the graphical user interface, the second game screen obtained by observing the target scene area at the second view angle. To improve the convenience of game operation, the embodiment of the present application proposes: in response to a marking operation, a marker prompt is displayed in the view-angle expansion area and sent to the game screen of at least one second virtual character, where the second virtual character and the first virtual character belong to the same camp.
Exemplary marker prompts include, but are not limited to: gather, attack, retreat, and pay attention. Their display forms include, but are not limited to: text information, voice information, and pop-up information.
In this way, virtual characters in the same camp can share information: only one character needs to perform the operation of checking the surrounding field of view, and the result can be synchronized to the other teammates, so they do not need to check the same area themselves. This saves operation time, lets teammates learn the combat situation in the virtual combat task in time and plan their next tactical deployment, improves the convenience of game operation, speeds up the game, improves human-computer interaction efficiency, and keeps a single virtual combat task from lasting too long.
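The teammate-sharing behaviour amounts to broadcasting the marker prompt to every character in the sender's camp. A sketch over a hypothetical character model:

```python
def broadcast_marker(prompt, sender_camp, characters):
    """Deliver the marker prompt to all characters in the sender's camp.

    characters: list of dicts with at least a 'camp' key (assumed model).
    Returns the number of recipients.
    """
    recipients = [c for c in characters if c["camp"] == sender_camp]
    for c in recipients:
        c.setdefault("messages", []).append(prompt)
    return len(recipients)
```

Enemy-camp characters receive nothing, matching the requirement that the prompt is shared only within the first virtual character's camp.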
In an alternative embodiment, step S203 further includes: controlling the second virtual camera to move towards the target scene area at a first movement rate, and displaying in the view-angle expansion area the part of the virtual scene observed at the second view angle, the second view angle being a shooting view angle preset for the second virtual camera; and, if a virtual object is detected in the target scene area observed at the second view angle, controlling the second virtual camera to move within the target scene area at a second movement rate, and displaying in the view-angle expansion area the virtual object in the target scene area observed at the second view angle; the first movement rate is greater than the second movement rate.
Here, when a virtual object is detected in the target scene area observed at the second view angle, the movement rate of the second virtual camera is reduced, so that the user can accurately check, in the view-angle expansion area, the combat that may be taking place in the target scene area before stopping the operation of the terminal device. By matching the virtual object with the target scene area to be observed, the user can precisely locate the field of view to be checked without interrupting the continuity of the user's in-game operations, which reduces the player's operational complexity and improves human-computer interaction efficiency.
Further, the first movement rate of the second virtual camera may also be controlled by the rotation angle of the mobile terminal: the larger the angle between the plane of the mobile terminal's screen and the horizontal plane, the faster the first movement rate. The user can thus control the first movement rate by controlling the rotation angle of the mobile terminal; to make the second virtual camera move quickly, the user only needs to hold the mobile terminal as close to perpendicular to the horizontal plane as possible, which improves game operation efficiency.
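The two movement rates, together with the tilt-dependent fast rate, can be sketched as a single rate function; every numeric value here is an assumption introduced for illustration.

```python
BASE_RATE = 2.0        # assumed first rate at zero tilt, world units per frame
MAX_EXTRA_RATE = 10.0  # assumed extra rate gained at a 90-degree tilt
SLOW_RATE = 1.0        # assumed second, slower rate once an object is in view

def camera_move_rate(tilt_degrees, object_in_view):
    """First rate scales with screen tilt; second rate applies near objects."""
    if object_in_view:
        return SLOW_RATE
    # Clamp the tilt to [0, 90] degrees and scale linearly.
    t = max(0.0, min(tilt_degrees, 90.0)) / 90.0
    return BASE_RATE + MAX_EXTRA_RATE * t
```

The function guarantees the first rate always exceeds the second, as the scheme requires, since its minimum is BASE_RATE.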
In the game display control method provided by the embodiment of the present application, during the virtual combat task, rotating the terminal device controls the second virtual camera to move in the direction corresponding to the rotation direction; the area shot by the moved second virtual camera is determined from the virtual scene as the target scene area; and the second game screen obtained by observing the target scene area at the second view angle is displayed in real time in the view-angle expansion area on the graphical user interface. The user can therefore observe the second game screen through the view-angle expansion area while the first game screen on the graphical user interface remains unchanged, follow the game situation around the first virtual character in time, and decide whether to support or kill, without interrupting the continuity of the first virtual character's game behaviour in the virtual scene. Moreover, because the graphical user interface includes the first game screen and the second game screen at the same time, the first game screen, in which the first virtual character is located, is always displayed while the user observes the second game screen, so the user does not feel detached from the current game behaviour, which further improves the user's game experience.
In the related scheme, if the user merely shakes the mobile terminal out of habit and does not want to view the surrounding field of view while controlling the first virtual character to perform the virtual fight task, the above manner of viewing the surrounding field of view may cause the visual angle expansion area to be displayed by mistake. In order to avoid the poor game experience caused by such misoperation, the embodiment of the application provides another game display control method. Referring to fig. 10, fig. 10 is a flowchart of another game display control method according to an embodiment of the present application. As shown in fig. 10, the method provided in the embodiment of the present application includes:
S1001, responding to a gravity sensing start instruction, and controlling the terminal equipment to enter a gravity sensing state.
S1002, responding to the rotation operation of the terminal equipment in the gravity sensing state, acquiring rotation information of the terminal equipment, and displaying the visual angle expansion area on a graphical user interface.
S1003, determining a target scene area from the virtual scene according to the rotation information of the terminal equipment.
And S1004, displaying a second game picture obtained by observing the target scene area at a second visual angle in the visual angle expansion area.
Here, the descriptions of step S1002 to step S1004 may refer to the descriptions of step S201 to step S203 described above, and will not be repeated here.
In step S1001, the gravity sensing activation instruction includes, but is not limited to, one of the following: gesture instructions and control touch instructions.
According to the above method, the terminal device in the virtual fight task is controlled to enter the gravity sensing state by the gravity sensing start instruction, and only a terminal device that has entered the gravity sensing state responds to the rotation operation issued by the user. The gravity sensing start instruction is therefore equivalent to a switch that starts the angle sensing module of the terminal device in the game; only after it is issued can the gravity change of the terminal device be sensed. In this way, it can be distinguished whether the user wants to view the surrounding field of view or simply likes to shake the terminal device, the situation in which the visual angle expansion area is displayed on the graphical user interface by mistake due to the player's misoperation is avoided, the game experience of the user is improved, and the accuracy of game operation is improved.
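A minimal sketch of this gating logic, assuming a simple event model in which rotation events are only acted upon once the gravity-sensing switch has been turned on (all names are illustrative, not from the disclosure):

```python
class GravitySensingGate:
    """Forwards rotation events only after the gravity sensing
    start instruction has been received, so habitual shaking of
    the terminal cannot trigger the view-expansion area."""

    def __init__(self):
        self.enabled = False
        self.handled = []  # rotation events acted upon

    def on_start_instruction(self):
        # e.g. a touch on the view-expansion control, or a gesture
        self.enabled = True

    def on_rotation(self, rotation_info):
        if not self.enabled:
            return False   # ignored: treated as accidental shaking
        self.handled.append(rotation_info)
        return True

gate = GravitySensingGate()
print(gate.on_rotation({"deg": 30}))  # False: ignored before the switch
gate.on_start_instruction()
print(gate.on_rotation({"deg": 30}))  # True: now acted upon
```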
In one embodiment, the gravity sensing start instruction is a control touch instruction, illustrated as follows: a visual angle expansion control is displayed on the graphical user interface. Step S1001 then specifically includes: responding to a first touch operation on the visual angle expansion control, and controlling the terminal device to enter the gravity sensing state.
The first touch operation includes, but is not limited to, a click operation, a move operation, a long press operation, and a double click operation.
In this manner, the visual angle expansion control is arranged on the graphical user interface, so that the user can control the terminal device in the virtual fight task to enter the gravity sensing state by directly touching the visual angle expansion control; the operation process is simple and the operation efficiency is high.
Preferably, the visual angle expansion control is arranged on the graphical user interface at a position close to the skill release control, so as to interrupt user operation as little as possible, improve the continuity of game operation, and further improve the operation efficiency of the user.
Further, when the position area corresponding to the moved visual angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, or the area of the visual angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, the position of the visual angle expansion area on the graphical user interface is locked in response to a second touch operation on the visual angle expansion control, and the transparency of the visual angle expansion area is reduced, so that the user controlling the first virtual character can operate the functional control through the visual angle expansion area.
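The overlap condition that triggers this lock-and-fade behaviour reduces to an axis-aligned rectangle intersection test; a sketch, where the `Rect` helper and all coordinates are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def rects_overlap(a: Rect, b: Rect) -> bool:
    """True if the two axis-aligned rectangles share any area."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

expansion_area = Rect(100, 50, 200, 120)  # moved view-expansion area
skill_control = Rect(250, 120, 80, 80)    # touch area of a functional control

if rects_overlap(expansion_area, skill_control):
    # A second touch on the view-expansion control would now lock the
    # area's position and change its transparency so the functional
    # control stays operable through it.
    locked = True
print(rects_overlap(expansion_area, skill_control))  # True
```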
Further, in response to a third touch operation on the view angle expansion control, the view angle expansion area in the locked state is controlled to cancel its display.
The first touch operation, the second touch operation, and the third touch operation may be the same operation. When they are the same operation, the number of times the touch operation has been performed corresponds one-to-one with the functions of the view angle expansion control. For example, if the first, second, and third touch operations are all click operations: the first click on the view angle expansion control controls the terminal device in the virtual fight task to enter the gravity sensing state, so that the terminal device can respond to the rotation operation issued by the user; the second click locks the position of the view angle expansion area on the graphical user interface; and the third click cancels the display of the view angle expansion area in the locked state.
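This count-based mapping can be sketched as a small state machine that cycles through the three functions on successive clicks; the action names are illustrative placeholders:

```python
def make_click_handler():
    """Each successive click on the view-expansion control triggers
    the next function in order: enable gravity sensing, lock the
    area's position, then cancel the area's display."""
    actions = ["enter_gravity_sensing", "lock_area_position",
               "cancel_area_display"]
    state = {"count": 0}

    def on_click():
        action = actions[state["count"] % len(actions)]
        state["count"] += 1
        return action

    return on_click

click = make_click_handler()
print(click())  # enter_gravity_sensing
print(click())  # lock_area_position
print(click())  # cancel_area_display
```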
In the above manner, the first touch operation, the second touch operation, and the third touch operation are the same operation, which makes the operation easy for the user to learn and simple to perform, and improves operation accuracy while preserving operation convenience.
Alternatively, the first touch operation, the second touch operation, and the third touch operation may be different operations. For example, the first touch operation is a click operation, the second touch operation is a drag operation, and the third touch operation is a long-press operation. Since the three touch operations are different, each touch operation corresponds to a unique function of the view angle expansion control: clicking the view angle expansion control controls the terminal device in the virtual fight task to enter the gravity sensing state, so that the terminal device can respond to the rotation operation issued by the user; dragging the view angle expansion control locks the position of the view angle expansion area on the graphical user interface; and long-pressing the view angle expansion control cancels the display of the view angle expansion area in the locked state.
In the above manner, the first touch operation, the second touch operation, and the third touch operation are different operations, and each touch operation corresponds to a unique function of the view angle expansion control. The user can therefore invoke any function directly, without performing the operations in a designated order; the operation is more flexible, the touch operation corresponding to the desired function can be performed according to actual requirements, the time spent performing touch operations on the view angle expansion control in sequence is reduced, and the game operation efficiency is improved.
In another embodiment, the gravity sensing start instruction is a gesture instruction, illustrated as follows: responding to a specified gesture instruction, and controlling the terminal device to enter the gravity sensing state.
For example, in response to a specified movement operation acting on the graphical user interface, the specified movement track of the specified movement operation is obtained; if the specified movement track presents a U shape, the terminal device in the virtual fight task is controlled to enter the gravity sensing state. The specified movement track corresponding to the specified gesture instruction may also be V-shaped, Z-shaped, W-shaped, X-shaped, or the like, which is not particularly limited herein. However, within the same game, if two gesture instructions represent different functions, the gesture instructions corresponding to the two functions should be set to be different when they are preset, so as to distinguish the two functions.
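A crude sketch of such a track classifier, assuming the track is a list of (x, y) screen points with y increasing downward; approximating a U shape as "descends to a single lowest point, then ascends" is a heuristic chosen here for illustration, not the patented method:

```python
def looks_like_u(track):
    """Heuristically classify a gesture track as U-shaped: the y
    coordinate (screen-down) first increases to a single lowest
    point, then decreases again."""
    ys = [y for _, y in track]
    bottom = ys.index(max(ys))  # index of the lowest screen point
    descending = all(ys[i] <= ys[i + 1] for i in range(bottom))
    ascending = all(ys[i] >= ys[i + 1] for i in range(bottom, len(ys) - 1))
    # A genuine U must actually descend and ascend, not be flat.
    return descending and ascending and 0 < bottom < len(ys) - 1

u_track = [(0, 0), (1, 4), (2, 6), (3, 4), (4, 0)]
line_track = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(looks_like_u(u_track))     # True
print(looks_like_u(line_track))  # False
```

A production recognizer would typically resample the track and match it against templates (e.g. a $1-recognizer-style approach), but the shape test above is enough to show where the U/V/Z/W/X distinction would be made.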
In this way, the terminal device in the virtual fight task can be controlled to enter the gravity sensing state through a specified gesture instruction. The operation process is intuitive and quick, reduces the operation duration and operation cost, makes it convenient for the user to control the terminal device in the virtual fight task to enter the gravity sensing state without affecting subsequent game combat, and improves the operation efficiency of the game.
For example, as shown in fig. 11, a view angle expansion control 160 is further displayed on the graphical user interface 100. The user touches the view angle expansion control 160 to control the terminal device in the virtual fight task to enter the gravity sensing state. Further, in the process of controlling the first virtual character 103 to move in the virtual scene by dragging the movement control 110, or in the process of controlling the first virtual character 103 to release a skill against the soldier line 109 by clicking the skill release control 120, if the user wants to observe the game situation of surrounding enemies and teammates, such as that of the third virtual character 105 and the second virtual character 104, the user can control the terminal device in the gravity sensing state to rotate around the second diagonal direction away from the user, thereby controlling the second virtual camera to move toward the positions of the third virtual character 105 and the second virtual character 104, and an expansion area identifier (black frame) of the view angle expansion area 140 is displayed on the thumbnail map 101. When the second virtual camera moves to the positions of the third virtual character 105 and the second virtual character 104, the area shot by the moved second virtual camera is determined from the virtual scene as the target scene area, and the second game picture obtained by observing the target scene area at the second visual angle, containing the third virtual character 105 and the second virtual character 104, is displayed in the view angle expansion area 140. The position of the view angle expansion area 140 on the graphical user interface 100 may be selected according to actual requirements. Here, highlighting the blood bar 150 of the third virtual character 105 and the blood bar 150 of the second virtual character 104 helps the user pay timely attention to the life values of the third virtual character 105 and the second virtual character 104, and helps the user decide whether to control the first virtual character 103 to provide support.
In the above process, the view angle expansion control 160 is set on the graphical user interface 100, where the view angle expansion control 160 is equivalent to a switch that starts the angle sensing module of the terminal device in the game; only after the user touches the view angle expansion control 160 can the terminal device respond to the rotation operation issued by the user. This avoids the view angle expansion area 140 being displayed on the graphical user interface 100 by mistake due to the player's misoperation, and improves the accuracy of game operation. In addition, the operation does not interrupt the continuity with which the user controls the first virtual character 103 to complete game behavior in the virtual scene, so the game operation efficiency is improved. It also ensures that when the user observes the second game picture obtained by observing the target scene area at the second visual angle, the first game picture obtained by observing part of the virtual scene at the first visual angle is always displayed on the graphical user interface 100, so that the user does not feel detached from the current game behavior, which further improves the game experience of the user.
According to the game display control method provided by the embodiment of the application, after the terminal device in the virtual fight task is controlled to enter the gravity sensing state in response to the gravity sensing start instruction, the terminal device is rotated to control the second virtual camera to move in the moving direction corresponding to the rotation direction of the terminal device, the area shot by the moved second virtual camera is determined from the virtual scene as the target scene area, and the second game picture obtained by observing the target scene area at the second visual angle is displayed in real time in the view angle expansion area displayed on the graphical user interface. In this way, while the first game picture, obtained by observing part of the virtual scene at the first visual angle and displayed on the graphical user interface, remains unchanged, the user can observe the second game picture displayed in real time in the view angle expansion area, pay timely attention to the game situation around the first virtual character to decide whether to support or kill, and the continuity of the game behavior of the first virtual character in the virtual scene is not interrupted, thereby ensuring the continuity of the game. Moreover, the graphical user interface simultaneously contains the first game picture obtained by observing part of the virtual scene at the first visual angle and the second game picture obtained by observing the target scene area at the second visual angle, so that the first game picture is always displayed on the graphical user interface when the user observes the second game picture, and the first virtual character is always in the first game picture; the user therefore does not feel detached from the current game behavior, which further improves the game experience of the user.
Furthermore, the embodiment of the application can accurately acquire the user's intention, avoiding the situation in which the view angle expansion area is displayed on the graphical user interface by mistake due to the player's misoperation and affects the game experience of the user. It can further superimpose and display virtual scenes observed at different visual angles without interrupting the user's current interactive operation, ensuring the continuity of the user's operation in the game, improving the game operation efficiency, and improving the game experience of the user.
Based on the same inventive concept, the embodiment of the present application further provides a game display control device corresponding to the game display control method, and since the principle of solving the problem by the device in the embodiment of the present application is similar to that of the game display control method in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 12 and 13, fig. 12 is a schematic structural diagram of a game display control device according to an embodiment of the present application, and fig. 13 is a schematic structural diagram of another game display control device according to an embodiment of the present application. As shown in fig. 12, the apparatus 1200 includes:
the rotation display module 1201 is configured to, in a process of controlling the first virtual character to perform a virtual fight task, respond to a rotation operation for the terminal device, obtain rotation information of the terminal device, and display a view angle expansion area on the graphical user interface;
an area determining module 1202, configured to determine a target scene area from the virtual scene according to rotation information of the terminal device;
the region displaying module 1203 is configured to control the view angle expansion region to display a second game screen obtained by observing the target scene region at a second view angle.
In an alternative embodiment, the apparatus further comprises a gravity sensing module 1204, the gravity sensing module 1204 being configured to:
responding to a gravity sensing starting instruction, and controlling the terminal equipment to enter a gravity sensing state;
the rotary display module 1201 is specifically configured to:
and responding to the rotation operation of the terminal equipment in the gravity sensing state, and acquiring the rotation information of the terminal equipment.
In an alternative embodiment, a view angle expansion control is displayed on a graphical user interface; the gravity sensing module 1204 is specifically configured to:
and responding to a first touch operation aiming at the visual angle expansion control, and controlling the terminal equipment to enter a gravity sensing state.
In an alternative embodiment, the area determination module 1202 is specifically configured to:
and determining the moving direction of the second virtual camera according to the rotation information of the terminal equipment, controlling the second virtual camera to move according to the moving direction, and determining the area shot by the moved second virtual camera from the virtual scene as a target scene area.
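A minimal sketch of this pipeline, mapping a rotation direction to a camera movement and taking the camera's new footprint as the target scene area; the direction convention, step size, and footprint dimensions are illustrative assumptions:

```python
# Map the terminal's rotation direction to a camera movement
# direction in the scene plane (illustrative convention).
DIRECTION = {
    "tilt_up": (0, 1), "tilt_down": (0, -1),
    "tilt_left": (-1, 0), "tilt_right": (1, 0),
}

def move_camera_and_get_target(camera_pos, rotation_dir, step=5.0,
                               footprint=(16.0, 9.0)):
    """Move the second virtual camera by `step` along the direction
    mapped from the rotation info, and return its new position plus
    the rectangular scene area it now shoots (the target scene area)."""
    dx, dy = DIRECTION[rotation_dir]
    x, y = camera_pos[0] + dx * step, camera_pos[1] + dy * step
    w, h = footprint
    target_area = (x - w / 2, y - h / 2, w, h)  # (left, bottom, w, h)
    return (x, y), target_area

pos, area = move_camera_and_get_target((0.0, 0.0), "tilt_right")
print(pos)   # (5.0, 0.0)
print(area)  # (-3.0, -4.5, 16.0, 9.0)
```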
In an alternative embodiment, the virtual scene is divided into a plurality of visual field observation areas, the graphical user interface comprises a plurality of sub-interface areas, and the position distribution of the plurality of sub-interface areas in the graphical user interface corresponds to the position distribution of the plurality of visual field observation areas in the virtual scene one by one; the area determination module 1202 is specifically further configured to:
Determining a target sub-interface area corresponding to the rotation information from a plurality of sub-interface areas of the graphical user interface;
and determining the visual field observation area corresponding to the target sub-interface area as a target scene area according to the position distribution of the visual field observation areas in the virtual scene.
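Because the sub-interface areas and the field-of-view observation areas share the same position distribution, the lookup reduces to reusing the selected grid cell in scene coordinates; a sketch, where the 3x3 grid and scene size are illustrative assumptions:

```python
def observation_area_bounds(cell, scene_size=(300.0, 300.0), grid=(3, 3)):
    """The virtual scene is divided into a grid of field-of-view
    observation areas whose layout mirrors the grid of sub-interface
    areas on the GUI. Given the (row, col) of the target sub-interface
    area, return the matching observation area's bounds in the scene."""
    rows, cols = grid
    row, col = cell
    if not (0 <= row < rows and 0 <= col < cols):
        raise ValueError("cell outside the interface grid")
    cell_w, cell_h = scene_size[0] / cols, scene_size[1] / rows
    return (col * cell_w, row * cell_h, cell_w, cell_h)  # (x, y, w, h)

# The top-right sub-interface area maps to the scene's top-right region.
print(observation_area_bounds((0, 2)))  # (200.0, 0.0, 100.0, 100.0)
```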
In an alternative embodiment, the graphical user interface further displays a thumbnail map, the thumbnail map displaying a third game screen obtained by observing all virtual scenes at a third viewing angle, the third game screen including azimuth information of each virtual character located in the virtual scenes; the second game screen includes attribute information of each virtual character located in the target scene area.
In an alternative embodiment, the display form for displaying the view expansion area on the graphical user interface includes one of:
overlapping and displaying the view angle expansion area on the thumbnail map;
the view angle expansion area is displayed at an arbitrary position on the graphic user interface other than the thumbnail map in accordance with the designated transparency.
In an alternative embodiment, the apparatus further comprises an area moving module (not shown in the figure) for: controlling the viewing angle expansion area to move on the graphical user interface in response to the movement operation for the viewing angle expansion area;
Or, the apparatus further comprises a region enlarging module (not shown in the figure) for: the area of the viewing angle expansion area is adjusted in response to an expansion operation for the viewing angle expansion area.
In an alternative embodiment, the perspective expansion control is displayed on the graphical user interface, and the apparatus further comprises a region locking module (not shown in the figure) for:
when the position area corresponding to the moved view angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, or the area of the view angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, locking the position of the view angle expansion area on the graphical user interface in response to a second touch operation on the view angle expansion control, and reducing the transparency of the view angle expansion area, so that the user controlling the first virtual character can operate the functional control through the view angle expansion area.
In an alternative embodiment, the apparatus further comprises a cancel display module (not shown in the figure) for:
and controlling the view angle expansion area in the locked state to cancel display in response to the third touch operation for the view angle expansion control.
In an alternative embodiment, the apparatus further comprises an identification display module (not shown in the figure) for:
in the process of controlling the view angle expansion area to display the second game picture obtained by observing the target scene area at the second view angle, displaying an expansion area identifier in the thumbnail map displayed in the graphical user interface, at the map position corresponding to the actual position, in the virtual scene, of the target scene area displayed in the view angle expansion area.
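Placing the expansion area identifier on the thumbnail map reduces to a world-to-minimap coordinate transform; a sketch, with the scene size and minimap rectangle as illustrative assumptions:

```python
def to_minimap(world_pos, scene_size=(300.0, 300.0),
               minimap_rect=(10.0, 10.0, 60.0, 60.0)):
    """Convert a position in the virtual scene to a position inside
    the thumbnail map's on-screen rectangle, where the expansion
    area identifier (e.g. a black frame) is drawn."""
    mx, my, mw, mh = minimap_rect
    sx, sy = scene_size
    x = mx + (world_pos[0] / sx) * mw
    y = my + (world_pos[1] / sy) * mh
    return x, y

# Centre of the target scene area -> centre of the identifier frame.
print(to_minimap((150.0, 150.0)))  # (40.0, 40.0)
```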
In an alternative embodiment, the apparatus further comprises a tag synchronization module (not shown in the figure) for:
and responding to the marking operation, displaying marking prompt information in the visual angle expansion area, and sending the marking prompt information to a game picture of at least one second virtual character, wherein the second virtual character and the first virtual character belong to the same camp.
In an alternative embodiment, the area displaying module 1203 is specifically configured to:
controlling the second virtual camera to move to the target scene area at the first movement rate, and displaying the target scene area observed at the second viewing angle in the viewing angle expansion area; the second viewing angle is a shooting viewing angle preset for the second virtual camera;
if it is detected that the target scene area observed at the second viewing angle includes a virtual object, controlling the second virtual camera to move within the target scene area at the second movement rate, and displaying the virtual object in the target scene area observed at the second viewing angle in the viewing angle expansion area; wherein the first movement rate is greater than the second movement rate.
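The two-rate behaviour above, travelling fast toward the target and slowing down for inspection once a virtual object is in view, could be sketched as follows; the rate values and the object-detection flag are illustrative assumptions:

```python
def camera_rate(objects_in_view, first_rate=12.0, second_rate=3.0):
    """Return the second virtual camera's movement rate: the faster
    first rate while travelling toward the target scene area, and
    the slower second rate once a virtual object is detected in the
    area observed at the second viewing angle."""
    assert first_rate > second_rate  # required by the scheme
    return second_rate if objects_in_view else first_rate

print(camera_rate(objects_in_view=False))  # 12.0: fast approach
print(camera_rate(objects_in_view=True))   # 3.0: slow inspection
```

The slower rate gives the user time to read the attribute information (such as blood bars) of the virtual objects shown in the expansion area.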
According to the game display control device provided by the embodiment of the application, while the first game picture displayed on the graphical user interface, obtained by observing part of the virtual scene at the first visual angle, remains unchanged, the user controlling the first virtual character pays timely attention to the game situation around the first virtual character through the second game picture obtained by observing the target scene area at the second visual angle and displayed in real time in the view angle expansion area, so as to decide whether to support or kill; the continuity of the game behavior of the first virtual character in the virtual scene is not interrupted, ensuring the continuity of the game. Moreover, the graphical user interface simultaneously contains the first game picture obtained by observing part of the virtual scene at the first visual angle and the second game picture obtained by observing the target scene area at the second visual angle, so that when the user observes the second game picture, the graphical user interface always displays the first game picture in which the first virtual character is located, and the user does not feel detached from the current game behavior, which further improves the game experience of the user. Furthermore, the embodiment of the application can accurately acquire the user's intention, avoiding the situation in which the view angle expansion area is displayed on the graphical user interface by mistake due to the player's misoperation and affects the game experience of the user; it can further superimpose and display virtual scenes observed at different visual angles without interrupting the user's current interactive operation, ensuring the continuity of the user's operation in the game, improving the game operation efficiency, and improving the game experience of the user.
Referring to fig. 14, fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 14, electronic device 1400 includes a processor 1401, a memory 1402, and a bus 1403.
The memory 1402 stores machine-readable instructions executable by the processor 1401. When the electronic device 1400 is running, the processor 1401 and the memory 1402 communicate over the bus 1403, so that the processor 1401 executes the following instructions:
responding to the rotation operation aiming at the terminal equipment in the process of controlling the first virtual character to perform the virtual fight task, acquiring the rotation information of the terminal equipment, and displaying the visual angle expansion area on a graphical user interface;
determining a target scene area from the virtual scene according to the rotation information of the terminal equipment;
and controlling the second game picture obtained by observing the target scene area at the second visual angle to be displayed in the visual angle expansion area.
In an alternative embodiment, the instructions executed by processor 1401 further include:
responding to a gravity sensing starting instruction, and controlling the terminal equipment to enter a gravity sensing state;
in the instructions executed by the processor 1401, in response to a rotation operation for a terminal device, a step of acquiring rotation information of the terminal device includes:
And responding to the rotation operation of the terminal equipment in the gravity sensing state, and acquiring the rotation information of the terminal equipment.
In an alternative embodiment, a view angle expansion control is displayed on a graphical user interface; in the instructions executed by the processor 1401, in response to the gravity sensing on instruction, the step of controlling the terminal device to enter the gravity sensing state includes:
and responding to a first touch operation aiming at the visual angle expansion control, and controlling the terminal equipment to enter a gravity sensing state.
In an alternative embodiment, the step of determining the target scene area from the virtual scene according to the rotation information of the terminal device in the instructions executed by the processor 1401 includes:
and determining the moving direction of the second virtual camera according to the rotation information of the terminal equipment, controlling the second virtual camera to move according to the moving direction, and determining the area shot by the moved second virtual camera from the virtual scene as a target scene area.
In an alternative embodiment, the virtual scene is divided into a plurality of visual field observation areas, the graphical user interface comprises a plurality of sub-interface areas, and the position distribution of the plurality of sub-interface areas in the graphical user interface corresponds to the position distribution of the plurality of visual field observation areas in the virtual scene one by one;
The step of determining the target scene area from the virtual scene according to the rotation information of the terminal device in the instructions executed by the processor 1401 includes:
determining a target sub-interface area corresponding to the rotation information from a plurality of sub-interface areas of the graphical user interface;
and determining the visual field observation area corresponding to the target sub-interface area as a target scene area according to the position distribution of the visual field observation areas in the virtual scene.
In an alternative embodiment, the graphical user interface further displays a thumbnail map, the thumbnail map displaying a third game screen obtained by observing all virtual scenes at a third viewing angle, the third game screen including azimuth information of each virtual character located in the virtual scenes; the second game screen includes attribute information of each virtual character located in the target scene area.
In an alternative embodiment, the display form for displaying the view expansion area on the graphical user interface includes one of:
overlapping and displaying the view angle expansion area on the thumbnail map;
the view angle expansion area is displayed at an arbitrary position on the graphic user interface other than the thumbnail map in accordance with the designated transparency.
In an alternative embodiment, the instructions executed by processor 1401 further include:
Controlling the viewing angle expansion area to move on the graphical user interface in response to the movement operation for the viewing angle expansion area;
or, in response to an enlarging operation for the viewing angle expansion region, the region area of the viewing angle expansion region is adjusted.
In an alternative embodiment, the visual angle expansion control is displayed on the graphical user interface, and the instructions executed by the processor 1401 further include:
when the position area corresponding to the moved view angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, or the area of the view angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, locking the position of the view angle expansion area on the graphical user interface in response to a second touch operation on the view angle expansion control, and reducing the transparency of the view angle expansion area, so that the user controlling the first virtual character can operate the functional control through the view angle expansion area.
In an alternative embodiment, the instructions executed by processor 1401 further include:
and controlling the view angle expansion area in the locked state to cancel display in response to the third touch operation for the view angle expansion control.
In an alternative embodiment, the instructions executed by processor 1401 further include:
in the process of controlling the view angle expansion area to display the second game picture obtained by observing the target scene area at the second view angle, displaying an expansion area identifier in the thumbnail map displayed in the graphical user interface, at the map position corresponding to the actual position, in the virtual scene, of the target scene area displayed in the view angle expansion area.
In an alternative embodiment, the instructions executed by processor 1401 further include:
in response to a marking operation, displaying marking prompt information in the view angle expansion area, and sending the marking prompt information to the game screen of at least one second virtual character, where the second virtual character and the first virtual character belong to the same camp.
In an alternative embodiment, in the instructions executed by the processor 1401, the step of controlling the view angle expansion area to display the second game screen obtained by observing the target scene area at the second view angle includes:
controlling the second virtual camera to move to the target scene area at a first movement speed, and displaying, in the view angle expansion area, the target scene area observed at the second view angle, where the second view angle is a shooting angle preset for the second virtual camera;
if a virtual object is detected in the target scene area observed at the second view angle, controlling the second virtual camera to move within the target scene area at a second movement speed, and displaying, in the view angle expansion area, the virtual object in the target scene area observed at the second view angle; where the first movement speed is greater than the second movement speed.
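For illustration only, the two-speed camera movement described above can be sketched as follows; the one-dimensional distance model and all identifiers are assumptions of this sketch, not the application's implementation.

```python
# Hypothetical sketch: the second virtual camera approaches the target scene
# area at a faster first movement speed, then, once a virtual object is in
# view, moves at a slower second movement speed.

FIRST_SPEED = 10.0   # fast approach speed (units per tick); assumed value
SECOND_SPEED = 2.0   # slow in-area speed; FIRST_SPEED > SECOND_SPEED

def step_camera(camera_pos, target_pos, object_in_view):
    """Advance the camera one tick toward target_pos, choosing the speed
    according to whether a virtual object is currently in view."""
    speed = SECOND_SPEED if object_in_view else FIRST_SPEED
    delta = target_pos - camera_pos
    if abs(delta) <= speed:          # close enough: snap without overshoot
        return target_pos
    return camera_pos + speed * (1 if delta > 0 else -1)
```

Each game tick would call `step_camera` with the detection result for the current frame, so the camera automatically slows down when a virtual object enters the observed area.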
In the embodiments of the present application, while the first game screen displayed on the graphical user interface, obtained by observing part of the virtual scene at the first view angle, remains unchanged, the user can keep track of the game situation around the first virtual character in real time through the second game screen displayed in the view angle expansion area, obtained by observing the target scene area at the second view angle, and thus decide whether to provide support or pursue a kill. The continuity of the first virtual character's behavior in the virtual scene is not interrupted, which guarantees the continuity of the game. Because the graphical user interface simultaneously contains the first game screen obtained by observing part of the virtual scene at the first view angle and the second game screen obtained by observing the target scene area at the second view angle, the first game screen in which the first virtual character is located remains visible while the user watches the second game screen, so the user does not feel detached from the current game behavior, which further improves the user's game experience. Furthermore, the embodiments of the present application can accurately capture the user's intention and avoid mistakenly displaying the view angle expansion area on the graphical user interface due to an accidental operation by the player, which would harm the user's game experience. Virtual scenes observed at different view angles can therefore be displayed in a superimposed manner without interrupting the user's current interactive operation, which ensures the continuity of the user's in-game operations, improves game operation efficiency, and improves the user's game experience.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
in the process of controlling a first virtual character to perform a virtual combat task, in response to a rotation operation on the terminal device, acquiring rotation information of the terminal device and displaying a view angle expansion area on the graphical user interface;
determining a target scene area from the virtual scene according to the rotation information of the terminal device;
and controlling the view angle expansion area to display a second game screen obtained by observing the target scene area at a second view angle.
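For illustration only, the three steps above can be sketched as follows; every class, method, and parameter name here is an assumption of this sketch, not part of the application.

```python
# Hypothetical sketch of the flow: on a rotation of the terminal device,
# show the view angle expansion area, determine a target scene area from
# the rotation information, and render the second game screen into it.

class ExpansionAreaController:
    def __init__(self, gui, scene):
        self.gui = gui      # assumed to offer show_expansion_area()/render_expansion_area()
        self.scene = scene  # assumed to offer area_for_rotation(rotation)

    def on_device_rotation(self, rotation):
        self.gui.show_expansion_area()                   # step 1: display the expansion area
        target = self.scene.area_for_rotation(rotation)  # step 2: pick the target scene area
        self.gui.render_expansion_area(target)           # step 3: render the second game screen
        return target
```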
In an alternative embodiment, the instructions stored on the computer-readable storage medium further include:
controlling the terminal device to enter a gravity sensing state in response to a gravity sensing start instruction;
in the instructions stored on the computer-readable storage medium, the step of acquiring rotation information of the terminal device in response to a rotation operation on the terminal device includes:
acquiring the rotation information of the terminal device in response to a rotation operation on the terminal device in the gravity sensing state.
In an alternative embodiment, a view angle expansion control is displayed on the graphical user interface; in the instructions stored on the computer-readable storage medium, the step of controlling the terminal device to enter a gravity sensing state in response to a gravity sensing start instruction includes:
controlling the terminal device to enter the gravity sensing state in response to a first touch operation on the view angle expansion control.
In an alternative embodiment, in the instructions stored on the computer-readable storage medium, the step of determining the target scene area from the virtual scene according to the rotation information of the terminal device includes:
determining a movement direction of the second virtual camera according to the rotation information of the terminal device, controlling the second virtual camera to move in the movement direction, and determining, in the virtual scene, the area shot by the moved second virtual camera as the target scene area.
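For illustration only, one way to turn rotation information into a movement direction for the second virtual camera is to map device tilt around each axis to movement along one screen axis, ignoring small tilts as noise. The axis convention, dead zone, and all names below are assumptions of this sketch.

```python
# Hypothetical sketch: map device tilt angles (from the gravity sensing
# state) to a camera movement direction. Tilt around the y-axis (roll)
# moves the camera left/right; tilt around the x-axis (pitch) moves it
# up/down in the scene.

DEAD_ZONE = 5.0  # degrees of tilt ignored as sensor noise; assumed value

def movement_direction(pitch_deg, roll_deg):
    """Return a (dx, dy) direction, each component in {-1, 0, 1}."""
    dx = 0 if abs(roll_deg) < DEAD_ZONE else (1 if roll_deg > 0 else -1)
    dy = 0 if abs(pitch_deg) < DEAD_ZONE else (1 if pitch_deg > 0 else -1)
    return (dx, dy)
```

Each frame, the returned direction could be multiplied by the camera's movement speed to update its position, with `(0, 0)` leaving the camera where it is.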
In an alternative embodiment, the virtual scene is divided into a plurality of field-of-view observation areas, the graphical user interface includes a plurality of sub-interface areas, and the position distribution of the sub-interface areas in the graphical user interface corresponds one-to-one with the position distribution of the field-of-view observation areas in the virtual scene;
in the instructions stored on the computer-readable storage medium, the step of determining the target scene area from the virtual scene according to the rotation information of the terminal device includes:
determining, from the plurality of sub-interface areas of the graphical user interface, a target sub-interface area corresponding to the rotation information;
and determining, according to the position distribution of the field-of-view observation areas in the virtual scene, the field-of-view observation area corresponding to the target sub-interface area as the target scene area.
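For illustration only, the one-to-one correspondence above can be sketched as a grid lookup: the rotation information selects a cell of the interface grid, and the matching cell of the scene grid is the target scene area. The 3x3 division, the normalization of the rotation input, and all names are assumptions of this sketch.

```python
# Hypothetical sketch: the GUI and the virtual scene are both divided into
# a grid whose cells correspond one-to-one, so the sub-interface cell
# selected by the rotation directly yields the field-of-view observation
# area to use as the target scene area.

def pick_target_area(rotation_x, rotation_y, scene_areas, grid=3):
    """rotation_x / rotation_y are assumed normalized to [0, 1) across the
    GUI; scene_areas is a grid x grid nested list of observation areas."""
    row = min(int(rotation_y * grid), grid - 1)
    col = min(int(rotation_x * grid), grid - 1)
    return scene_areas[row][col]
```

With a 3x3 layout, a rotation pointing at the top-right of the interface selects the top-right field-of-view observation area of the scene.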
In an alternative embodiment, the graphical user interface further displays a thumbnail map, and the thumbnail map displays a third game screen obtained by observing the entire virtual scene at a third view angle; the third game screen includes azimuth information of each virtual character located in the virtual scene, and the second game screen includes attribute information of each virtual character located in the target scene area.
In an alternative embodiment, the display form of the view angle expansion area on the graphical user interface includes one of the following:
displaying the view angle expansion area superimposed on the thumbnail map;
displaying the view angle expansion area with a specified transparency at any position on the graphical user interface other than the thumbnail map.
In an alternative embodiment, the instructions stored on the computer-readable storage medium further include:
controlling the view angle expansion area to move on the graphical user interface in response to a movement operation on the view angle expansion area;
or, adjusting the area occupied by the view angle expansion area in response to an enlargement operation on the view angle expansion area.
In an alternative embodiment, a view angle expansion control is displayed on the graphical user interface, and the instructions stored on the computer-readable storage medium further include:
when the position area of the moved view angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, or the adjusted area of the view angle expansion area partially overlaps the touch area of the functional control, locking the position of the view angle expansion area on the graphical user interface in response to a second touch operation on the view angle expansion control, and reducing the transparency of the view angle expansion area, so that the user controlling the first virtual character can operate the functional control through the view angle expansion area.
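For illustration only, the trigger condition above is a rectangle-intersection test between the expansion area and a functional control's touch area. The application speaks of reducing the area's transparency so that the control underneath remains operable; the sketch models this as lowering an opacity value, which, together with all names and the rectangle convention, is an assumption of this illustration.

```python
# Hypothetical sketch: if the expansion area's rectangle intersects a
# functional control's touch rectangle, a second touch on the expansion
# control locks the area in place and fades it so the control underneath
# can be operated through it.

def rects_overlap(a, b):
    """Each rect is (left, top, width, height); True on intersection."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

class ExpansionArea:
    def __init__(self, rect, opacity=1.0):
        self.rect, self.opacity, self.locked = rect, opacity, False

    def on_second_touch(self, control_rect, faded_opacity=0.3):
        """Handle the second touch on the view angle expansion control."""
        if rects_overlap(self.rect, control_rect):
            self.locked = True            # freeze position on the GUI
            self.opacity = faded_opacity  # fade so touches reach the control
        return self.locked
```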
In an alternative embodiment, the instructions stored on the computer-readable storage medium further include:
cancelling display of the view angle expansion area in the locked state in response to a third touch operation on the view angle expansion control.
In an alternative embodiment, the instructions stored on the computer-readable storage medium further include:
while the view angle expansion area displays the second game screen obtained by observing the target scene area at the second view angle, displaying an expansion area identifier in the thumbnail map displayed on the graphical user interface, at the map position corresponding to the actual position, in the virtual scene, of the part of the virtual scene observed at the second view angle that is displayed in the view angle expansion area.
In an alternative embodiment, the instructions stored on the computer-readable storage medium further include:
in response to a marking operation, displaying marking prompt information in the view angle expansion area, and sending the marking prompt information to the game screen of at least one second virtual character, where the second virtual character and the first virtual character belong to the same camp.
In an alternative embodiment, in the instructions stored on the computer-readable storage medium, the step of controlling the view angle expansion area to display the second game screen obtained by observing the target scene area at the second view angle includes:
controlling the second virtual camera to move to the target scene area at a first movement speed, and displaying, in the view angle expansion area, the target scene area observed at the second view angle, where the second view angle is a shooting angle preset for the second virtual camera;
if a virtual object is detected in the target scene area observed at the second view angle, controlling the second virtual camera to move within the target scene area at a second movement speed, and displaying, in the view angle expansion area, the virtual object in the target scene area observed at the second view angle; where the first movement speed is greater than the second movement speed.
In the embodiments of the present application, while the first game screen displayed on the graphical user interface, obtained by observing part of the virtual scene at the first view angle, remains unchanged, the user can keep track of the game situation around the first virtual character in real time through the second game screen displayed in the view angle expansion area, obtained by observing the target scene area at the second view angle, and thus decide whether to provide support or pursue a kill. The continuity of the first virtual character's behavior in the virtual scene is not interrupted, which guarantees the continuity of the game. Because the graphical user interface simultaneously contains the first game screen obtained by observing part of the virtual scene at the first view angle and the second game screen obtained by observing the target scene area at the second view angle, the first game screen in which the first virtual character is located remains visible while the user watches the second game screen, so the user does not feel detached from the current game behavior, which further improves the user's game experience. Furthermore, the embodiments of the present application can accurately capture the user's intention and avoid mistakenly displaying the view angle expansion area on the graphical user interface due to an accidental operation by the player, which would harm the user's game experience. Virtual scenes observed at different view angles can therefore be displayed in a superimposed manner without interrupting the user's current interactive operation, which ensures the continuity of the user's in-game operations, improves game operation efficiency, and improves the user's game experience.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other manners of division in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a processor-executable, non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present application, provided to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope disclosed in the present application. Such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A display control method for a game, characterized in that a graphical user interface is provided by a terminal device, the graphical user interface comprising a first game screen obtained by observing part of a virtual scene at a first view angle in a virtual combat task, the method comprising:
in the process of controlling a first virtual character to perform the virtual combat task, in response to a rotation operation on the terminal device, acquiring rotation information of the terminal device and displaying a view angle expansion area on the graphical user interface;
determining a target scene area from the virtual scene according to the rotation information of the terminal device;
and controlling the view angle expansion area to display a second game screen obtained by observing the target scene area at a second view angle.
2. The method according to claim 1, wherein the method further comprises:
responding to a gravity sensing starting instruction, and controlling the terminal equipment to enter a gravity sensing state;
the step of acquiring rotation information of the terminal device in response to the rotation operation for the terminal device includes:
and responding to the rotation operation of the terminal equipment in the gravity sensing state, and acquiring the rotation information of the terminal equipment.
3. The method according to claim 2, wherein a view angle expansion control is displayed on the graphical user interface;
the step of controlling the terminal device to enter a gravity sensing state in response to a gravity sensing start instruction comprises:
controlling the terminal device to enter the gravity sensing state in response to a first touch operation on the view angle expansion control.
4. The method according to claim 1, wherein the step of determining the target scene area from the virtual scene according to the rotation information of the terminal device comprises:
determining a movement direction of a second virtual camera according to the rotation information of the terminal device, controlling the second virtual camera to move in the movement direction, and determining, in the virtual scene, the area shot by the moved second virtual camera as the target scene area.
5. The method of claim 1, wherein the virtual scene is divided into a plurality of field of view viewing areas, the graphical user interface comprises a plurality of sub-interface areas, and the position distribution of the plurality of sub-interface areas in the graphical user interface corresponds one-to-one with the position distribution of the plurality of field of view viewing areas in the virtual scene;
And determining a target scene area from the virtual scene according to the rotation information of the terminal equipment, wherein the step comprises the following steps:
determining a target sub-interface area corresponding to the rotation information from a plurality of sub-interface areas of the graphical user interface;
and determining the visual field observation area corresponding to the target sub-interface area as the target scene area according to the position distribution of the visual field observation areas in the virtual scene.
6. The method of claim 1, wherein the graphical user interface further displays a thumbnail map displaying a third game screen obtained by observing all virtual scenes at a third viewing angle, the third game screen including azimuth information of each virtual character located in the virtual scenes; the second game screen includes attribute information of each virtual character located in the target scene area.
7. The method of claim 6, wherein displaying the display form of the view expansion area on the graphical user interface comprises one of:
the view angle expansion area is displayed in a superimposed mode on the thumbnail map;
and displaying the view angle expansion area according to the designated transparency at any position on the graphical user interface except the thumbnail map.
8. The method according to claim 1, wherein the method further comprises:
controlling the view angle expansion area to move on the graphical user interface in response to a movement operation on the view angle expansion area;
or, adjusting the area of the view angle expansion area in response to an enlargement operation on the view angle expansion area.
10. The method according to claim 8, wherein a view angle expansion control is displayed on the graphical user interface, and the method further comprises:
when the position area of the moved view angle expansion area partially overlaps the touch area of a functional control displayed on the graphical user interface, or the adjusted area of the view angle expansion area partially overlaps the touch area of the functional control displayed on the graphical user interface, locking the position of the view angle expansion area on the graphical user interface in response to a second touch operation on the view angle expansion control, and controlling the transparency of the view angle expansion area to be reduced, so that a user controlling the first virtual character can operate the functional control through the view angle expansion area.
10. The method according to claim 9, wherein the method further comprises:
and responding to a third touch operation for the visual angle expansion control, and controlling the visual angle expansion area in the locked state to cancel display.
11. The method according to claim 1, wherein the method further comprises:
in the process of controlling the view angle expansion area to display the second game screen obtained by observing the target scene area at the second view angle, displaying an expansion area identifier in a thumbnail map displayed on the graphical user interface, at a map position corresponding to the actual position, in the virtual scene, of the target scene area observed at the second view angle.
12. The method according to claim 1, wherein the method further comprises:
and responding to the marking operation, displaying marking prompt information in the visual angle expansion area, and sending the marking prompt information to a game picture of at least one second virtual character, wherein the second virtual character and the first virtual character belong to the same camp.
13. The method according to claim 1, wherein the step of controlling the view angle expansion area to display a second game screen obtained by observing the target scene area at a second view angle comprises:
controlling a second virtual camera to move to the target scene area at a first movement speed, and displaying, in the view angle expansion area, the target scene area observed at a second view angle; wherein the second view angle is a shooting angle preset for the second virtual camera;
if a virtual object is detected in the target scene area observed at the second view angle, controlling the second virtual camera to move within the target scene area at a second movement speed, and displaying, in the view angle expansion area, the virtual object in the target scene area observed at the second view angle; wherein the first movement speed is greater than the second movement speed.
14. A display control apparatus for a game, characterized in that a graphical user interface is provided by a terminal device, the graphical user interface including a first game screen obtained by observing a part of a virtual scene at a first angle of view in a virtual combat task, the apparatus comprising:
the rotation display module is configured to, in the process of controlling a first virtual character to perform the virtual combat task, acquire rotation information of the terminal device in response to a rotation operation on the terminal device, and display a view angle expansion area on the graphical user interface;
The area determining module is used for determining a target scene area from the virtual scene according to the rotation information of the terminal equipment;
and the region display module is used for controlling the view angle expansion region to display a second game picture obtained by observing the target scene region at a second view angle.
15. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the method of any one of claims 1 to 13.
16. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 13.
CN202310127723.7A 2023-02-01 2023-02-01 Game display control method and device, electronic equipment and storage medium Pending CN116099195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310127723.7A CN116099195A (en) 2023-02-01 2023-02-01 Game display control method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116099195A true CN116099195A (en) 2023-05-12

Family

ID=86254073


Country Status (1)

Country Link
CN (1) CN116099195A (en)

Similar Documents

Publication Publication Date Title
CN107823882B (en) Information processing method, information processing device, electronic equipment and storage medium
US11504620B2 (en) Method for controlling game character and electronic device and computer storage medium
US10716995B2 (en) Information processing method and apparatus, storage medium, and electronic device
US11975262B2 (en) Information processing method and apparatus, electronic device, and storage medium
JP7329764B2 (en) Game program, information processing system, information processing device, and information processing method
JP7427728B2 (en) Virtual object control method, device, computer device and program thereof
US9849375B2 (en) Game program and game device
EP3970819B1 (en) Interface display method and apparatus, and terminal and storage medium
JP7386360B2 (en) Information processing method, apparatus and terminal device
CN113440846B (en) Game display control method and device, storage medium and electronic equipment
CN111888766B (en) Information processing method and device in game, electronic equipment and storage medium
CN112619137A (en) Game picture switching method and device, electronic equipment and storage medium
TWI793838B (en) Method, device, apparatus, medium and product for selecting interactive mode for virtual object
EP3025769A1 (en) Image processing program, server device, image processing system, and image processing method
CN112791410A (en) Game control method and device, electronic equipment and storage medium
JP2020062116A (en) System, method, and program for providing content using augmented reality technique
CN116099195A (en) Game display control method and device, electronic equipment and storage medium
US11395967B2 (en) Selective indication of off-screen object presence
JP2020089492A (en) Game program, game processing method and game terminal
JP7116220B2 (en) Application control program, application control method and application control system
EP3984608A1 (en) Method and apparatus for controlling virtual object, and terminal and storage medium
WO2024078324A1 (en) Virtual object control method and apparatus, and storage medium and electronic device
CN116764192A (en) Virtual character control method, device, electronic equipment and readable storage medium
CN117753007A (en) Interactive processing method and device for virtual scene, electronic equipment and storage medium
CN113663326A (en) Game skill aiming method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination