WO2019109778A1 - Method, apparatus, and terminal for displaying game match results (游戏对局结果的展示方法、装置及终端) - Google Patents

Method, apparatus, and terminal for displaying game match results

Info

Publication number
WO2019109778A1
Authority
WO
WIPO (PCT)
Prior art keywords
game
virtual character
target virtual
match
displaying
Prior art date
Application number
PCT/CN2018/114824
Other languages
English (en)
French (fr)
Inventor
吴东
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2019109778A1 publication Critical patent/WO2019109778A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera

Definitions

  • The embodiments of the present invention relate to the field of human-computer interaction technologies, and in particular to a method, an apparatus, and a terminal for displaying a game match result.
  • Users typically install the client of a game application on a terminal such as a mobile phone or tablet to enjoy an entertainment experience.
  • After a game match ends, the game client calculates the match result according to the specifics of the match and then shows the result to the user.
  • In the related art, when the game match ends, the client switches directly from the game screen to a result display screen and, after obtaining the match result, loads and displays the result in that screen.
  • the embodiment of the present application provides a method, a device, and a terminal for displaying a game match result, so as to improve the display effect of the game match result.
  • the technical solution is as follows:
  • the embodiment of the present application provides a method for displaying a game match result, where the method is performed by a terminal, and the method includes:
  • during the game match, displaying the game screen from the first-person perspective of the currently controlled target virtual character;
  • when the game match ends, switching the game screen at the end of the game match to a transition picture sequence in which the target virtual character is presented from a third-person perspective; wherein the transition picture sequence includes at least two frames of transition pictures, and the first frame transition picture in the transition picture sequence has the same game scene as the game screen at the end of the game match;
  • an embodiment of the present application provides a device for displaying a game match result, which is applied to a terminal, where the device includes:
  • a screen display module configured to display the game screen in a first person perspective with the currently controlled target virtual character in the process of playing the game
  • a screen switching module configured to switch, when the game match ends, the game screen at the end of the game match to a transition picture sequence in which the target virtual character is presented from a third-person perspective; wherein the transition picture sequence includes at least two frames of transition pictures, and the first frame transition picture in the transition picture sequence has the same game scene as the game screen at the end of the game match;
  • a result obtaining module configured to acquire a game result of the game match in the process of displaying the transition picture sequence
  • the result display module is configured to display the match result.
  • an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for displaying game match results described above.
  • an embodiment of the present application provides a computer-readable storage medium, where the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for displaying game match results described above.
  • In the technical solution provided by the embodiments of the present application, the time during which the transition picture sequence is displayed is used to obtain the match result, thereby avoiding the freeze-like phenomenon caused by directly displaying the result display screen, so that the match result is displayed more smoothly and the display effect is improved.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for displaying a game match result provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a virtual character performing deceleration motion according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a view switching provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a handover process of a transition picture according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a technical solution provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of a method for displaying a game match result provided by another embodiment of the present application.
  • FIG. 8 is a schematic diagram of a character display screen provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a captured character display screen provided by an embodiment of the present application.
  • FIG. 10 is a block diagram of a device for displaying a game match result provided by an embodiment of the present application.
  • FIG. 11 is a block diagram of a device for displaying game match results according to another embodiment of the present application.
  • FIG. 12 is a structural block diagram of a terminal according to an embodiment of the present application.
  • FPS: first-person shooter game.
  • In an FPS, shooting takes place from the player's subjective perspective: the player does not watch a manipulated on-screen virtual character from the outside, but experiences the entire game match immersively.
  • a virtual camera is a functional module that captures a scene in a game by simulating a camera in reality and presents the scene to the player.
  • The first-person perspective is the perspective in which the player controls the virtual character from his or her own subjective point of view. During the game, the player does not see the embodied virtual character in the game screen, or sees only part of its body, while still seeing the scene where the virtual character is located.
  • The third-person perspective is the perspective in which the player controls the virtual character from an outsider's point of view. During the game, the player can see the embodied virtual character in the game screen as well as the scene where the virtual character is located.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • the implementation environment may include: a terminal 110 and a server 120.
  • the terminal 110 refers to an electronic device such as a mobile phone, a tablet computer, a PC (Personal Computer), an electronic reader, a multimedia playback device, and the like.
  • a game client such as an FPS client, may be installed in the terminal 110, or other application clients having game functions may be installed in the terminal 110.
  • Terminal 110 has a communication connection with server 120 over a wired or wireless network.
  • the server 120 can be a server, a server cluster composed of several servers, or a cloud computing service center.
  • the server 120 refers to a server that provides background services for clients installed in the terminal 110.
  • The method provided by the embodiments of the present application may be executed by the terminal 110.
  • The execution entity of each step is a client installed and running in the terminal 110, such as a game client.
  • For convenience of description, the execution entity of each step is described below as the client, but the present application is not limited thereto.
  • FIG. 2 shows a flowchart of a method for displaying a game match result provided by an embodiment of the present application.
  • This method can be applied to the implementation environment shown in FIG. 1.
  • the method can include the following steps:
  • Step 201 In the process of the game match, the game screen is displayed in the first person perspective with the currently controlled target virtual character.
  • A game match refers to one round of play spent completing a certain game goal.
  • The game goal differs from game to game and is not limited in this embodiment.
  • For example, the game goal of a game match can be to clear a certain level, kill a target number of enemies or monsters, destroy or occupy the enemy base camp, and so on.
  • Virtual characters, also known as game characters, game characters, etc., refer to objects that players substitute and manipulate during the game.
  • the number of virtual characters can be one or more.
  • When the number of virtual characters in the game match is one, it is usually a single-player game, and the virtual character refers to the virtual character controlled by the current client.
  • When the number of virtual characters in the game match is multiple, it may be a single-player game in which the current client corresponds to several different virtual characters and the player can switch which character is embodied and controlled during the match; it may also be a multiplayer game in which the multiple virtual characters correspond to multiple different clients, each client's player embodies and controls one or more virtual characters, and the virtual characters controlled by different players may belong to the same camp or to different camps.
  • the virtual character can move in the game scene, such as walking, running, jumping, etc., and can change different poses.
  • the client displays the game screen in the first person perspective with the currently controlled target virtual character.
  • For example, the client can be the client of an FPS game.
  • the target virtual character is located in the game scene provided by the game game.
  • the game scene refers to a virtual scene created during the game match process for the virtual character to compete in the game, such as a virtual house, a virtual island, a virtual map, and the like.
  • The game screen displayed by the client includes the virtual scene provided by the game match and optionally one or more virtual characters located in the virtual scene. When the client displays from the first-person perspective of the currently controlled target virtual character, the target virtual character is not included in the game screen, or only part of its body is shown, such as an arm.
  • Step 202 When the game match ends, the game screen at the end of the game match is switched and displayed as a transition picture sequence in which the target virtual character is in the third person perspective.
  • the game screen at the end of the game match is not directly displayed as the result display screen, but is switched to be displayed as a transition picture sequence.
  • the transition picture sequence is typically a sequence of multi-frame transition pictures.
  • the first frame transition screen in the transition picture sequence has the same game scene as the game screen at the end of the game match.
  • Optionally, the game scene displayed throughout the transition picture sequence is the same as the game scene of the game screen at the end of the game match; that is, after the game match ends, the game scene at the end of the match continues to be displayed in the transition picture sequence.
  • In a possible implementation, the client can control the target virtual character to remain stationary in the game scene; that is, the target virtual character stops moving after the game match ends, remains still, and its position relative to the game scene no longer changes.
  • the client can also control the target virtual character to move in the game scene.
  • the client controls the target virtual character to move at a preset speed in the game scene during the display of the transition picture sequence.
  • the preset moving speed is less than the moving speed of the target virtual character at the end of the game match, and the preset moving speed is not 0. That is, the target virtual character moves at a slow rate after the game match ends.
  • the client controls the target virtual character to decelerate or accelerate at a preset acceleration in the game scene during the display of the transition picture sequence. If the preset acceleration is less than 0, the target virtual character gradually reduces the moving speed after the game match ends until the movement is stopped and remains stationary.
  • the preset acceleration may also be greater than 0 or equal to 0, which is not limited in this embodiment of the present application.
  • For example, the preset acceleration is -1 m/s². As shown in FIG. 3, at the end of the game match the target virtual character is moving toward direction 31 at a speed of 10 m/s; after the match ends, the target virtual character decelerates toward direction 31 from an initial velocity of 10 m/s with an acceleration of -1 m/s².
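The deceleration described above is plain constant-acceleration kinematics. The patent provides no code, so the following Python sketch is purely illustrative (the function and parameter names are invented): the client updates the character's speed once per frame and clamps it at zero so the character never reverses.

```python
def decelerate(v0, a, dt, steps):
    """Advance a character's speed frame by frame under a constant
    (typically negative) acceleration, clamping at zero so the
    character stops rather than moving backward."""
    v = v0
    speeds = []
    for _ in range(steps):
        v = max(0.0, v + a * dt)
        speeds.append(v)
    return speeds
```

With the example values above (initial speed 10 m/s, acceleration -1 m/s², one update per second), the character comes to rest after ten updates and stays at rest thereafter.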
  • the client controls the target virtual character's perspective to switch from the first person perspective to the third person perspective after the game match ends.
  • the transition picture sequence is displayed in a third-person perspective with the target virtual character. In this way, the target virtual character is completely displayed in the transition picture sequence.
  • In the transition picture sequence, all of the target virtual character's body may be displayed, or only part of it, though more of the body is shown than in the first-person perspective.
  • If the target virtual character currently controlled by the client is already in the third-person perspective at the end of the game match, the perspective switch need not be performed.
  • For example, during the match the target virtual character currently controlled by the client shows only its two hands in the game screen, with parts such as the head and body not displayed; after the match ends, the entire body of the target virtual character is fully displayed in the transition screen.
  • the above-mentioned switching process may be performed in a manner of fading in and out, or may be switched in a manner of rotating and jumping out, or in other manners, which is not limited in the embodiment of the present application.
  • Refer to FIG. 4, which exemplarily shows the switching process of some transition picture frames when the perspective of the target virtual character currently controlled by the client is switched from the first-person perspective to the third-person perspective.
  • Optionally, during display of the transition picture sequence, the client controls the lens of the virtual camera corresponding to the target virtual character to gradually move away from the target virtual character, and/or to rotate around the target virtual character.
  • For example, the client controls the lens of the virtual camera corresponding to the target virtual character to gradually move away from the target virtual character; when the lens would touch an obstacle while retreating, the client controls the lens to rotate around the target virtual character to avoid the obstacle, and then continues to pull the lens away. In this way, the realism of the display content of the transition picture sequence is preserved, and the lens clipping through obstacles is avoided.
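The pull-back-then-orbit behavior can be sketched in 2D as one camera update per frame: retreat along the character-to-camera axis unless the next position would collide, in which case rotate the camera's offset around the character instead. This is a hypothetical illustration only; the `hits_obstacle` callback and all names are assumptions, not part of the patent.

```python
import math

def update_camera(cam_pos, char_pos, retreat_step, orbit_step_deg, hits_obstacle):
    """One tick of the transition camera (2D sketch). Assumes the
    camera is never exactly at the character's position."""
    dx, dy = cam_pos[0] - char_pos[0], cam_pos[1] - char_pos[1]
    dist = math.hypot(dx, dy)
    # Candidate position: same direction from the character, farther away.
    next_pos = (char_pos[0] + dx / dist * (dist + retreat_step),
                char_pos[1] + dy / dist * (dist + retreat_step))
    if not hits_obstacle(next_pos):
        return next_pos
    # Blocked: orbit by rotating the camera offset around the character.
    ang = math.radians(orbit_step_deg)
    rx = dx * math.cos(ang) - dy * math.sin(ang)
    ry = dx * math.sin(ang) + dy * math.cos(ang)
    return (char_pos[0] + rx, char_pos[1] + ry)
```

A real engine would do this in 3D with a physics raycast for the obstacle test; the structure (retreat, test, orbit on failure) is the same.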
  • Step 203: In the process of displaying the transition picture sequence, the match result of the game match is obtained.
  • the client uses the process of displaying the transition picture sequence to obtain the game result of the game match.
  • If it is a standalone game, the client may settle the match result of the game match locally; if it is a networked game, the client may send a result acquisition request to the server to request the match result from the server.
  • The server settles the match result of the game match and sends it to the client.
  • the user may also manipulate the target virtual character release skill.
  • In a possible implementation, the client acquires a skill release instruction corresponding to the target virtual character, the skill release instruction being used to trigger the target virtual character to release a skill, and controls the target virtual character to release the skill according to a preset skill release speed.
  • the preset skill release speed is less than the skill release speed of the target virtual character during the game match.
  • Optionally, the client may also, by default, not respond to the skill release instruction after the game match ends; that is, the user is not allowed to trigger the target virtual character to release skills again.
  • the user may customize whether the target avatar release skill can be triggered after the game match ends.
  • For example, the client receives first setting information used to set that the skill of the target virtual character is allowed to be triggered after the game match ends, and, according to the first setting information, sets the skill of the target virtual character to be allowed to be triggered after the game match ends.
  • For example, the client receives second setting information used to set that the skill of the target virtual character is not allowed to be triggered after the game match ends, and, according to the second setting information, sets the skill of the target virtual character to not be allowed to be triggered after the game match ends.
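The default-deny behavior plus the first/second setting information amount to a small policy object. A minimal hypothetical sketch (the class name, the 0.5 speed scale, and the zero-speed "ignored" convention are all invented for illustration):

```python
class PostMatchSkillPolicy:
    """Tracks whether skill releases are allowed after the match ends,
    and at what slowed speed they play back."""

    def __init__(self, allow_after_match=False, speed_scale=0.5):
        self.allow_after_match = allow_after_match  # default: deny
        self.speed_scale = speed_scale  # < 1.0: slower than in-match

    def apply_setting(self, allow):
        # Corresponds to the first (allow) / second (deny) setting info.
        self.allow_after_match = allow

    def release_speed(self, in_match_speed, match_over):
        if not match_over:
            return in_match_speed
        if not self.allow_after_match:
            return 0.0  # instruction is ignored after the match
        return in_match_speed * self.speed_scale
```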
  • Step 204: The match result is displayed.
  • After the client obtains the match result, it displays the match result. Optionally, the client displays the match result on the upper layer of the transition screen, or displays the match result within the transition screen.
  • In a possible implementation, before displaying the match result, the client adjusts display parameters of the transition picture, including blur and/or brightness; for example, the client gradually increases the blur of the transition picture and gradually reduces its brightness. The client then displays the transition screen according to the adjusted display parameters and displays the match result on the upper layer of the transition screen.
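The gradual blur increase and brightness decrease can be modeled as a simple linear ramp over the closing frames of the transition. The sketch below is illustrative only; the maximum blur value and the linear curve are arbitrary assumptions, not specified by the patent.

```python
def fade_params(frame, total_frames, max_blur=8.0):
    """Return (blur, brightness) for one transition frame: blur ramps
    from 0 to max_blur while brightness ramps from 1.0 down to 0.0."""
    t = min(1.0, frame / total_frames)
    blur = max_blur * t       # 0.0 -> max_blur
    brightness = 1.0 - t      # 1.0 -> 0.0 (fully dark)
    return blur, brightness
```

An engine would feed these values into a blur post-process and a darkening mask each frame until the screen is dark, then draw the result overlay.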
  • FIG. 5 shows a schematic diagram of a handover process of a transition picture
  • The client first controls the target virtual character to switch from the first-person perspective to the third-person perspective; then the virtual camera's lens keeps moving away from the target virtual character; then the lens rotates around the target virtual character in a preset direction; at the end, the brightness of the transition picture is continuously reduced by a mask, and finally the match result is displayed.
  • the transition screen is generated by the client according to the content of the game screen at the end of the game match.
  • The client uses the game scene of the game screen at the end of the match as the game scene displayed in the transition screen; in addition, the client adds a third-person view of the virtual character to the game scene of the transition screen, so that the display content of the transition screen includes the game scene and the third-person-view virtual character located in it.
  • The client adjusts the game scene in the transition screen, and adjusts the position, posture, angle, and so on of the virtual character in the game scene according to the moving speed of the virtual character after the match ends and the lens changes of the virtual camera, generating the transition pictures frame by frame; the arranged transition pictures form the transition picture sequence.
  • the client displays the transition picture sequence to present the dynamic change process of the game scene and the dynamic change process of the virtual object in the game scene.
  • In a possible implementation, once the client logically determines that the game match has ended, the client slows its time lapse speed or sets it to zero, thereby controlling the moving speed of the virtual characters.
  • Optionally, the client's time lapse speed needs to be synchronized with the server to ensure the accuracy of the game data on the server side. On screen, the perspective of the virtual character switches from the first-person perspective to the third-person perspective while its movement slows or stops; after that, the lens of the virtual camera is continuously pulled back and rotated to present a dynamically changing transition picture.
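The "time lapse speed" idea corresponds to a local time scale that is decoupled from real (server-synced) time. A minimal sketch, assuming a hypothetical `LocalClock` abstraction rather than any real engine API:

```python
class LocalClock:
    """Client-side clock whose perceived time can be slowed or frozen
    after the match ends, while real time keeps running for server
    synchronization."""

    def __init__(self):
        self.scale = 1.0
        self.local_elapsed = 0.0  # drives animation and movement
        self.real_elapsed = 0.0   # stays in step with the server

    def set_scale(self, scale):
        self.scale = scale  # 0.0 freezes all local movement

    def tick(self, real_dt):
        self.real_elapsed += real_dt
        self.local_elapsed += real_dt * self.scale
```

Setting the scale to 0 freezes the characters in place (step 702 below uses exactly this value), while a scale between 0 and 1 produces the slow-motion deceleration effect.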
  • In summary, in the technical solution provided by this embodiment, the transition picture sequence is displayed after the game match ends, and the time during which it is displayed is used to obtain the match result, thereby avoiding the freeze-like phenomenon caused by switching directly to the result display screen, so that the match result is displayed more smoothly and the display effect is improved.
  • As shown in FIG. 7, the method may include the following steps:
  • Step 701: Determine whether the current game match has ended.
  • Step 702: When it is determined that the current game match has ended, set the time lapse speed to 0, and control the current client's virtual character in the game match to stop moving and remain still.
  • Step 703: Fade out the first-person perspective of the current client's virtual character.
  • Step 704: Switch the perspective of the current client's virtual character to the third-person perspective.
  • Step 705: Control the lens of the virtual camera to gradually move away from the current client's virtual character.
  • Step 706: Control the lens of the virtual camera to rotate around the current client's virtual character.
  • Step 707: Increase the blur of the transition picture and reduce its brightness until the screen darkens.
  • Step 708: Display the match result on the upper layer of the transition screen.
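Steps 701-708 form a linear sequence, which could be sketched as below; `client` stands for a hypothetical facade over the real game engine, and every method name is invented for illustration.

```python
def end_of_match_transition(client):
    """Run the eight-step transition (steps 701-708) in order."""
    if not client.match_ended():            # 701
        return
    client.set_time_scale(0.0)              # 702: freeze characters
    client.fade_out_first_person_view()     # 703
    client.switch_to_third_person_view()    # 704
    client.pull_camera_back()               # 705
    client.orbit_camera()                   # 706
    client.blur_and_darken_transition()     # 707
    client.show_match_result_overlay()      # 708
```

In practice steps 705-707 would be interleaved over many frames rather than called once, but the ordering is as shown.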
  • In other possible implementations, the client may need to display other content in addition to the match result.
  • After the game match ends, the client may also perform the following steps:
  • The target content includes, but is not limited to, at least one of: a segment of the game match process that satisfies a preset condition, a segment in which a virtual character used in the game match process satisfies a preset condition, and a usage strategy of a virtual character used in the game match process.
  • The foregoing preset conditions include that the performance score of the virtual character is higher than a preset score, and/or that the number of continuously killed enemy characters is higher than a preset number.
  • the client may obtain the performance score of the virtual character, and detect whether the obtained performance score is higher than the preset score.
  • the step of obtaining the performance score may include: the current client calculates the performance score at the local end, or obtains the performance score calculated by the server.
  • the client may acquire the number of the enemy characters continuously killed by the virtual character, and detect whether the acquired quantity is greater than the preset quantity.
  • the step of obtaining the number of kills may include: the current client counts the number of kills at the local end, or obtains the number of kills calculated by the server.
  • For example, the client may acquire triple-kill, quadruple-kill, or penta-kill segments from the game match process.
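One way to find such multi-kill segments is to scan a character's kill timestamps for runs of closely spaced kills. The patent does not define "continuous kills" precisely, so the window-based rule below is one plausible, invented reading, not the patented method:

```python
def kill_streaks(kill_times, window=10.0, min_streak=3):
    """Return (start_time, kill_count) for every run of at least
    `min_streak` kills whose consecutive gaps are within `window`
    seconds. `kill_times` must be sorted ascending."""
    streaks, start = [], 0
    for i in range(1, len(kill_times) + 1):
        # A run ends at the last kill or when the gap exceeds the window.
        if i == len(kill_times) or kill_times[i] - kill_times[i - 1] > window:
            if i - start >= min_streak:
                streaks.append((kill_times[start], i - start))
            start = i
    return streaks
```

A run of exactly three kills would correspond to a triple-kill segment, four to a quadruple kill, and so on; the client (or server) could then clip replay footage around each returned start time.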
  • the client may obtain a segment in which the virtual character of the current client meets the preset condition during the game matching process, and may also acquire a segment that meets the preset condition during the game matching process by the virtual character of the other client.
  • the client can acquire the highlight of the virtual character from the server.
  • the server can parse and extract the above-mentioned highlights according to the performance of each client in the game matching process.
  • the client can obtain the use strategy of the virtual character from the server.
  • The usage strategy of a virtual character stored in the server can be preset by the designer or uploaded by other users.
  • the target content is displayed on the upper layer of the transition screen.
  • the client may perform the following steps:
  • the virtual character included in the character display screen is a target virtual character controlled by the client during the game match process.
  • Optionally, the virtual characters included in the character display screen may also include the target virtual character controlled by the client during the game match together with an enemy virtual character of another client, presenting a picture of the two facing off (PK).
  • Control commands are used to trigger the adjustment of the pose of the virtual character.
  • the virtual character may include different poses such as standing, squatting, jumping, and the like.
  • the client can display a number of alternative gestures from which the user selects the gesture to be adjusted.
  • For example, the client displays the character display screen after displaying the match result, and the user can adjust the posture of the virtual character; the user then triggers a capture instruction, and the client captures the character display screen, as shown in FIG. 9.
  • Optionally, the client can also provide functions for saving, sharing, and so on of the captured character image.
  • FIG. 10 is a block diagram of a device for displaying a game match result provided by an embodiment of the present application.
  • The apparatus has the function of implementing the above method examples; the function may be implemented by hardware, or by hardware executing corresponding software.
  • the apparatus may include a screen display module 1010, a screen switching module 1020, a result obtaining module 1030, and a result display module 1040.
  • the screen display module 1010 is configured to display the game screen in a first person perspective with the currently controlled target virtual character in the process of the game match.
  • the screen switching module 1020 is configured to switch, when the game match ends, a game screen at the end of the game match to a transition picture sequence in which the target virtual character is in a third person perspective; wherein The transition picture sequence includes at least two frame transition pictures, and the first frame transition picture in the transition picture sequence has the same game scene as the game picture at the end of the game match.
  • the result obtaining module 1030 is configured to obtain a game result of the game match in the process of displaying the transition picture sequence.
  • the result display module 1040 is configured to display the match result.
  • the transition picture sequence is displayed after the game match ends, and the display of the transition picture sequence is used to obtain the match result, which avoids the stutter-like phenomenon produced by switching directly to the result display screen, makes the display of the match result smoother, and improves the display effect.
  • the screen switching module 1020 is further configured to:
  • the target virtual character is controlled to perform deceleration or acceleration movement in the game scene according to a preset acceleration.
  • the apparatus further includes:
  • the instruction acquisition module 1050 is configured to acquire, in the process of displaying the transition picture sequence, a skill release instruction corresponding to the target virtual character, where the skill release instruction is used to trigger the target virtual character release skill;
  • the skill release module 1060 is configured to control the target virtual character to release the skill according to a preset skill release speed, where the preset skill release speed is less than the skill release speed of the target virtual character in the game match process.
  • the apparatus further includes:
  • the information receiving module 1070 is configured to receive first setting information, where the first setting information is used to set that the skill of the target virtual character is allowed to be triggered after the game matching ends;
  • the setting module 1080 is configured to set the skill of the target virtual character to be triggered after the game match ends according to the first setting information.
  • the apparatus further includes:
  • a lens control module 1090, configured to control the lens of the virtual camera corresponding to the target virtual character to gradually move away from the target virtual character during display of the transition picture sequence; and/or, during display of the transition picture sequence, control the lens of the virtual camera corresponding to the target virtual character to rotate around the target virtual character.
  • the result display module 1040 is configured to:
  • adjusting display parameters of the transition picture, the display parameters including blur and/or brightness;
  • displaying the transition picture according to the adjusted display parameters, and displaying the match result on the upper layer of the transition picture.
  • the apparatus further includes:
  • a character display module 1091 configured to display a character display screen, where the character display screen includes the target virtual character
  • the posture adjustment module 1092 is configured to adjust a posture of the target virtual character according to a control instruction corresponding to the target virtual character;
  • the screen capture module 1093 is configured to intercept the character display screen when acquiring an intercept instruction corresponding to the character display screen.
  • FIG. 12 is a structural block diagram of a terminal 1200 according to an embodiment of the present application.
  • the terminal 1200 can be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • Terminal 1200 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, and the like.
  • the terminal 1200 can be implemented as the collection device 12 or the playback device 13 in the above embodiment.
  • the terminal 1200 includes a processor 1201 and a memory 1202.
  • Processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array).
  • the processor 1201 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 1201 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display.
  • the processor 1201 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
  • Memory 1202 can include one or more computer readable storage media, which can be non-transitory.
  • the memory 1202 may also include high speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices, flash memory storage devices.
  • the non-transitory computer readable storage medium in memory 1202 is for storing at least one instruction for execution by processor 1201 to implement the methods provided by the method embodiments of the present application.
  • the terminal 1200 optionally further includes: a peripheral device interface 1203 and at least one peripheral device.
  • the processor 1201, the memory 1202, and the peripheral device interface 1203 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1203 via a bus, signal line or circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1204, a touch display screen 1205, a camera 1206, an audio circuit 1207, a positioning component 1208, and a power source 1209.
  • the peripheral device interface 1203 can be used to connect at least one peripheral device associated with an I/O (Input/Output) to the processor 1201 and the memory 1202.
  • in some embodiments, the processor 1201, the memory 1202, and the peripheral device interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the RF circuit 1204 is configured to receive and transmit an RF (Radio Frequency) signal, also referred to as an electromagnetic signal.
  • Radio frequency circuit 1204 communicates with the communication network and other communication devices via electromagnetic signals.
  • the RF circuit 1204 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 1204 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 1204 can communicate with other terminals via at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, the World Wide Web, a metropolitan area network, an intranet, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
  • the radio frequency circuit 1204 may further include an NFC (Near Field Communication) related circuit, which is not limited in this application.
  • the display screen 1205 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • when the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to collect touch signals on or above its surface.
  • the touch signal can be input to the processor 1201 as a control signal for processing.
  • the display screen 1205 can also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • in some embodiments, there may be one display screen 1205, disposed on the front panel of the terminal 1200; in other embodiments, there may be at least two display screens 1205, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display screen 1205 may be a flexible display screen disposed on a curved or folded surface of the terminal 1200. The display screen 1205 may even be set to a non-rectangular irregular shape, that is, a shaped screen.
  • the display screen 1205 can be prepared by using an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
  • Camera component 1206 is used to capture images or video.
  • camera assembly 1206 includes a front camera and a rear camera.
  • the front camera is placed on the front panel of the terminal, and the rear camera is placed on the back of the terminal.
  • in some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement the background blur function by fusing the main camera with the depth-of-field camera, and panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera with the wide-angle camera, or other fused shooting functions.
  • camera assembly 1206 can also include a flash.
  • the flash can be a monochrome temperature flash or a two-color temperature flash.
  • the two-color temperature flash is a combination of a warm flash and a cool flash that can be used for light compensation at different color temperatures.
  • the audio circuit 1207 can include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals that are input to the processor 1201 for processing, or input to the radio frequency circuit 1204 for voice communication.
  • the microphones may be multiple, and are respectively disposed at different parts of the terminal 1200.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is then used to convert electrical signals from the processor 1201 or the RF circuit 1204 into sound waves.
  • the speaker can be a conventional film speaker or a piezoelectric ceramic speaker.
  • audio circuit 1207 can also include a headphone jack.
  • the positioning component 1208 is configured to locate the current geographic location of the terminal 1200 to implement navigation or LBS (Location Based Service).
  • the positioning component 1208 can be a positioning component based on a US-based GPS (Global Positioning System), a Chinese Beidou system, or a Russian Galileo system.
  • a power supply 1209 is used to power various components in the terminal 1200.
  • the power source 1209 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • a wired rechargeable battery is a battery charged through a wired line; a wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • terminal 1200 also includes one or more sensors 1210.
  • the one or more sensors 1210 include, but are not limited to, an acceleration sensor 1211, a gyro sensor 1212, a pressure sensor 1213, a fingerprint sensor 1214, an optical sensor 1215, and a proximity sensor 1216.
  • the acceleration sensor 1211 can detect the magnitude of the acceleration on the three coordinate axes of the coordinate system established by the terminal 1200.
  • the acceleration sensor 1211 can be used to detect components of gravity acceleration on three coordinate axes.
  • the processor 1201 can control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 1211.
  • the acceleration sensor 1211 can also be used for the acquisition of game or user motion data.
  • the gyro sensor 1212 can detect the body direction and rotation angle of the terminal 1200, and can cooperate with the acceleration sensor 1211 to collect the user's 3D actions on the terminal 1200. Based on the data collected by the gyro sensor 1212, the processor 1201 can implement functions such as motion sensing (such as changing the UI according to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1213 may be disposed at a side border of the terminal 1200 and/or a lower layer of the touch display screen 1205.
  • when the pressure sensor 1213 is disposed on the side frame of the terminal 1200, it can detect the user's grip signal on the terminal 1200, and the processor 1201 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1213.
  • when the pressure sensor 1213 is disposed at the lower layer of the touch display screen 1205, the processor 1201 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 1205.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 1214 is configured to collect the user's fingerprint, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1201 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like.
  • the fingerprint sensor 1214 can be disposed on the front, back, or side of the terminal 1200. When a physical button or vendor logo is set on the terminal 1200, the fingerprint sensor 1214 can be integrated with the physical button or vendor logo.
  • Optical sensor 1215 is used to collect ambient light intensity.
  • the processor 1201 can control the display brightness of the touch display screen 1205 based on the ambient light intensity acquired by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1205 is raised; when the ambient light intensity is low, the display brightness of the touch display screen 1205 is lowered.
  • the processor 1201 can also dynamically adjust the shooting parameters of the camera assembly 1206 based on the ambient light intensity acquired by the optical sensor 1215.
  • the proximity sensor 1216, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1200. The proximity sensor 1216 is used to collect the distance between the user and the front of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front of the terminal 1200 is gradually decreasing, the processor 1201 controls the touch display screen 1205 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1216 detects that the distance is gradually increasing, the processor 1201 controls the touch display screen 1205 to switch from the off-screen state to the bright-screen state.
  • FIG. 12 does not constitute a limitation to the terminal 1200, and may include more or less components than those illustrated, or some components may be combined, or different component arrangements may be employed.
  • a terminal comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set or a set of instructions.
  • the at least one instruction, the at least one program, the code set, or the set of instructions are configured to be executed by one or more processors to implement the methods described above.
  • a computer readable storage medium having stored therein at least one instruction, at least one program, a code set or a set of instructions, the at least one instruction, the at least one program
  • the set of codes or the set of instructions implements the above method when executed by a processor of the terminal.
  • the computer-readable storage medium described above may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a plurality as referred to herein means two or more.
  • "and/or” describing the association relationship of the associated objects, indicating that there may be three relationships, for example, A and/or B, which may indicate that there are three cases where A exists separately, A and B exist at the same time, and B exists separately.
  • the character "/" generally indicates that the contextual object is an "or" relationship.

Abstract

A method, apparatus, and terminal for displaying the result of a game match. The method includes: during a game match, displaying a game screen with the currently controlled target virtual character in a first-person perspective; when the game match ends, switching the game screen at the end of the match to a transition frame sequence in which the target virtual character is in a third-person perspective, where the first transition frame in the sequence has the same game scene as the game screen at the end of the match; obtaining the result of the game match while the transition frame sequence is displayed; and displaying the match result. By displaying a transition frame sequence after the game match ends and using its display period to obtain the match result, the method avoids the stutter-like phenomenon produced by switching directly to a result display screen, making the display of the match result smoother and improving the display effect.

Description

Method, apparatus, and terminal for displaying the result of a game match
This application claims priority to Chinese Patent Application No. 201711268205.8, entitled "Method, apparatus, and terminal for displaying the result of a game match" and filed on December 5, 2017, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the technical field of human-computer interaction, and in particular to a method, apparatus, and terminal for displaying the result of a game match.
Background
Users usually install clients of game applications on terminals such as mobile phones and tablet computers for entertainment.
After a game match ends, the game client computes the result of the match according to the specifics of the match, and then presents the result to the user. In the related art, when the match ends, the game client switches directly from the game screen to a result display screen, and loads and displays the match result in the result display screen once the result is obtained.
Because settling the match result takes a certain amount of time, during which the result display screen contains no content, a stutter-like phenomenon occurs and the display effect is poor.
Summary
Embodiments of this application provide a method, apparatus, and terminal for displaying the result of a game match, so as to improve the display effect of game match results. The technical solutions are as follows:
In one aspect, an embodiment of this application provides a method for displaying the result of a game match, the method being performed by a terminal and including:
during a game match, displaying a game screen with the currently controlled target virtual character in a first-person perspective;
when the game match ends, switching the game screen at the end of the match to a transition frame sequence in which the target virtual character is in a third-person perspective, where the transition frame sequence includes at least two transition frames, and the first transition frame in the sequence has the same game scene as the game screen at the end of the match;
obtaining the result of the game match while the transition frame sequence is displayed; and
displaying the match result.
In another aspect, an embodiment of this application provides an apparatus for displaying the result of a game match, applied in a terminal and including:
a screen display module, configured to display, during a game match, a game screen with the currently controlled target virtual character in a first-person perspective;
a screen switching module, configured to switch, when the game match ends, the game screen at the end of the match to a transition frame sequence in which the target virtual character is in a third-person perspective, where the transition frame sequence includes at least two transition frames, and the first transition frame in the sequence has the same game scene as the game screen at the end of the match;
a result obtaining module, configured to obtain the result of the game match while the transition frame sequence is displayed; and
a result display module, configured to display the match result.
In still another aspect, an embodiment of this application provides a terminal including a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the above method for displaying the result of a game match.
In yet another aspect, an embodiment of this application provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the above method for displaying the result of a game match.
In the technical solutions provided by the embodiments of this application, a transition frame sequence is displayed after the game match ends, and the match result is obtained while the transition frame sequence is displayed, which avoids the stutter-like phenomenon produced by switching directly to a result display screen, makes the display of the match result smoother, and improves the display effect.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of this application;
FIG. 2 is a flowchart of a method for displaying the result of a game match provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a virtual character decelerating, provided by an embodiment of this application;
FIG. 4 is a schematic diagram of a perspective switch provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the switching process of transition frames provided by an embodiment of this application;
FIG. 6 is a schematic diagram of the principle of the technical solution provided by an embodiment of this application;
FIG. 7 is a flowchart of a method for displaying the result of a game match provided by another embodiment of this application;
FIG. 8 is a schematic diagram of a character display screen provided by an embodiment of this application;
FIG. 9 is a schematic diagram of capturing a character display screen provided by an embodiment of this application;
FIG. 10 is a block diagram of an apparatus for displaying the result of a game match provided by an embodiment of this application;
FIG. 11 is a block diagram of an apparatus for displaying the result of a game match provided by another embodiment of this application;
FIG. 12 is a structural block diagram of a terminal provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are further described in detail below with reference to the accompanying drawings.
For ease of description, the terms involved in the embodiments of this application are briefly introduced first.
FPS (First-person shooting game): a game in which shooting contests are conducted from the player's subjective point of view. The player no longer manipulates an on-screen virtual character from outside, but experiences the whole match as if personally present.
Virtual camera: a functional module that, by imitating a real-world camera, shoots the in-game scene and presents that scene to the player.
First-person perspective: a perspective in which the player controls a virtual character from the player's own subjective point of view. During the game, the player cannot see the embodied virtual character in the game screen, or can only see part of its body, together with the scene the character is in.
Third-person perspective: a perspective in which the player controls a virtual character from an outsider's point of view. During the game, the player can see the embodied virtual character in the game screen, together with the scene the character is in.
Referring to FIG. 1, which shows a schematic diagram of an implementation environment provided by an embodiment of this application, the implementation environment may include a terminal 110 and a server 120.
The terminal 110 refers to an electronic device such as a mobile phone, a tablet computer, a PC (Personal Computer), an e-reader, or a multimedia playback device. A game client, such as an FPS client, may be installed in the terminal 110, or a client of another application with a game function may be installed in the terminal 110.
The terminal 110 has a communication connection with the server 120 through a wired or wireless network.
The server 120 may be one server, a server cluster composed of several servers, or a cloud computing service center. The server 120 refers to a server that provides a background service for the client installed in the terminal 110.
In the method provided by the embodiments of this application, each step may be performed by the terminal 110, for example by a client installed and running in the terminal 110, such as a game client. In the following method embodiments, for ease of description, the steps are described as being performed by the client, but this is not limiting.
Referring to FIG. 2, which shows a flowchart of a method for displaying the result of a game match provided by an embodiment of this application. The method may be applied to the implementation environment shown in FIG. 1 and may include the following steps:
Step 201: during a game match, display a game screen with the currently controlled target virtual character in a first-person perspective.
A game match refers to a period of game time spent to achieve a certain game objective. The objective differs from game to game, for example passing a level, killing enemies, toppling the enemy base, or killing a target number of monsters, which is not limited in the embodiments of this application. Taking an FPS game as an example, the objective of a match may be killing a target number of enemy characters, occupying the enemy base, and the like. A virtual character, also called a game character or game figure, is the object the player embodies and controls during the game.
In one match, the number of virtual characters may be one or more. When there is one virtual character in the match, it is usually a single-player game, and the virtual character is the one controlled by the current client. When there are multiple virtual characters, it may be a single-player game in which the current client corresponds to multiple different virtual characters and the player can switch the character he or she embodies and controls during the match; or it may be a multiplayer game in which the multiple virtual characters correspond to multiple different clients, the player of each client embodies and controls one or more virtual characters, and the characters controlled by different players may belong to the same camp or to different camps. During a match, a virtual character can move in the game scene, for example walk, run, and jump, and can change among different poses.
During the game match, the client displays the game screen with the currently controlled target virtual character in a first-person perspective. For example, the client may be an FPS client. The target virtual character is located in the game scene provided by the match, that is, the virtual scene created during the match for the virtual characters to compete in, such as a virtual house, a virtual island, or a virtual map. The game screen displayed by the client contains the virtual scene provided by the match and, optionally, one or more virtual characters located in that scene. If the currently controlled target virtual character is in a first-person perspective, the game screen displayed by the client does not include the target virtual character, or includes only part of its body, such as an arm.
Step 202: when the game match ends, switch the game screen at the end of the match to a transition frame sequence in which the target virtual character is in a third-person perspective.
In the embodiments of this application, the display does not switch directly from the game screen at the end of the match to the result display screen, but to a transition frame sequence. The transition frame sequence is usually a sequence composed of multiple transition frames. The first transition frame in the sequence has the same game scene as the game screen at the end of the match. In one example, the game scene shown throughout the transition frame sequence is the same as that of the game screen at the end of the match; that is, after the match ends, the game scene at the end of the match is kept and displayed in the transition frame sequence.
In addition, while displaying the transition frame sequence, the client may control the target virtual character to remain still in the game scene; that is, the target virtual character stops moving after the match ends and stays still, its position relative to the game scene no longer changing.
While displaying the transition frame sequence, the client may also control the target virtual character to move in the game scene.
In one example, while displaying the transition frame sequence, the client controls the target virtual character to move in the game scene at a preset speed. Optionally, the preset movement speed is lower than the movement speed of the target virtual character at the end of the match, and is not 0. That is, the target virtual character moves at a slow rate after the match ends.
In another example, while displaying the transition frame sequence, the client controls the target virtual character to decelerate or accelerate in the game scene according to a preset acceleration. If the preset acceleration is less than 0, the target virtual character gradually slows down after the match ends until it stops moving and stays still. Of course, in actual implementation the preset acceleration may also be greater than or equal to 0, which is not limited in the embodiments of this application.
For example, referring to FIG. 3, suppose the preset acceleration is -1 m/s^2 and, at the end of the match, the target virtual character is moving in direction 31 at 10 m/s. Then, in the transition frame sequence displayed after the match ends, the target virtual character keeps decelerating in direction 31 with an initial speed of 10 m/s and an acceleration of -1 m/s^2.
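The decelerating movement above (initial speed 10 m/s, constant acceleration -1 m/s²) is ordinary constant-acceleration kinematics. A minimal sketch of the per-frame update, with a clamp at zero so the character halts rather than reversing direction — the clamp and the function names are illustrative, not from the patent:

```python
def decelerate(speed, acceleration, dt):
    """Advance the character's speed by one frame of constant acceleration.

    Clamps at zero so a negative (decelerating) acceleration brings the
    character to a halt instead of reversing its direction of movement.
    """
    return max(0.0, speed + acceleration * dt)

def distance_until_stop(speed, acceleration):
    """Distance covered before a decelerating character stops: v^2 / (2|a|)."""
    if acceleration >= 0:
        raise ValueError("expects a negative (decelerating) acceleration")
    return speed * speed / (2.0 * -acceleration)
```

With the figures from FIG. 3, the character comes to rest after 10 s, having travelled 50 m in direction 31.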
Optionally, if at the end of the match the target virtual character currently controlled by the client is in a first-person perspective, then after the match ends the client switches the character's perspective from first-person to third-person, and displays the transition frame sequence with the target virtual character in a third-person perspective. In this way, the target virtual character is displayed in full in the transition frame sequence. Note that, with the target virtual character in a third-person perspective, the transition frames may show all of the character's body or only part of it, but more of the body than is shown in the first-person perspective. In addition, if the target virtual character currently controlled by the client is already in a third-person perspective at the end of the match, no perspective switch is needed.
For example, at the end of the match, only the hands of the target virtual character currently controlled by the client are shown in the game screen, while parts such as the head and body are not; after the match ends, the target virtual character is displayed in full in the transition frames, including the character's entire body. The switch may be performed by fading out and in, by rotating out and in, or in other ways, which is not limited in the embodiments of this application. For example, referring to FIG. 4, which illustrates the switching of some transition frames when the perspective of the target virtual character currently controlled by the client is switched from first-person to third-person.
Optionally, while displaying the transition frame sequence, the client controls the lens of the virtual camera corresponding to the target virtual character to gradually move away from the target virtual character, and/or controls the lens of the virtual camera corresponding to the target virtual character to rotate around the target virtual character.
In one example, the client controls the lens of the virtual camera corresponding to the target virtual character to gradually move away from the target virtual character; when the lens touches an obstacle while moving away, the client controls the lens to rotate around the target virtual character so as to avoid the obstacle. The client then continues to control the lens to gradually move away from the target virtual character. In this way, the authenticity of the content displayed in the transition frame sequence is ensured, and camera clipping is avoided.
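The pull-back-then-orbit behaviour can be sketched as a per-frame camera update. Here `blocked` stands in for the engine's obstacle test (typically a raycast); the function names and speeds are assumptions for illustration, not the patent's implementation:

```python
import math

def update_camera(distance, angle, retreat_speed, orbit_speed, dt, blocked):
    """One frame of transition-camera motion around the target character.

    The lens retreats from the character while the path is clear; if the
    next position would hit an obstacle, it orbits the character instead,
    and the retreat resumes on a later frame once the path is clear again.
    """
    next_distance = distance + retreat_speed * dt
    if blocked(next_distance, angle):
        # Hold the current distance and rotate around the character.
        return distance, (angle + orbit_speed * dt) % (2 * math.pi)
    return next_distance, angle
```

Calling this once per frame with the engine's real obstacle predicate reproduces the retreat/orbit alternation described above.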
Step 203: obtain the result of the game match while displaying the transition frame sequence.
The client uses the period during which the transition frame sequence is displayed to obtain the result of the match. Optionally, for a standalone game, the client may settle the match result locally; for an online game, the client may send a result obtaining request to the server to request the match result from the server, and the server, upon receiving the request, settles the result of the match and sends it to the client.
Optionally, in the embodiments of this application, while the transition frame sequence is displayed, the user may also control the target virtual character to release a skill. Specifically, while displaying the transition frame sequence, the client obtains a skill release instruction corresponding to the target virtual character, the instruction being used to trigger the target virtual character to release a skill, and controls the target virtual character to release the skill at a preset skill release speed. Optionally, the preset skill release speed is lower than the skill release speed of the target virtual character during the match.
Of course, in other possible implementations, the client may also by default not respond to skill release instructions after the match ends, that is, no longer allow the user to trigger the target virtual character to release skills. Alternatively, whether skills can be triggered after the match ends may be configured by the user. In one example, the client receives first setting information used to set that the skills of the target virtual character are allowed to be triggered after the match ends, and sets the skills of the target virtual character to be allowed to be triggered after the match ends according to the first setting information. Or, the client receives second setting information used to set that the skills of the target virtual character are not allowed to be triggered after the match ends, and sets the skills of the target virtual character to not be allowed to be triggered after the match ends according to the second setting information.
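The post-match skill behaviour described above — full speed in-match, slowed release afterwards, or refusal when the second setting applies — reduces to a small dispatch. The 0.25 slow-down factor is a made-up example; the patent only requires the post-match speed to be lower than the in-match speed:

```python
def effective_skill_speed(base_speed, in_match, allowed_after_match=True,
                          post_match_factor=0.25):
    """Speed at which a skill release instruction is honoured, or None.

    In-match releases run at full speed. After the match, release is
    refused (None) if the user's setting forbids it, and otherwise runs
    at a reduced, preset speed.
    """
    if in_match:
        return base_speed
    if not allowed_after_match:
        return None
    return base_speed * post_match_factor
```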
Step 204: display the match result.
After obtaining the match result, the client displays it. Optionally, the client displays the match result on top of the transition frame, or displays the match result within the transition frame.
In a possible implementation, before displaying the match result, the client adjusts display parameters of the transition frame, the display parameters including blur and/or brightness; for example, the client gradually increases the blur of the transition frame and gradually decreases its brightness. The client then displays the transition frame according to the adjusted display parameters, and displays the match result on top of the transition frame.
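The gradual blur-up and dim-down before the result panel is overlaid can be expressed as a single normalized ramp. The linear shape and the parameter names are assumptions; the patent only says blur rises and brightness falls:

```python
def fade_params(elapsed, duration, max_blur=1.0, min_brightness=0.0):
    """Blur and brightness of the transition frame `elapsed` seconds after
    the fade starts. Values are held once `duration` is reached, at which
    point the match result is drawn on top of the transition frame."""
    progress = min(max(elapsed / duration, 0.0), 1.0)
    blur = max_blur * progress
    brightness = 1.0 - (1.0 - min_brightness) * progress
    return blur, brightness
```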
Referring to FIG. 5, which shows a schematic diagram of the transition frame switching process: after the match ends, the client first switches the target virtual character from a first-person to a third-person perspective, then the lens of the virtual camera keeps moving away from the target virtual character, after which the lens rotates around the target virtual character in a preset direction, and finally the brightness of the transition frame is continuously lowered through a mask until the match result is displayed.
Optionally, the transition frames are generated by the client from the content of the game screen at the end of the match. Optionally, the client uses the game scene of the game screen at the end of the match as the game scene displayed in the transition frames; in addition, the client adds a third-person-perspective virtual character to that scene, so that the displayed content of a transition frame contains the game scene and the third-person-perspective virtual character located in it. Then, according to the virtual character's movement speed after the match ends and the lens transformation of the virtual camera, the client adjusts the game scene in the transition frames and the character's position, pose, angle, and the like in the scene, generating frame-by-frame transition frames that form a transition frame sequence. By displaying this sequence, the client presents the dynamic change of the game scene and of the virtual object within it.
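The frame-generation loop just described — reuse the end-of-match scene and re-pose the third-person character frame by frame — can be sketched as follows, with `pose_at` a hypothetical callback returning the character's state (position, pose, angle) for frame i:

```python
def build_transition_sequence(end_scene, character, frame_count, pose_at):
    """Build the transition frame sequence from the last in-match frame.

    Every frame keeps the game scene from the end of the match; only the
    third-person character state changes from frame to frame.
    """
    if frame_count < 2:
        raise ValueError("a transition sequence has at least two frames")
    return [{"scene": end_scene, "character": pose_at(character, i)}
            for i in range(frame_count)]
```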
Referring to FIG. 6: the client determines in program logic that the match has ended, and the client's time flow speed slows down or becomes zero, achieving the effect of controlling the virtual character's movement speed. In addition, for an online game, the client's time flow speed also needs to be synchronized with the server to ensure the accuracy of the server-side game data. In terms of on-screen presentation, the virtual character's perspective is switched from first-person to third-person, and at the same time its movement speed slows down or becomes zero. After that, the lens of the virtual camera is continuously pulled back and rotated to present the effect of the transition frames' displayed content changing dynamically.
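The "time flow speed slows or becomes zero" logic from FIG. 6 is commonly implemented as a time-scale multiplier on the frame delta. A minimal, hypothetical sketch — a scale of 0 freezes character movement while camera code driven by the unscaled delta keeps animating:

```python
def scaled_delta(dt, match_over, post_match_time_scale=0.0):
    """Frame delta used for character movement: unscaled during the
    match, multiplied by the post-match time scale after it ends."""
    return dt * (post_match_time_scale if match_over else 1.0)
```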
In summary, in the technical solutions provided by the embodiments of this application, a transition frame sequence is displayed after the game match ends, and the match result is obtained while the transition frame sequence is displayed, which avoids the stutter-like phenomenon produced by switching directly to a result display screen, makes the display of the match result smoother, and improves the display effect.
In one application scenario of the above embodiment, suppose the method is used in an FPS game client. As shown in FIG. 7, the method may include the following steps:
Step 701: determine whether the current game match has ended.
Step 702: when it is determined that the current match has ended, set the time flow speed to 0, and control the virtual character of the current client in the match to stop moving and remain still.
Step 703: control the perspective of the current client's virtual character to fade out of the first-person perspective.
Step 704: switch the perspective of the current client's virtual character to the third-person perspective.
Step 705: control the lens of the virtual camera to gradually move away from the current client's virtual character.
Step 706: control the lens of the virtual camera to rotate around the current client's virtual character.
Step 707: increase the blur of the transition frame and lower its brightness until it turns black.
Step 708: display the match result on top of the transition frame.
A further point for the above embodiments: after the match ends, the client may need to display other content besides the match result. To improve display efficiency, after the match ends the client may also perform the following steps:
First, obtain target content.
The target content includes, but is not limited to, at least one of: clips of the match that meet a preset condition, clips in which the virtual character used in the match meets a preset condition, and a usage guide for the virtual character used in the match.
The preset condition includes the virtual character's performance score being higher than a preset score, and/or the number of enemy characters killed consecutively being higher than a preset number.
When the preset condition includes the performance score of the virtual character being higher than a preset score, the client may obtain the virtual character's performance score and check whether the obtained score is higher than the preset score. Obtaining the performance score may include: the current client computing the score locally, or obtaining a score computed by the server.
When the preset condition includes the number of consecutively killed enemy characters being higher than a preset number, the client may obtain the number of enemy characters the virtual character killed consecutively and check whether the obtained number exceeds the preset number. Obtaining the kill count may include: the current client counting the kills locally, or obtaining the count computed by the server.
When the target content includes clips of the match that meet a preset condition, for example a triple kill, quadra kill, or penta kill, the client may obtain the corresponding triple-, quadra-, or penta-kill clip of the match. In actual implementation, the client may obtain clips in which the current client's virtual character meets the preset condition, or clips in which the virtual characters of other clients meet the preset condition.
When the target content includes clips in which the virtual character used in the match meets a preset condition, the client may obtain highlight clips of the virtual character from the server, where the server may parse and extract the highlight clips according to each client's performance during the match.
When the target content includes a usage guide for the virtual character used in the match, the client may obtain the guide for the virtual character from the server, where the guide stored in the server may be preset by designers or uploaded by other users.
Second, display the target content after displaying the match result.
For example, after displaying the match result on top of the transition frame, the client displays the target content on top of the transition frame.
Optionally, after displaying the match result, the client may further perform the following steps:
1. Display a character display screen, the character display screen containing the virtual character.
Optionally, the virtual character contained in the character display screen is the target virtual character controlled by the client during the match. Alternatively, it may include both the target virtual character controlled by the client during the match and an enemy virtual character of another client, presenting a screen of the two facing off.
2. Adjust the virtual character's pose according to a control instruction corresponding to the virtual character.
The control instruction is used to trigger adjustment of the virtual character's pose. Optionally, the virtual character may have different poses such as standing, crouching, and jumping. The client may display several candidate poses from which the user selects the pose to adjust to.
3. When a capture instruction corresponding to the character display screen is obtained, capture the character display screen.
As shown in FIG. 8, the client displays the character display screen after displaying the match result, and the user can adjust the virtual character's pose. The user then triggers the capture instruction, and the client captures the character display screen, as shown in FIG. 9. The client may also provide functions such as saving and sharing the captured character display screen.
The following are apparatus embodiments of this application, which can be used to perform the method embodiments of this application. For details not disclosed in the apparatus embodiments, refer to the method embodiments of this application.
Referring to FIG. 10, which shows a block diagram of an apparatus for displaying the result of a game match provided by an embodiment of this application. The apparatus has the function of implementing the above method examples; the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may include a screen display module 1010, a screen switching module 1020, a result obtaining module 1030, and a result display module 1040.
The screen display module 1010 is configured to display, during a game match, a game screen with the currently controlled target virtual character in a first-person perspective.
The screen switching module 1020 is configured to switch, when the game match ends, the game screen at the end of the match to a transition frame sequence in which the target virtual character is in a third-person perspective, where the transition frame sequence includes at least two transition frames, and the first transition frame in the sequence has the same game scene as the game screen at the end of the match.
The result obtaining module 1030 is configured to obtain the result of the game match while the transition frame sequence is displayed.
The result display module 1040 is configured to display the match result.
In summary, in the technical solutions provided by the embodiments of this application, a transition frame sequence is displayed after the game match ends, and the match result is obtained while the transition frame sequence is displayed, which avoids the stutter-like phenomenon produced by switching directly to a result display screen, makes the display of the match result smoother, and improves the display effect.
Optionally, the screen switching module 1020 is further configured to:
while displaying the transition frame sequence, control the target virtual character to remain still in the game scene;
or,
while displaying the transition frame sequence, control the target virtual character to move in the game scene at a preset speed, the preset movement speed not being 0;
or,
while displaying the transition frame sequence, control the target virtual character to decelerate or accelerate in the game scene according to a preset acceleration.
Optionally, as shown in FIG. 11, the apparatus further includes:
an instruction obtaining module 1050, configured to obtain, while the transition frame sequence is displayed, a skill release instruction corresponding to the target virtual character, the skill release instruction being used to trigger the target virtual character to release a skill; and
a skill release module 1060, configured to control the target virtual character to release the skill at a preset skill release speed, the preset skill release speed being lower than the skill release speed of the target virtual character during the match.
Optionally, as shown in FIG. 11, the apparatus further includes:
an information receiving module 1070, configured to receive first setting information, the first setting information being used to set that the skills of the target virtual character are allowed to be triggered after the match ends; and
a setting module 1080, configured to set, according to the first setting information, the skills of the target virtual character to be allowed to be triggered after the match ends.
Optionally, as shown in FIG. 11, the apparatus further includes:
a lens control module 1090, configured to control, while the transition frame sequence is displayed, the lens of the virtual camera corresponding to the target virtual character to gradually move away from the target virtual character; and/or control, while the transition frame sequence is displayed, the lens of the virtual camera corresponding to the target virtual character to rotate around the target virtual character.
Optionally, the result display module 1040 is configured to:
adjust display parameters of the transition frame, the display parameters including blur and/or brightness;
display the transition frame according to the adjusted display parameters; and
display the match result on top of the transition frame.
Optionally, as shown in FIG. 11, the apparatus further includes:
a character display module 1091, configured to display a character display screen, the character display screen containing the target virtual character;
a pose adjustment module 1092, configured to adjust the pose of the target virtual character according to a control instruction corresponding to the target virtual character; and
a screen capture module 1093, configured to capture the character display screen when a capture instruction corresponding to the character display screen is obtained.
It should be noted that, when the apparatus provided by the above embodiments implements its functions, the division into the above functional modules is used only as an example; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same conception; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
请参考图12,其示出了本申请一个实施例提供的终端1200的结构框图。该终端1200可以是:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。终端1200还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。该终端1200可以实现成为上述实施例中的采集端设备12或者播放端设备13。
通常,终端1200包括有:处理器1201和存储器1202。
处理器1201可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1201可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA (Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1201也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1201可以在集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1201还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1202可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1202还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1202中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1201所执行以实现本申请中方法实施例提供的方法。
在一些实施例中,终端1200还可选包括有:外围设备接口1203和至少一个外围设备。处理器1201、存储器1202和外围设备接口1203之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1203相连。具体地,外围设备包括:射频电路1204、触摸显示屏1205、摄像头1206、音频电路1207、定位组件1208和电源1209中的至少一种。
外围设备接口1203可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1201和存储器1202。在一些实施例中,处理器1201、存储器1202和外围设备接口1203被集成在同一芯片或电路板上;在一些其他实施例中,处理器1201、存储器1202和外围设备接口1203中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
射频电路1204用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1204通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1204将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路1204包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路1204可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:万维网、城域网、内联网、各代 移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路1204还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本申请对此不加以限定。
显示屏1205用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏1205是触摸显示屏时,显示屏1205还具有采集在显示屏1205的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1201进行处理。此时,显示屏1205还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏1205可以为一个,设置终端1200的前面板;在另一些实施例中,显示屏1205可以为至少两个,分别设置在终端1200的不同表面或呈折叠设计;在再一些实施例中,显示屏1205可以是柔性显示屏,设置在终端1200的弯曲表面上或折叠面上。甚至,显示屏1205还可以设置成非矩形的不规则图形,也即异形屏。显示屏1205可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件1206用于采集图像或视频。可选地,摄像头组件1206包括前置摄像头和后置摄像头。通常,前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件1206还可以包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可以用于不同色温下的光线补偿。
音频电路1207可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器1201进行处理,或者输入至射频电路1204以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在终端1200的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器1201或射频电路1204的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频 电路1207还可以包括耳机插孔。
定位组件1208用于定位终端1200的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。定位组件1208可以是基于美国的GPS(Global Positioning System,全球定位系统)、中国的北斗系统或俄罗斯的伽利略系统的定位组件。
电源1209用于为终端1200中的各个组件进行供电。电源1209可以是交流电、直流电、一次性电池或可充电电池。当电源1209包括可充电电池时,该可充电电池可以是有线充电电池或无线充电电池。有线充电电池是通过有线线路充电的电池,无线充电电池是通过无线线圈充电的电池。该可充电电池还可以用于支持快充技术。
在一些实施例中,终端1200还包括有一个或多个传感器1210。该一个或多个传感器1210包括但不限于:加速度传感器1211、陀螺仪传感器1212、压力传感器1213、指纹传感器1214、光学传感器1215以及接近传感器1216。
加速度传感器1211可以检测以终端1200建立的坐标系的三个坐标轴上的加速度大小。比如,加速度传感器1211可以用于检测重力加速度在三个坐标轴上的分量。处理器1201可以根据加速度传感器1211采集的重力加速度信号,控制触摸显示屏1205以横向视图或纵向视图进行用户界面的显示。加速度传感器1211还可以用于游戏或者用户的运动数据的采集。
陀螺仪传感器1212可以检测终端1200的机体方向及转动角度,陀螺仪传感器1212可以与加速度传感器1211协同采集用户对终端1200的3D动作。处理器1201根据陀螺仪传感器1212采集的数据,可以实现如下功能:动作感应(比如根据用户的倾斜操作来改变UI)、拍摄时的图像稳定、游戏控制以及惯性导航。
压力传感器1213可以设置在终端1200的侧边框和/或触摸显示屏1205的下层。当压力传感器1213设置在终端1200的侧边框时,可以检测用户对终端1200的握持信号,由处理器1201根据压力传感器1213采集的握持信号进行左右手识别或快捷操作。当压力传感器1213设置在触摸显示屏1205的下层时,由处理器1201根据用户对触摸显示屏1205的压力操作,实现对UI界面上的可操作性控件进行控制。可操作性控件包括按钮控件、滚动条控件、图标控件、菜单控件中的至少一种。
指纹传感器1214用于采集用户的指纹,由处理器1201根据指纹传感器 1214采集到的指纹识别用户的身份,或者,由指纹传感器1214根据采集到的指纹识别用户的身份。在识别出用户的身份为可信身份时,由处理器1201授权该用户执行相关的敏感操作,该敏感操作包括解锁屏幕、查看加密信息、下载软件、支付及更改设置等。指纹传感器1214可以被设置终端1200的正面、背面或侧面。当终端1200上设置有物理按键或厂商Logo时,指纹传感器1214可以与物理按键或厂商Logo集成在一起。
光学传感器1215用于采集环境光强度。在一个实施例中,处理器1201可以根据光学传感器1215采集的环境光强度,控制触摸显示屏1205的显示亮度。具体地,当环境光强度较高时,调高触摸显示屏1205的显示亮度;当环境光强度较低时,调低触摸显示屏1205的显示亮度。在另一个实施例中,处理器1201还可以根据光学传感器1215采集的环境光强度,动态调整摄像头组件1206的拍摄参数。
接近传感器1216,也称距离传感器,通常设置在终端1200的前面板。接近传感器1216用于采集用户与终端1200的正面之间的距离。在一个实施例中,当接近传感器1216检测到用户与终端1200的正面之间的距离逐渐变小时,由处理器1201控制触摸显示屏1205从亮屏状态切换为息屏状态;当接近传感器1216检测到用户与终端1200的正面之间的距离逐渐变大时,由处理器1201控制触摸显示屏1205从息屏状态切换为亮屏状态。
本领域技术人员可以理解,图12中示出的结构并不构成对终端1200的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在示例中实施例中,还提供了一种终端,所述终端包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集。所述至少一条指令、至少一段程序、代码集或指令集经配置以由一个或者一个以上处理器执行,以实现上述方法。
In an exemplary embodiment, a computer-readable storage medium is further provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which, when executed by a processor of a terminal, implements the above method.
Optionally, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is further provided, which, when executed, implements the above method.
It should be understood that "a plurality of" herein means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
The foregoing are merely exemplary embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (10)

  1. A method for displaying a result of a game match, the method being performed by a terminal and comprising:
    during the game match, displaying game frames from a first-person perspective of a currently controlled target virtual character;
    when the game match ends, switching from the game frame displayed at the end of the match to a transition frame sequence showing the target virtual character from a third-person perspective, wherein the transition frame sequence comprises at least two transition frames, and the first transition frame in the sequence has the same game scene as the game frame at the end of the match;
    while the transition frame sequence is displayed, obtaining the result of the game match; and
    displaying the match result.
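By way of illustration only (this sketch is not part of the claims), the perspective switch in claim 1 can be modelled as generating a third-person transition sequence whose first frame reuses the scene of the final in-match frame. The frame representation and field names are assumptions:

```python
def transition_sequence(last_frame_scene: str, n_frames: int = 3) -> list:
    """Build the transition frame sequence shown after the match ends.

    Every frame uses the third-person perspective, and the first frame
    shares the game scene of the last first-person frame, so the switch
    is visually continuous. The claim requires at least two frames."""
    assert n_frames >= 2
    return [{"view": "third_person", "scene": last_frame_scene, "index": i}
            for i in range(n_frames)]
```

In the claimed flow, the match result would be fetched from the server while these frames are on screen, masking the network round trip.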
  2. The method according to claim 1, further comprising:
    while the transition frame sequence is displayed, controlling the target virtual character to remain stationary in the game scene;
    or,
    while the transition frame sequence is displayed, controlling the target virtual character to move in the game scene at a preset speed, the preset speed being non-zero;
    or,
    while the transition frame sequence is displayed, controlling the target virtual character to decelerate or accelerate in the game scene at a preset acceleration.
  3. The method according to claim 1, further comprising:
    while the transition frame sequence is displayed, obtaining a skill release instruction corresponding to the target virtual character, the skill release instruction being used to trigger the target virtual character to release a skill; and
    controlling the target virtual character to release the skill at a preset skill release speed, the preset skill release speed being lower than the skill release speed of the target virtual character during the game match.
  4. The method according to claim 3, further comprising, before the controlling the target virtual character to release the skill at the preset skill release speed:
    receiving first setting information, the first setting information being used to specify that skills of the target virtual character are allowed to be triggered after the game match ends; and
    setting, according to the first setting information, the skills of the target virtual character to be allowed to be triggered after the game match ends.
  5. The method according to any one of claims 1 to 4, further comprising:
    while the transition frame sequence is displayed, controlling the lens of the virtual camera corresponding to the target virtual character to gradually move away from the target virtual character;
    and/or,
    while the transition frame sequence is displayed, controlling the lens of the virtual camera corresponding to the target virtual character to rotate around the target virtual character.
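The camera behaviour of claim 5 — the lens receding from the target virtual character while orbiting it — can be sketched as a simple interpolation over the normalized duration of the transition sequence. All numeric ranges and the function signature are illustrative assumptions, not part of the claims:

```python
def camera_pose(t: float, start_dist: float = 2.0, end_dist: float = 8.0,
                orbit_degrees: float = 180.0) -> tuple:
    """Return (distance, azimuth) of the virtual camera relative to the
    character at normalized transition time t in [0, 1].

    Distance grows linearly (the lens pulls back) while the azimuth
    sweeps around the character, combining the two behaviours the claim
    allows individually or together."""
    dist = start_dist + t * (end_dist - start_dist)
    azimuth = t * orbit_degrees
    return dist, azimuth
```

An engine would typically ease these curves rather than interpolate linearly; the linear form is chosen only for clarity.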
  6. The method according to any one of claims 1 to 4, wherein the displaying the match result comprises:
    adjusting display parameters of the transition frame, the display parameters comprising blur and/or brightness;
    displaying the transition frame according to the adjusted display parameters; and
    displaying the match result over the transition frame.
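As a non-limiting sketch of claim 6, the result overlay can be modelled as deriving a blurred, dimmed copy of the transition frame and drawing the result text on top, leaving the original frame untouched. Parameter names and the 0–1 value ranges are assumptions:

```python
def overlay_result(frame: dict, result: str, blur: float = 0.6,
                   brightness: float = 0.5) -> dict:
    """Return a new frame with adjusted display parameters and the match
    result composited on top.

    dict(frame, ...) copies the input, so the source transition frame is
    not mutated and can still be redrawn unmodified."""
    adjusted = dict(frame, blur=blur, brightness=brightness)
    adjusted["overlay"] = result
    return adjusted
```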
  7. The method according to any one of claims 1 to 4, further comprising, after the displaying the match result:
    displaying a character showcase frame, the character showcase frame containing the target virtual character;
    adjusting a posture of the target virtual character according to a control instruction corresponding to the target virtual character; and
    when a capture instruction corresponding to the character showcase frame is obtained, capturing the character showcase frame.
  8. An apparatus for displaying a result of a game match, applied in a terminal, the apparatus comprising:
    a frame display module, configured to display, during the game match, game frames from a first-person perspective of a currently controlled target virtual character;
    a frame switching module, configured to switch, when the game match ends, from the game frame displayed at the end of the match to a transition frame sequence showing the target virtual character from a third-person perspective, wherein the transition frame sequence comprises at least two transition frames, and the first transition frame in the sequence has the same game scene as the game frame at the end of the match;
    a result obtaining module, configured to obtain the result of the game match while the transition frame sequence is displayed; and
    a result display module, configured to display the match result.
  9. A terminal, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for displaying a result of a game match according to any one of claims 1 to 7.
  10. A computer-readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for displaying a result of a game match according to any one of claims 1 to 7.
PCT/CN2018/114824 2017-12-05 2018-11-09 Method, apparatus and terminal for displaying game match results WO2019109778A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711268205.8 2017-12-05
CN201711268205.8A CN107982918B (zh) 2017-12-05 2017-12-05 Method, apparatus and terminal for displaying game match results

Publications (1)

Publication Number Publication Date
WO2019109778A1 (zh) 2019-06-13




Also Published As

Publication number Publication date
CN107982918A (zh) 2018-05-04
CN107982918B (zh) 2020-03-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18885124; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18885124; Country of ref document: EP; Kind code of ref document: A1)