CN114344895A - Picture display method, device, equipment and computer readable storage medium

Info

Publication number
CN114344895A
Authority
CN
China
Prior art keywords
state
behavior state
camera
bitmap data
game
Prior art date
Legal status
Pending
Application number
CN202210004962.9A
Other languages
Chinese (zh)
Inventor
张弘毅
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210004962.9A
Publication of CN114344895A

Classifications

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a picture display method, apparatus, device, and computer-readable storage medium, belonging to the field of Internet technology. The method comprises the following steps: displaying a first game picture, wherein the first game picture comprises a virtual object in a first behavior state and at least one state control, the first game picture is acquired by a virtual camera, and the configuration information when the virtual camera acquires the first game picture is determined based on a first camera parameter set corresponding to the first behavior state; and, in response to a selected instruction of a first state control in the at least one state control, displaying a second game picture, wherein the second game picture is acquired by the virtual camera, the configuration information when the virtual camera acquires the second game picture is determined based on a second camera parameter set, the second camera parameter set is obtained based on the first behavior state and a second behavior state corresponding to the first state control, and the behavior state of the virtual object in the second game picture is determined based on the second behavior state. The method reduces the development cost of camera parameter sets.

Description

Picture display method, device, equipment and computer readable storage medium
Technical Field
The embodiments of the present application relate to the field of Internet technology, and in particular to a picture display method, apparatus, device, and computer-readable storage medium.
Background
A virtual camera system is important to a game: it interacts with the game characters and the game world at all times, and it is used to present the game with excellent artistic expression or with the intended gameplay. Therefore, a picture display method is needed to better control the virtual camera system, so that the pictures acquired by the virtual camera are better displayed.
In the related art, different game modes are configured with different sets of camera parameters. When the virtual object is located in the first game mode, the configuration information of the virtual camera is adjusted to be the camera parameter set corresponding to the first game mode, and the picture acquired by the virtual camera according to the camera parameter set corresponding to the first game mode is displayed.
However, in the above picture display method, whenever a new game mode appears, a corresponding camera parameter set must be configured for it, which makes the development process of camera parameter sets cumbersome and the development efficiency low. Moreover, when a new game mode appears before a camera parameter set has been developed for it, the picture display effect in the new game mode is poor.
Disclosure of Invention
The embodiments of the present application provide a picture display method, apparatus, device, and computer-readable storage medium, which can solve the problems in the related art of a cumbersome development process and low development efficiency for camera parameter sets. The technical solution is as follows:
in one aspect, an embodiment of the present application provides a picture display method, where the method includes:
displaying a first game picture, wherein the first game picture comprises a virtual object in a first behavior state and at least one state control, one state control corresponds to one behavior state, the first game picture is acquired by a virtual camera, and configuration information when the virtual camera acquires the first game picture is determined based on a first camera parameter set corresponding to the first behavior state;
and in response to a selected instruction of a first state control in the at least one state control, displaying a second game picture, wherein the second game picture is acquired by the virtual camera, configuration information of the virtual camera in acquiring the second game picture is determined based on a second camera parameter set, the second camera parameter set is acquired based on the first behavior state and a second behavior state corresponding to the first state control, and the behavior state of the virtual object in the second game picture is determined based on the second behavior state.
In another aspect, an embodiment of the present application provides a picture display method, the method comprising:
displaying a first game picture, wherein the first game picture comprises a virtual object in a first behavior state, the first game picture is acquired by a virtual camera based on first configuration information, and the first configuration information is determined based on a first camera parameter set corresponding to the first behavior state;
and in response to the virtual object being switched from the first behavior state to a third behavior state, displaying a second game picture, wherein the second game picture is acquired by the virtual camera based on second configuration information, and the second configuration information is determined based on a second camera parameter set corresponding to the third behavior state.
In another aspect, an embodiment of the present application provides a picture display apparatus, comprising:
a display module, configured to display a first game picture, wherein the first game picture comprises a virtual object in a first behavior state and at least one state control, one state control corresponds to one behavior state, the first game picture is acquired by a virtual camera, and configuration information when the virtual camera acquires the first game picture is determined based on a first camera parameter set corresponding to the first behavior state;
the display module is further configured to display a second game picture in response to a selected instruction of a first state control in the at least one state control, where the second game picture is acquired by the virtual camera, configuration information of the virtual camera during acquisition of the second game picture is determined based on a second camera parameter set, the second camera parameter set is acquired based on the first behavior state and a second behavior state corresponding to the first state control, and a behavior state of the virtual object in the second game picture is determined based on the second behavior state.
In another aspect, an embodiment of the present application provides a picture display apparatus, comprising:
a display module, configured to display a first game picture, wherein the first game picture comprises a virtual object in a first behavior state, the first game picture is acquired by a virtual camera based on first configuration information, and the first configuration information is determined based on a first camera parameter set corresponding to the first behavior state;
the display module is further configured to display a second game picture in response to the virtual object being switched from the first behavior state to a third behavior state, where the second game picture is acquired by the virtual camera based on second configuration information, and the second configuration information is determined based on a second camera parameter set corresponding to the third behavior state.
In another aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory, the memory storing at least one program code, and the at least one program code being loaded and executed by the processor, so that the electronic device implements any one of the above picture display methods.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored; the at least one program code is loaded and executed by a processor, so that a computer implements any one of the above picture display methods.
In another aspect, a computer program or computer program product is provided, in which at least one computer instruction is stored; the at least one computer instruction is loaded and executed by a processor, so that a computer implements any one of the above picture display methods.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
according to the technical scheme provided by the embodiment of the application, the camera parameter set and the behavior state of the virtual object are associated together, when the behavior state of the virtual object changes, the second camera parameter set is obtained according to the first behavior state before the change and the second behavior state corresponding to the first state control, the configuration information of the virtual camera is further adjusted according to the second camera parameter set, and the virtual camera after the configuration information is adjusted is adopted to collect the game picture, so that the display effect of the game picture is better. Moreover, the camera parameter set and the game mode are decoupled, so that even if a new game mode appears, the corresponding camera parameter set does not need to be developed for the new game mode, the development cost of the camera parameter set can be reduced, and the game manufacturing efficiency can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a picture display method according to an embodiment of the present application;
Fig. 2 is a flowchart of a picture display method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the relationship between camera rotation speed and joystick push amount in the rotation mechanism according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the process of changing the elevation angle of the camera according to an embodiment of the present application;
Fig. 5 is a graph of the relationship between the camera field of view and the elevation angle provided by an embodiment of the present application;
Fig. 6 is a graph of the relationship between rocker arm offset and elevation angle provided by an embodiment of the present application;
Fig. 7 is a schematic display diagram of the blocking mechanism provided in an embodiment of the present application;
Fig. 8 is a flowchart of acquiring a third game picture provided in an embodiment of the present application;
Fig. 9 is a schematic diagram of a second game picture according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a second game picture provided in an embodiment of the present application;
Fig. 11 is a schematic diagram of a second game picture according to an embodiment of the present application;
Fig. 12 is a schematic diagram of a second game picture provided in an embodiment of the present application;
Fig. 13 is a graph of the change in rocker arm offset before and after adding the mix-in time and mix-out time provided by an embodiment of the present application;
Fig. 14 is a flowchart of a picture display method according to an embodiment of the present application;
Fig. 15 is a schematic structural diagram of a picture display apparatus according to an embodiment of the present application;
Fig. 16 is a schematic structural diagram of a picture display apparatus according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 18 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
For ease of understanding, a number of terms referred to in the embodiments of the present application are explained first:
Virtual scene: a scene that an application program provides (or displays) when it runs on the terminal device. A virtual scene is a scene created for virtual objects to move in, and it may be a two-dimensional virtual scene, a 2.5-dimensional virtual scene, a three-dimensional virtual scene, or the like. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional scene, or a purely fictional scene. Illustratively, the virtual scene in the embodiments of the present application is a three-dimensional virtual scene.
Virtual object: a movable object in a virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, or the like. The interactive object (the user) can control the virtual object through a peripheral component or by tapping a touch screen. Each virtual object has its own shape and volume in the virtual scene and occupies a portion of the space in the virtual scene. Illustratively, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on skeletal animation technology.
3C: a basic module of a game system consisting of three basic elements: Character, Camera (the virtual camera), and Control. The three cooperate with and flexibly interact with each other so that the system runs smoothly.
Camera: here, specifically, the virtual camera in a game; a movable game object that displays pictures to the player in real time, helping the player better observe and experience the game world.
Bitmap: a data organization structure in computers. Different logic states in a system are represented by bits, which can greatly reduce memory overhead.
Rocker arm (swing arm): the camera is usually bound to an observation target, which in a third-person game is generally the virtual object. The line connecting the observation target and the camera is called the rocker arm; it can rotate and stretch, and the player can adjust the camera's position by controlling the rocker arm to obtain a better viewing angle.
POV (Point of View, camera viewpoint): the position of the virtual camera, represented by three-dimensional coordinates; it is continuously transformed to meet the design and experience requirements of the game.
FOV (Field of View, camera field of view): an angle value; the larger the angle, the larger the player's field of view and the more game elements can be observed.
Fig. 1 is a schematic diagram of an implementation environment of a picture display method according to an embodiment of the present application. As shown in fig. 1, the implementation environment includes: a terminal device 101 and a server 102.
An application program capable of providing a virtual scene is installed and runs in the terminal device 101. The terminal device 101 is configured to execute the picture display method provided in the embodiments of the present application.
The type of the application program capable of providing a virtual scene is not limited in the embodiments of the present application. Examples include game applications such as Third-Person Shooting (TPS) games, First-Person Shooting (FPS) games, Multiplayer Online Battle Arena (MOBA) games, multiplayer live games, and the like. In an exemplary embodiment, the game application involved in the embodiments of the present application is a game application based on frame synchronization; that is, the picture display method provided in the embodiments of the present application may be applied to frame-synchronization-based game applications.
Of course, the application capable of providing a virtual scene may be other types of applications besides game-like applications. For example, Virtual Reality (VR) type applications, Augmented Reality (AR) type applications, three-dimensional map programs, scene simulation programs, social type applications, interactive entertainment type applications, and the like.
The server 102 is configured to provide a background service for an application installed in the terminal device 101 and capable of providing a virtual scene. In one possible implementation, the server 102 undertakes primary computational work and the terminal device 101 undertakes secondary computational work. Alternatively, the server 102 undertakes the secondary computing work and the terminal device 101 undertakes the primary computing work. Or, the terminal device 101 and the server 102 perform cooperative computing by using a distributed computing architecture.
In one possible implementation, the terminal device 101 is any electronic product capable of human-computer interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or a handwriting device, for example, a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC, palmtop), a tablet computer, a smart car, a smart television, a smart speaker, and the like. The server 102 may be one server, a server cluster composed of multiple servers, or a cloud computing service center. The terminal device 101 establishes a communication connection with the server 102 through a wired or wireless network.
Those skilled in the art will appreciate that the above terminal device 101 and server 102 are only examples; other existing or future terminal devices or servers applicable to the present application are also included within its protection scope.
Based on the foregoing implementation environment, an embodiment of the present application provides a picture display method, which may be executed by the terminal device 101 in fig. 1. As shown in the flowchart of fig. 2, the method comprises the following steps:
in step 201, a first game screen is displayed, where the first game screen includes a virtual object in a first behavior state and at least one state control, one state control corresponds to one behavior state, the first game screen is captured by a virtual camera, and configuration information of the virtual camera during capturing the first game screen is determined based on a first camera parameter set corresponding to the first behavior state.
In the exemplary embodiment of the present application, a target game capable of providing a virtual scene is installed and executed in a terminal device, and the target game may be any type of game, which is not limited in the exemplary embodiment of the present application. Illustratively, the target game in the embodiment of the present application is a MOBA-type game.
A plurality of application programs are displayed on a display interface of the terminal device, and the types of each application program may be the same or different, which is not limited in this embodiment of the present application. When a user selects a target game in a plurality of displayed application programs, the terminal equipment receives a selection instruction aiming at the target game, runs the target game and displays a first page of the target game, wherein a game starting control is displayed in the first page. And responding to a selection instruction of a user for starting a game control, and displaying a first game picture of the target game, wherein a virtual object and at least one state control are displayed in the first game picture, and one state control corresponds to one behavior state. For example, the first game screen includes a first state control and a second state control, where the behavior state corresponding to the first state control is squatting, and the behavior state corresponding to the second state control is aiming. Of course, the number of the state controls displayed in the first game screen may be more or less, and this is not limited in this embodiment of the application. The behavior state of the virtual object in the first game screen is a first behavior state, and the first behavior state may be a single behavior state or a combined behavior state composed of at least two single behavior states, which is not limited in this embodiment of the present application. Illustratively, the first behavior state is aim; for another example, the first behavioral state is squat; for another example, the first behavior state is squat aiming.
Alternatively, the first game screen may be a game screen displayed when the user just clicks the start game control, and the behavior state of the virtual object in the first game screen is set by the developer of the game. For example, when the behavior state of the virtual object in the first game screen displayed when the user just clicks the start game control is standing, the first game screen is a game screen acquired by the virtual camera after the configuration information of the virtual camera is adjusted according to the camera parameter set determined by the bitmap data corresponding to standing.
Optionally, the first game screen may also be any screen of the user during the game. At this time, the behavior state of the virtual object in the first game frame is controlled by the user, and the display process of the first game frame is similar to the display process of the second game frame in step 202, which will not be described herein again.
Optionally, the first camera parameter set corresponding to the first behavior state may be acquired in either of the following two implementations.
Implementation one: one behavior state corresponds to one bitmap data, and one bitmap data corresponds to one camera parameter set. Since the behavior state of the virtual object is the first behavior state, the first bitmap data corresponding to the first behavior state is acquired, and the camera parameter set corresponding to the first bitmap data is taken as the first camera parameter set.
Wherein the bitmap data comprises a plurality of binary bits. Table 1 below shows the correspondence between behavior states and bitmap data provided in an embodiment of the present application, and Table 2 below shows the correspondence between bitmap data and camera parameter sets.
Table 1
Behavior state      Bitmap data
Squatting           0000010
Aiming              0000100
Squat-aiming        0000110
As shown in Table 1, when the behavior state is squatting, the corresponding bitmap data is 0000010; when the behavior state is aiming, the corresponding bitmap data is 0000100; and when the behavior state is squat-aiming, the corresponding bitmap data is 0000110.
Table 2
Bitmap data    Camera parameter set
0000010        Camera parameter set 1
0000100        Camera parameter set 2
0000110        Camera parameter set 3
As can be seen from Table 2 above, the camera parameter set corresponding to bitmap data 0000010 is camera parameter set 1, the camera parameter set corresponding to bitmap data 0000100 is camera parameter set 2, and the camera parameter set corresponding to bitmap data 0000110 is camera parameter set 3.
Illustratively, the first behavior state is squatting. As can be seen from Table 1 above, the bitmap data corresponding to squatting is 0000010, and as can be seen from Table 2 above, the camera parameter set corresponding to bitmap data 0000010 is camera parameter set 1.
In the second implementation, one behavior state corresponds to one bitmap data. Since the behavior state of the virtual object is the first behavior state, the first bitmap data corresponding to the first behavior state is acquired, and the state value corresponding to the first behavior state is obtained based on the first bitmap data. One state value corresponds to one camera parameter set, and the camera parameter set corresponding to the state value of the first behavior state is taken as the first camera parameter set.
Illustratively, the process of obtaining the state value corresponding to the first behavior state based on the first bitmap data comprises: converting the first bitmap data to obtain its corresponding value, and taking that value as the state value corresponding to the first behavior state.
Optionally, the bitmap data comprises a plurality of binary values. When the first bitmap data is converted, the first bitmap data may be converted from binary to decimal, or from binary to hexadecimal, or from binary to octal, which is not limited in the embodiment of the present application.
Table 3 below shows the correspondence between state values and camera parameter sets provided in an embodiment of the present application.
Table 3
State value    Camera parameter set
2              Camera parameter set 1
4              Camera parameter set 2
6              Camera parameter set 3
As can be seen from Table 3 above, when the state value is 2, the corresponding camera parameter set is camera parameter set 1; when the state value is 4, it is camera parameter set 2; and when the state value is 6, it is camera parameter set 3.
Illustratively, the first behavior state is squatting; as can be seen from Table 1 above, the bitmap data corresponding to squatting is 0000010. Converting the first bitmap data yields the value 2, which is the state value corresponding to the first behavior state. As can be seen from Table 3 above, the camera parameter set corresponding to state value 2 is camera parameter set 1, i.e., the first camera parameter set is camera parameter set 1.
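For illustration only, the following C++ sketch shows how both implementations reduce to a table lookup keyed by the bitmap data; the names (CameraParamSet, kParamsByBitmap, lookupByStateValue) are hypothetical and not part of the patent.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for a full camera parameter set.
struct CameraParamSet { std::string name; };

// Bitmap data from Table 1: squatting = 0000010, aiming = 0000100.
constexpr uint32_t kSquat = 0b0000010;
constexpr uint32_t kAim   = 0b0000100;

// Implementation one: bitmap data -> camera parameter set (Table 2).
const std::unordered_map<uint32_t, CameraParamSet> kParamsByBitmap = {
    {kSquat,        {"camera parameter set 1"}},
    {kAim,          {"camera parameter set 2"}},
    {kSquat | kAim, {"camera parameter set 3"}},  // squat-aiming
};

// Implementation two: convert the bitmap to its decimal state value first
// (Table 3). For an unsigned integer the bitmap already *is* that value
// (binary 0000010 -> decimal 2), so the lookup is keyed by 2, 4, 6, ...
const CameraParamSet& lookupByStateValue(
        uint32_t bitmap,
        const std::unordered_map<uint32_t, CameraParamSet>& paramsByValue) {
    const uint32_t stateValue = bitmap;  // conversion is a no-op in code
    return paramsByValue.at(stateValue);
}
```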
It should be noted that any implementation manner described above may be selected to acquire the first camera parameter set, which is not limited in the embodiment of the present application.
It should be noted that, when there is no first bitmap data corresponding to the first behavior state, a numerical value corresponding to the first behavior state is acquired, and based on the numerical value corresponding to the first behavior state, the first bitmap data corresponding to the first behavior state is acquired. The process is similar to the process of obtaining the second bitmap data corresponding to the second behavior state based on the value corresponding to the second behavior state in step 202, and is not described herein again.
In one possible implementation, the configuration information includes at least one of a mix-in time and a mix-out time, as well as a camera viewpoint, a swing arm offset, and a camera field of view. The camera viewpoint is a viewpoint relative to the virtual object.
Optionally, a reference object and other controls may also be displayed in the first game screen, which is not limited in this embodiment of the application. The reference object may be an object of the same team as the virtual object, an object of a team that is opposite to the virtual object, or a neutral object, which is not limited in the embodiment of the present application. Neutral objects refer to objects that do not have team attributes, such as monsters in the game.
In step 202, in response to a selected instruction of a first state control in at least one state control, a second game picture is displayed, the second game picture is acquired by a virtual camera, configuration information when the virtual camera acquires the second game picture is determined based on a second camera parameter set, the second camera parameter set is obtained based on a first behavior state and a second behavior state corresponding to the first state control, and a behavior state of a virtual object in the second game picture is determined based on the second behavior state.
In one possible implementation manner, before displaying the second game screen, the second camera parameter set is acquired, where the acquisition process of the second camera parameter set includes: and responding to a selected instruction of a first state control in the at least one state control, and acquiring a second camera parameter set based on a first behavior state of the virtual object in the first game picture and a second behavior state corresponding to the first state control.
In response to the user selecting a first state control in the at least one state control, the terminal device receives the selected instruction of the first state control and obtains second bitmap data corresponding to the second behavior state. It is then determined whether the second behavior state is included in the first behavior state. When the first behavior state includes the second behavior state, the second bitmap data is removed from the first bitmap data corresponding to the first behavior state to obtain third bitmap data, and the second camera parameter set is obtained based on the third bitmap data. In this case, the behavior state of the virtual object in the second game picture is the behavior state in the first behavior state other than the second behavior state.
Alternatively, in response to the second behavior state not being included in the first behavior state, the state relationship between the first behavior state and the second behavior state is determined, and the second camera parameter set is obtained based on the state relationship and the second bitmap data.
There are two implementation manners described below for obtaining the second bitmap data corresponding to the second behavior state.
In the first implementation, when the terminal device stores the bitmap data corresponding to the behavior states and the correspondence between behavior states and bitmap data, the second bitmap data corresponding to the second behavior state is obtained from that correspondence.
In the second implementation, when the terminal device does not store the bitmap data corresponding to the behavior states and the correspondence between behavior states and bitmap data, but stores the correspondence between behavior states and their corresponding values, the value corresponding to the second behavior state is obtained; this value is used to obtain the second bitmap data corresponding to the second behavior state. The intermediate bitmap data is then adjusted based on the value corresponding to the second behavior state to obtain the second bitmap data corresponding to the second behavior state.
The value corresponding to a behavior state may be the exponent associated with that behavior state (the power of two that positions its bit). The intermediate bitmap data is a piece of bitmap data set by the game's developer to help obtain the bitmap data corresponding to a behavior state; illustratively, the intermediate bitmap data is 0000001. It should be noted that the intermediate bitmap data may have more or fewer bits, but it is always bitmap data whose rightmost bit is 1 and whose other bits are 0. Adjusting the intermediate bitmap data based on the value corresponding to the second behavior state means performing a bit shift on the intermediate bitmap data, so that the second bitmap data corresponding to the second behavior state is obtained.
Optionally, adjusting the intermediate bitmap data based on the value corresponding to the second behavior state to obtain the second bitmap data comprises: shifting the intermediate bitmap data to the left by the number of bits given by the value corresponding to the second behavior state, to obtain the second bitmap data corresponding to the second behavior state.
Illustratively, the intermediate bitmap data is 0000001, the value corresponding to the second behavior state is 1, and the second bitmap data corresponding to the second behavior state is 0000010.
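A one-line sketch of this adjustment in C++ (the function name is hypothetical):

```cpp
#include <cstdint>

// Intermediate bitmap data: rightmost bit 1, all other bits 0 (0000001).
constexpr uint32_t kIntermediateBitmap = 0b0000001;

// Shift the intermediate bitmap left by the value b associated with the
// second behavior state; with b = 1 this yields 0000010, as in the example.
uint32_t secondBitmapData(uint32_t b) {
    return kIntermediateBitmap << b;
}
```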
Determining whether the second behavior state is included in the first behavior state includes: determining a data parameter based on the first bitmap data and the second bitmap data, the data parameter indicating whether the second behavior state is included in the first behavior state. In response to the data parameter being greater than a parameter threshold, it is determined that the first behavior state includes the second behavior state; in response to the data parameter not being greater than the parameter threshold, it is determined that the second behavior state is not included in the first behavior state. Optionally, the parameter threshold is set based on experience or adjusted according to the application scenario, which is not limited in the embodiments of the present application. Illustratively, the parameter threshold is 0.
In one possible implementation, determining the data parameter based on the first bitmap data and the second bitmap data includes: performing an AND operation on the first bitmap data and the second bitmap data to obtain fifth bitmap data, converting the fifth bitmap data to obtain its corresponding value, and taking that value as the data parameter.
The AND operation is a basic logical operation in computers, denoted by "&". The two bitmap data participating in the AND operation are combined bit by bit. The rules of the AND operation are: 0&0=0; 0&1=0; 1&0=0; 1&1=1. The process of converting the fifth bitmap data into its corresponding value is similar to converting the first bitmap data into its corresponding value in step 201, and is not repeated here.
Illustratively, the first bitmap data is 0000110, the second bitmap data is 0000010, and the parameter threshold is 0. Performing an AND operation on the first bitmap data and the second bitmap data yields fifth bitmap data 0000010; converting it yields the value 2, i.e., the data parameter is 2. Since 2 is greater than 0, it is determined that the first behavior state includes the second behavior state.
For another example, the first bitmap data is 0000010, the second bitmap data is 0000100, and the parameter threshold is 0. Performing an AND operation on the first bitmap data and the second bitmap data yields fifth bitmap data 0000000, which converts to the value 0, i.e., the data parameter is 0. Since 0 is not greater than 0, it is determined that the second behavior state is not included in the first behavior state.
In one possible implementation, the fifth bitmap data may be obtained by the following formula (1).
Fifth bitmap data = (C & (0000001 << b))    (1)
In formula (1), C is the first bitmap data, 0000001 is the intermediate bitmap data, << is the left-shift operator, and b is the value corresponding to the second behavior state.
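As a hedged sketch, formula (1) and the threshold comparison can be written as follows in C++; the function and parameter names are illustrative only.

```cpp
#include <cstdint>

// Formula (1): fifth bitmap data = (C & (0000001 << b)). The data parameter
// is the decimal value of the fifth bitmap data; if it exceeds the parameter
// threshold (0 in the examples), the first behavior state includes the second.
bool firstStateIncludesSecond(uint32_t c /* first bitmap data */,
                              uint32_t b /* value for the second state */) {
    const uint32_t fifthBitmapData = c & (0b0000001u << b);
    // e.g. 0000110 & 0000010 = 0000010 -> 2 > 0 -> included
    return fifthBitmapData > 0;
}
```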
Optionally, when the first behavior state includes the second behavior state, removing the second bitmap data from the first bitmap data corresponding to the first behavior state to obtain the third bitmap data comprises: performing a NOT (inversion) operation on the second bitmap data to obtain reference bitmap data, and performing an AND operation on the reference bitmap data and the first bitmap data to obtain the third bitmap data.
The NOT operation is a basic logical operation in computers, denoted by "~". It inverts the binary bits of the second bitmap data bitwise. The rules of the NOT operation are: ~0=1; ~1=0.
Illustratively, taking 0000110 as the first bitmap data and 0000010 as the second bitmap data, inverting the second bitmap data yields reference bitmap data 1111101. Performing an AND operation on the reference bitmap data and the first bitmap data yields third bitmap data 0000100.
In one possible implementation, the third bitmap data may be obtained by the following formula (2).
Third bitmap data = (C & ~(0000001 << b))    (2)
In formula (2), C is the first bitmap data, 0000001 is the intermediate bitmap data, << is the left-shift operator, b is the value corresponding to the second behavior state, and (0000001 << b) is the second bitmap data corresponding to the second behavior state.
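Mirrored as a C++ sketch (the function name is an assumption):

```cpp
#include <cstdint>

// Formula (2): third bitmap data = (C & ~(0000001 << b)). Inverting the
// second bitmap data gives the reference bitmap data; ANDing it with the
// first bitmap data clears the second state's bit, removing that state.
uint32_t removeSecondState(uint32_t c /* first bitmap data */,
                           uint32_t b /* value for the second state */) {
    return c & ~(0b0000001u << b);  // e.g. 0000110 & 1111101 = 0000100
}
```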
Optionally, the process of obtaining the second set of camera parameters based on the third bitmap data includes: the camera parameter set corresponding to the third bitmap data is taken as the second camera parameter set. Or, determining a state value corresponding to the third bitmap data, and using the camera parameter set corresponding to the state value corresponding to the third bitmap data as the second camera parameter set.
Optionally, the terminal device stores the state relationships between behavior states. Table 4 below shows the state relationships between behavior states provided in an embodiment of the present application.
Table 4
[Table 4 appears as an image in the original publication; for each pair of a first behavior state and a second behavior state, it gives the state relationship between them using the symbols explained below.]
The state relationships comprise coexistence relationships and interruption relationships. Coexistence relationships include immediate coexistence and delayed coexistence; interruption relationships include immediate interruption and delayed interruption. In Table 4, the behavior states along the horizontal direction represent the first behavior state, and those along the vertical direction represent the second behavior state. The symbol √ denotes that the state relationship between the two behavior states is an immediate coexistence relationship, × denotes an immediate interruption relationship, → denotes a delayed interruption relationship, and ≈ denotes a delayed coexistence relationship.
As can be seen from Table 4, when the first behavior state is standing and the second behavior state is standing, the state relationship between the two behavior states is an immediate coexistence relationship. When the first behavior state is standing and the second behavior state is squatting, the state relationship is an immediate interruption relationship. The state relationships for the other combinations of behavior states are shown in Table 4 and are not detailed here.
It should be noted that the number of behavior states may be larger or smaller; the number of behavior states is not limited in the embodiments of the present application. Illustratively, the behavior states also include interaction, shooting, close combat, and the like. Generally, there are 32 basic behavior states, from which 2^32 combined behavior states can be obtained. Each behavior state corresponds to one binary bitmap data, and the binary bitmap data corresponding to the behavior states all have the same number of bits. The binary bitmap data in the embodiments of the present application use 7 bits, but this is not intended to limit the number of bits; when there are 32 basic behavior states, the bitmap data corresponding to each behavior state has 32 bits.
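Sketched in C++ with invented bit positions, a 32-bit bitmap encodes every combination of the basic states:

```cpp
#include <cstdint>

// With 32 basic behavior states, each state occupies one bit of a 32-bit
// bitmap, so all 2^32 subsets of basic states have distinct encodings.
// The bit assignments below are invented for illustration.
enum BehaviorBits : uint32_t {
    kStandBit = 1u << 0,
    kSquatBit = 1u << 1,
    kAimBit   = 1u << 2,
    // ... up to 1u << 31 for the 32nd basic state
};
constexpr uint32_t kSquatAim = kSquatBit | kAimBit;  // a combined behavior state
```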
Optionally, in response to the second behavior state not being included in the first behavior state, the state relationship between the first behavior state and the second behavior state is determined according to Table 4 above.
The process of acquiring the second set of camera parameters based on the state relationship and the second bitmap data includes the following two cases.
In a first case, in response to that the state relationship is a coexistence relationship, adding second bitmap data on the basis of first bitmap data corresponding to the first behavior state to obtain fourth bitmap data, wherein the fourth bitmap data is used for indicating bitmap data when the first behavior state and the second behavior state exist at the same time, and the coexistence relationship is used for indicating that the first behavior state and the second behavior state exist at the same time. Based on the fourth bitmap data, a second set of camera parameters is obtained.
When the state relationship between the first behavior state and the second behavior state is a coexistence relationship, the behavior state in which the virtual object is located in the second game screen includes both the first behavior state and the second behavior state.
In a possible implementation manner, the process of adding the second bitmap data on the basis of the first bitmap data corresponding to the first behavior state to obtain the fourth bitmap data includes: and performing OR operation on the first bitmap data and the second bitmap data corresponding to the first behavior state to obtain fourth bitmap data.
The OR operation is a basic logical operation in computers, denoted by "|". The two bitmap data participating in the OR operation are combined bit by bit. The rules of the OR operation are: 0|0=0; 0|1=1; 1|0=1; 1|1=1.
Illustratively, the first bitmap data is 0000010 and the second bitmap data is 0000100; performing an OR operation on them yields fourth bitmap data 0000110.
Alternatively, the fourth bitmap data may be obtained by the following formula (3).
Fourth bitmap data = (C | (0000001 << b))    (3)
In formula (3), C is the first bitmap data, 0000001 is the intermediate bitmap data, << is the left-shift operator, b is the value corresponding to the second behavior state, and (0000001 << b) is the second bitmap data corresponding to the second behavior state.
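Mirrored as a C++ sketch (the function name is an assumption):

```cpp
#include <cstdint>

// Formula (3): fourth bitmap data = (C | (0000001 << b)). ORing the two
// bitmaps yields the bitmap of the combined behavior state when the state
// relationship is a coexistence relationship.
uint32_t combineStates(uint32_t c /* first bitmap data */,
                       uint32_t b /* value for the second state */) {
    return c | (0b0000001u << b);  // e.g. 0000010 | 0000100 = 0000110
}
```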
After obtaining the fourth bitmap data, based on the fourth bitmap data, obtaining the second set of camera parameters includes: the camera parameter set corresponding to the fourth bitmap data is taken as the second camera parameter set. Or, determining a state value corresponding to the fourth bitmap data, and using the camera parameter set corresponding to the state value corresponding to the fourth bitmap data as the second camera parameter set.
It should be noted that adding the second bitmap data to the first bitmap data to obtain the fourth bitmap data is a bit operation on the two bitmap data, which improves the efficiency of obtaining the fourth bitmap data.
In the second case, when the state relationship is an interruption relationship, the second camera parameter set is acquired based on the second bitmap data; the interruption relationship indicates that the first behavior state is interrupted by the second behavior state.
When the state relationship between the first behavior state and the second behavior state is the interruption relationship, the behavior state of the virtual object in the second game picture is the second behavior state.
The process of obtaining the second set of camera parameters based on the second bitmap data comprises: the camera parameter set corresponding to the second bitmap data is taken as the second camera parameter set. Or, determining a state value corresponding to the second bitmap data, and using the camera parameter set corresponding to the state value corresponding to the second bitmap data as the second camera parameter set.
In a possible implementation, when the state relationship between the first behavior state and the second behavior state is an immediate coexistence relationship or an immediate interruption relationship, the configuration information of the virtual camera is adjusted to the second camera parameter set immediately after the second camera parameter set is acquired; the virtual camera then acquires the second game picture, and the terminal device displays it.
When the state relationship is a delayed coexistence relationship, a first delay time corresponding to the delayed coexistence relationship is acquired; after the second camera parameter set is acquired, the configuration information of the virtual camera is adjusted to the second camera parameter set once the first delay time has elapsed, the virtual camera acquires the second game picture, and the terminal device displays it.
When the state relationship is a delayed interruption relationship, a second delay time corresponding to the delayed interruption relationship is acquired; after the second camera parameter set is acquired, the configuration information of the virtual camera is adjusted to the second camera parameter set once the second delay time has elapsed, the virtual camera acquires the second game picture, and the terminal device displays it.
The time lengths of the first delay time and the second delay time may be the same or different, and this is not limited in this embodiment of the application. Illustratively, the first delay time is 30 seconds and the second delay time is 20 seconds. For another example, the first delay time and the second delay time are both 10 seconds.
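A minimal sketch of this dispatch, assuming the four relationships are modeled as a C++ enum (all names are hypothetical):

```cpp
enum class StateRelation {
    ImmediateCoexist, DelayedCoexist, ImmediateInterrupt, DelayedInterrupt
};

// Returns how long to wait before adjusting the virtual camera's
// configuration to the second camera parameter set; 0 means immediately.
float delayBeforeApplying(StateRelation rel,
                          float firstDelay,    // e.g. 30 seconds
                          float secondDelay) { // e.g. 20 seconds
    switch (rel) {
        case StateRelation::ImmediateCoexist:
        case StateRelation::ImmediateInterrupt: return 0.0f;
        case StateRelation::DelayedCoexist:     return firstDelay;
        case StateRelation::DelayedInterrupt:   return secondDelay;
    }
    return 0.0f;  // unreachable; keeps all control paths returning a value
}
```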
Optionally, the configuration information includes at least one of a mix-in time and a mix-out time, as well as a camera viewpoint, a swing arm offset, and a camera field of view; the camera viewpoint is a viewpoint relative to the virtual object.
In one possible implementation, when the configuration information includes the mix-in time, the mix-out time, the camera viewpoint, the swing arm offset, and the camera field of view, adjusting the configuration information of the virtual camera to the second camera parameter set after it is acquired comprises: adjusting the camera viewpoint of the virtual camera to the camera viewpoint included in the second camera parameter set, the swing arm offset to the swing arm offset included in the set, the camera field of view to the camera field of view included in the set, the mix-in time to the mix-in time included in the set, and the mix-out time to the mix-out time included in the set.
When the second camera parameter set includes the camera viewpoint, because that camera viewpoint is relative to the virtual object, it may be adjusted to obtain an adjusted camera viewpoint that is relative to the game coordinate system; the camera viewpoint of the virtual camera is then set to the adjusted camera viewpoint.
The camera viewpoint included in the second camera parameter set is adjusted using a transformation matrix to obtain the adjusted camera viewpoint. The transformation matrix is a 4 × 4 matrix, or a matrix of another size, which is not limited in the embodiments of the present application.
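One way such a conversion is commonly done (a sketch under stated assumptions, not the patent's implementation) is to multiply the object-relative viewpoint, in homogeneous coordinates, by the object's 4 × 4 object-to-world matrix:

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major 4x4 matrix

// Converts a camera viewpoint relative to the virtual object into a viewpoint
// in the game (world) coordinate system using homogeneous coordinates.
std::array<float, 3> toWorldViewpoint(const Mat4& objectToWorld,
                                      const std::array<float, 3>& relativePov) {
    const std::array<float, 4> p = {relativePov[0], relativePov[1],
                                    relativePov[2], 1.0f};
    std::array<float, 3> world{};
    for (int row = 0; row < 3; ++row) {
        float acc = 0.0f;
        for (int col = 0; col < 4; ++col) {
            acc += objectToWorld[row][col] * p[col];
        }
        world[row] = acc;
    }
    return world;
}
```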
In one possible implementation, a camera parameter set corresponding to a game mechanism included in the game may be further obtained, the game mechanism including at least one of blocking, rotating, delaying, vibrating, and expanding, and one game mechanism corresponding to one camera parameter set. And responding to a selected instruction of a first state control in the at least one state control, displaying a third game picture, wherein the third game picture is acquired by a virtual camera, configuration information when the virtual camera acquires the third game picture is determined based on a synthetic camera parameter set, the synthetic camera parameter set is acquired based on the second camera parameter set and a camera parameter set corresponding to the game mechanism, and the behavior state of the virtual object in the third game picture is determined based on the second behavior state.
Optionally, the second camera parameter set and the camera parameter set corresponding to the game mechanism are synthesized to obtain a synthesized camera parameter set.
When the game mechanism is any one of rotation, delay, vibration, and extension, the camera parameter set corresponding to the game mechanism is set by the game's developer. When the game mechanism is blocking, the process of acquiring the corresponding camera parameter set comprises: acquiring the distance between the camera viewpoint of the virtual camera and the obstacle, and acquiring the camera parameter set corresponding to blocking based on that distance.
Fig. 3 is a schematic diagram of the relationship between camera rotation speed and joystick push amount in the rotation mechanism according to an embodiment of the present application; in fig. 3, the vertical axis is the camera rotation speed and the horizontal axis is the joystick push amount. The curve comprises four segments. The first segment is a dead-zone limit: when the push amount of the joystick is small, the virtual camera does not rotate, which prevents accidental operation by the user. In the second segment, the camera rotation speed is small and increases slowly, because the user is aiming down sights at this time and needs fine control. The third segment keeps a steady transition. In the fourth segment, the rotation speed changes sharply, allowing the user to change the viewing angle quickly to observe the environment.
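A rough C++ sketch of such a four-segment curve follows; every breakpoint and slope is an invented value, since the patent only describes the shape of the curve.

```cpp
// Maps joystick push amount in [0, 1] to camera rotation speed.
// Segment boundaries (0.1, 0.4, 0.7) and slopes are illustrative only.
float cameraRotationSpeed(float push) {
    if (push < 0.1f) return 0.0f;                         // dead zone: ignore small pushes
    if (push < 0.4f) return 10.0f * (push - 0.1f);        // slow growth for fine aiming
    if (push < 0.7f) return 3.0f + 20.0f * (push - 0.4f); // steady transition
    return 9.0f + 90.0f * (push - 0.7f);                  // steep growth for fast look-around
}
```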
Fig. 4 is a schematic diagram of the process of changing the elevation angle of the camera according to an embodiment of the present application. In fig. 4, the virtual camera adapts as the elevation angle changes, including field-of-view adaptation and viewing-distance adaptation. The angle between the virtual camera and the horizon is the camera's elevation angle, and the dashed line in fig. 4 is the swing arm offset of the virtual camera. Fig. 5 is a graph of the relationship between the camera field of view and the elevation angle provided by an embodiment of the present application: when the elevation angle is at its minimum, the virtual camera mainly observes the ground, its field of view is at its maximum, and the observation range is at its widest. As the elevation angle increases, the field of view keeps decreasing until the camera reaches the horizontal plane. After the field of view reaches its minimum, the virtual camera gradually turns toward the sky, and the field of view needs to increase again. Based on the relationship between the camera field of view and the elevation angle shown in fig. 5, the virtual camera motion trajectory in fig. 4 can be obtained. Fig. 6 is a graph of the relationship between the swing arm offset and the elevation angle provided by an embodiment of the present application. Since the swing arm offset is the length of the line connecting the virtual camera and the virtual object, it changes continuously as the elevation angle of the virtual camera changes; the relationship is shown in fig. 6 and is not repeated here. Based on the relationship between the swing arm offset and the elevation angle shown in fig. 6, the virtual camera motion trajectory in fig. 4 can likewise be obtained.
Fig. 7 is a schematic display diagram of the blocking mechanism according to an embodiment of the present application. In fig. 7, the dotted circle represents the position of the virtual camera before it moves. A collider (obstacle) lies between the dotted circle and the virtual object; therefore, the distance between the dotted circle and the collider is determined, the camera is moved forward by that distance, and its position is thus corrected to just in front of the collider (the solid circle in fig. 7), so that no collider lies between the virtual camera and the virtual object and the camera does not penetrate the collider. A virtual elastic rope, shown as the straight line in fig. 7, connects the virtual camera and the virtual object and assists the camera in moving smoothly.
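A C++ sketch of the position correction, under the stated assumption that a scene query has already returned the distance from the observation target to the collider along the swing arm (the struct and function names are hypothetical):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Pull the camera in front of the collider: shorten the swing arm from the
// observation target so its length equals the target-to-collider distance.
Vec3 correctedCameraPos(const Vec3& target, const Vec3& camera,
                        float colliderDistance) {
    const Vec3 arm = {camera.x - target.x, camera.y - target.y,
                      camera.z - target.z};
    const float len =
        std::sqrt(arm.x * arm.x + arm.y * arm.y + arm.z * arm.z);
    if (len <= colliderDistance) return camera;  // nothing blocks the view
    const float s = colliderDistance / len;      // shrink factor for the arm
    return {target.x + arm.x * s, target.y + arm.y * s, target.z + arm.z * s};
}
```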
The second camera parameter set is synthesized with the camera parameter set corresponding to the game mechanism to obtain a synthesized camera parameter set. The process of synthesizing the camera parameter sets comprises the following steps: adding the numerical value corresponding to first configuration information in the second camera parameter set and the numerical value corresponding to the first configuration information in the camera parameter set corresponding to the game mechanism to obtain a composite numerical value corresponding to the first configuration information, wherein the first configuration information is any one item of the configuration information of the virtual camera; and traversing each item of the configuration information of the virtual camera in this manner to obtain a composite numerical value corresponding to each item of configuration information, and obtaining the synthesized camera parameter set based on the composite numerical values.
Illustratively, the second camera parameter set includes a camera viewpoint (X1, Y1, Z1), a swing arm length L1, a camera field of view α1, a mix-in time t1, and a mix-out time T1. The camera parameter set corresponding to the game mechanism includes a camera viewpoint (X2, Y2, Z2), a swing arm length L2, a camera field of view α2, a mix-in time t2, and a mix-out time T2. The synthesized camera parameter set then includes a camera viewpoint (X1+X2, Y1+Y2, Z1+Z2), a swing arm length L1+L2, a camera field of view α1+α2, a mix-in time t1+t2, and a mix-out time T1+T2.
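A sketch of this field-wise synthesis, under the assumption that a camera parameter set is a simple record of the five configuration items named above (CameraParamSet is an illustrative structure, not the patent's data type):

```python
from dataclasses import dataclass

@dataclass
class CameraParamSet:
    viewpoint: tuple   # camera viewpoint (X, Y, Z) relative to the object
    swing_arm: float   # swing arm length/offset
    fov: float         # camera field of view
    mix_in: float      # mix-in time
    mix_out: float     # mix-out time

def synthesize(a: CameraParamSet, b: CameraParamSet) -> CameraParamSet:
    """Add the two parameter sets item by item, as in the worked example:
    every configuration item of the result is the sum of the corresponding
    items of the inputs."""
    return CameraParamSet(
        viewpoint=tuple(x + y for x, y in zip(a.viewpoint, b.viewpoint)),
        swing_arm=a.swing_arm + b.swing_arm,
        fov=a.fov + b.fov,
        mix_in=a.mix_in + b.mix_in,
        mix_out=a.mix_out + b.mix_out,
    )
```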
In a possible implementation manner, after the synthesized camera parameter set is acquired, the configuration information of the virtual camera is adjusted according to the synthesized camera parameter set, so that the virtual camera renders according to the adjusted configuration information and captures a game picture; the captured game picture is taken as a third game picture, and the third game picture is displayed.
Fig. 8 is a flowchart for acquiring a third game picture according to an embodiment of the present application. Fig. 8 includes two channels: the first channel is a data-driven module and the second channel is a logic-driven module. The first channel comprises loading configuration, constructing a mapping, detecting the virtual object, and state verification. Loading configuration sets corresponding bitmap data for each behavior state, and constructing a mapping builds a corresponding camera parameter set for each piece of bitmap data. Detecting the virtual object means detecting whether the behavior state of the virtual object changes. In response to a change in the behavior state of the virtual object, the behavior state before the change and the behavior state corresponding to the selected first state control are processed based on state verification, and a second camera parameter set is acquired. The second channel handles the game mechanisms, where a game mechanism comprises at least one of rotation, blocking, vibration, delay, and extension, and the second channel is used for acquiring the camera parameter set corresponding to the game mechanism. The camera parameter set corresponding to the game mechanism is synthesized with the second camera parameter set to obtain a synthesized camera parameter set. The camera viewpoint in the synthesized camera parameter set is adjusted to obtain an adjusted camera viewpoint, which is a camera viewpoint relative to the game world coordinates. A rendering instruction is then generated and sent to a rendering thread, and the rendering thread renders the picture based on the rendering instruction to obtain the third game picture. The rendering instruction includes the adjusted camera viewpoint as well as the camera field of view, swing arm offset, mix-in time, and mix-out time included in the synthesized camera parameter set.
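Reusing the CameraParamSet sketch above, the tail of the fig. 8 pipeline (synthesis, world-space conversion, and building the rendering instruction) can be sketched as follows; the dictionary layout of the instruction is an assumption, since the text only lists which fields it carries:

```python
def build_render_instruction(second_params: "CameraParamSet",
                             mechanism_params: "CameraParamSet",
                             object_world_pos: tuple) -> dict:
    """Synthesize the second camera parameter set with the game-mechanism
    set, convert the object-relative viewpoint into game-world coordinates,
    and pack the fields the rendering thread needs."""
    merged = synthesize(second_params, mechanism_params)
    adjusted_viewpoint = tuple(p + v for p, v in
                               zip(object_world_pos, merged.viewpoint))
    return {
        "camera_viewpoint": adjusted_viewpoint,  # relative to world coords
        "camera_fov": merged.fov,
        "swing_arm_offset": merged.swing_arm,
        "mix_in": merged.mix_in,
        "mix_out": merged.mix_out,
    }
```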
Fig. 9 to fig. 12 are schematic display diagrams of a second game screen according to an embodiment of the present application. In fig. 9, the behavior state of the virtual object is sprinting, and the second game screen shown in fig. 9 is the game screen captured by the virtual camera after the configuration information of the virtual camera is adjusted according to the camera parameter set corresponding to sprinting. The virtual camera state value corresponding to sprinting is 256, and the camera parameter set corresponding to the second game screen shown in fig. 9 is: swing arm offset 154.102, camera viewpoint X = -0.691, Y = -45.81, Z = -24.003, and camera field of view 90.694.

In fig. 10, the behavior state of the virtual object is sliding, and the second game screen shown in fig. 10 is the game screen captured by the virtual camera after the configuration information of the virtual camera is adjusted according to the camera parameter set corresponding to sliding. The virtual camera state value corresponding to sliding is 64, and the camera parameter set corresponding to the second game screen shown in fig. 10 is: swing arm offset 193.424, camera viewpoint X = -0.691, Y = -49.258, Z = -51.946, and camera field of view 90.258.

In fig. 11, the behavior state of the virtual object is squat concealment, and the second game screen shown in fig. 11 is the game screen captured by the virtual camera after the configuration information of the virtual camera is adjusted according to the camera parameter set corresponding to squat concealment. The virtual camera state value corresponding to squat concealment is 132, and the camera parameter set corresponding to the second game screen shown in fig. 11 is: swing arm offset 138, camera viewpoint X = -0.672, Y = -40, Z = -18, and camera field of view 90.

In fig. 12, the behavior state of the virtual object is aimed shooting, and the second game screen shown in fig. 12 is the game screen captured by the virtual camera after the configuration information of the virtual camera is adjusted according to the camera parameter set corresponding to aimed shooting. The virtual camera state value corresponding to aimed shooting is 20, and the camera parameter set corresponding to the second game screen shown in fig. 12 is: swing arm offset 115, camera viewpoint X = -7.303, Y = -40, Z = -20, and camera field of view 60.
It should be noted that the virtual camera drawn in each of the second game screens shown in fig. 9 to fig. 12 is included only to make the second game screen more intuitive, and need not be displayed in the actual second game screen; likewise, the camera parameter set corresponding to each game screen may or may not be displayed in the corresponding second game screen, which is not limited in the embodiments of the present application.
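The state values quoted for fig. 9 to fig. 12 are consistent with a bit-flag encoding: sprinting (256 = 2^8) and sliding (64 = 2^6) land on single bits, while 132 (= 128 + 4) and 20 (= 16 + 4) decompose into two set bits each, as combined states would. A sketch of such a lookup follows; the bit assignments and the decomposition reading are assumptions, since the patent does not publish the full flag table:

```python
SPRINT = 1 << 8   # 256, matching the state value quoted for fig. 9
SLIDE  = 1 << 6   # 64, matching the state value quoted for fig. 10

# camera parameter sets keyed by bitmap state value (numbers from the text)
CAMERA_PARAMS = {
    SPRINT: {"swing_arm_offset": 154.102, "camera_fov": 90.694},
    SLIDE:  {"swing_arm_offset": 193.424, "camera_fov": 90.258},
}

def params_for(state_value: int) -> dict:
    """Look up the camera parameter set constructed for a bitmap value."""
    return CAMERA_PARAMS[state_value]
```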
When the game picture is displayed, at least one of the mix-in time and the mix-out time is considered in the configuration information of the virtual camera, so that the motion trajectory of the virtual camera is smoother, and the picture display method provided by the present application therefore displays the game picture more smoothly. Fig. 13 shows the change curve of the swing arm offset before the mix-in time and the mix-out time are added and the change curve after they are added, according to an embodiment of the present application. In fig. 13, the first curve is the change curve of the swing arm offset before the mix-in time and the mix-out time are added, and the second curve is the change curve after they are added. As can be seen from fig. 13, the change curve of the swing arm offset is smoother after the mix-in time and the mix-out time are added.
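The smoothing effect of fig. 13 amounts to easing the swing arm offset toward its new value over the mix-in time instead of jumping. A sketch, assuming smoothstep as the easing function (the text does not name the interpolation used):

```python
def smoothstep(t: float) -> float:
    """Cubic ease-in/ease-out on [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def blended_swing_arm(old: float, new: float,
                      elapsed: float, mix_in: float) -> float:
    """Interpolate the swing arm offset from its old value to its new value
    over the mix-in time, flattening the jump seen in the first curve of
    fig. 13."""
    if mix_in <= 0.0 or elapsed >= mix_in:
        return new
    return old + (new - old) * smoothstep(elapsed / mix_in)
```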
It should be noted that, in the above picture display method, the process of acquiring the second camera parameter set may also be executed by a server. After the server acquires the second camera parameter set, it sends the second camera parameter set to the terminal device, and the terminal device adjusts the configuration information of the virtual camera according to the second camera parameter set, so as to capture the second game picture and display it. The process by which the server acquires the second camera parameter set is similar to the process by which the terminal device acquires it, and is not described here again.
The method associates the camera parameter set with the behavior state of the virtual object. When the behavior state of the virtual object changes, a second camera parameter set is acquired according to the first behavior state before the change and the second behavior state corresponding to the first state control; the configuration information of the virtual camera is then adjusted according to the second camera parameter set, and the game picture is captured by the virtual camera with the adjusted configuration information, so that the display effect of the game picture is better. Moreover, the camera parameter set is decoupled from the game mode, so that even if a new game mode appears, a corresponding camera parameter set does not need to be developed for it, which reduces the development cost of camera parameter sets and improves game production efficiency.
In addition, the configuration information of the virtual camera in the present application includes at least one of the mix-in time and the mix-out time, so that when the behavior state changes, the motion trajectory of the virtual camera is smoother, which reduces the probability of jitter and jumps in the game picture captured by the virtual camera and makes the display of the game picture smoother.
Fig. 14 is a flowchart illustrating a screen display method according to an embodiment of the present application, where the method may be executed by the terminal device 101 in fig. 1, and as shown in fig. 14, the method includes the following steps:
In step 1401, a first game screen is displayed, the first game screen including a virtual object in a first behavior state, the first game screen being captured by a virtual camera based on first configuration information, the first configuration information being determined based on a first camera parameter set corresponding to the first behavior state.
In a possible implementation manner, the process of displaying the first game frame is similar to the process of displaying the first game frame in step 201, and is not described herein again.
In step 1402, in response to the virtual object being switched from the first behavior state to the third behavior state, a second game screen is displayed, the second game screen being acquired by the virtual camera based on second configuration information, the second configuration information being determined based on a second camera parameter set corresponding to the third behavior state.
The first game picture further comprises at least one state control, and one state control corresponds to one behavior state. In response to a selected instruction of a first state control in the at least one state control, it is determined, based on the second behavior state corresponding to the first state control and the first behavior state, that the virtual object is switched from the first behavior state to a third behavior state.
In response to the first behavior state including the second behavior state, the third behavior state is the behavior state in the first behavior state other than the second behavior state. Illustratively, the first behavior state is squat aiming and the second behavior state is squatting, so the third behavior state is aiming.
In response to the first behavior state not including the second behavior state and the state relationship between the first behavior state and the second behavior state being a coexistence relationship, the third behavior state is a combined behavior state of the first behavior state and the second behavior state, the coexistence relationship indicating that the first behavior state and the second behavior state exist at the same time. Illustratively, the first behavior state is squat, the second behavior state is aim, and the third behavior state is determined to be squat aim since the state relationship between squat and aim is a coexistence relationship.
In response to the first behavior state not including the second behavior state and the state relationship between the first behavior state and the second behavior state being an interruption relationship, the third behavior state is the second behavior state, the interruption relationship indicating an interruption of the first behavior state by the second behavior state. Illustratively, the first behavior state is standing and the second behavior state is squatting; since the state relationship between standing and squatting is an interruption relationship, the third behavior state is determined to be squatting.
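The three rules above can be condensed into one function. The sketch below models a (possibly combined) behavior state as a set of atomic states, an assumption standing in for the patent's bitmap data:

```python
def third_behavior_state(first: frozenset, second: frozenset,
                         coexistence: bool) -> frozenset:
    """Derive the third behavior state from the first behavior state and
    the second behavior state of the selected control:
    - second included in first: drop it (squat aiming, squat -> aiming)
    - not included, coexistence relationship: combine (squat, aim -> squat aiming)
    - not included, interruption relationship: replace (standing, squat -> squat)
    """
    if second <= first:
        return first - second
    return first | second if coexistence else second

# the three examples from the text
assert third_behavior_state(frozenset({"squat", "aim"}), frozenset({"squat"}), True) == {"aim"}
assert third_behavior_state(frozenset({"squat"}), frozenset({"aim"}), True) == {"squat", "aim"}
assert third_behavior_state(frozenset({"stand"}), frozenset({"squat"}), False) == {"squat"}
```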
In one possible implementation, each behavior state corresponds to one or more areas. In response to the virtual object being switched from the first behavior state to the third behavior state, a second game screen is displayed, the second game screen including the virtual object in the third behavior state, with the virtual object located in a target area in the second game screen, the target area being determined based on the third behavior state.

Illustratively, if the third behavior state is sprinting, the target area is the area corresponding to sprinting; in the second game screen the virtual object is in the sprinting state and located in the target area. If the third behavior state is sliding, the target area is the area corresponding to sliding; in the second game screen the virtual object is in the sliding state and located in the target area. When the third behavior state is another state, the virtual object in the second game screen is in that other state and located in the target area corresponding to that state.
In a possible implementation manner, the process of displaying the second game frame is similar to the process in step 202, and is not described herein again.
According to the method, the camera parameter set is associated with the behavior state of the virtual object. When the behavior state of the virtual object changes, the second game picture is displayed according to the first behavior state before the change and the third behavior state after the change, and the configuration information of the virtual camera for the second game picture is the second configuration information, so that the display effect of the second game picture is better. Moreover, the camera parameter set is decoupled from the game mode, so that even if a new game mode appears, a corresponding camera parameter set does not need to be developed for it, which reduces the development cost of camera parameter sets and improves game production efficiency.
Fig. 15 is a schematic structural diagram of a screen display device according to an embodiment of the present application, and as shown in fig. 15, the device includes:
the display module 1501 is configured to display a first game screen, where the first game screen includes a virtual object in a first behavior state and at least one state control, one state control corresponds to one behavior state, the first game screen is acquired by a virtual camera, and configuration information of the virtual camera when acquiring the first game screen is determined based on a first camera parameter set corresponding to the first behavior state;
the display module 1501 is further configured to display a second game screen in response to a selected instruction of a first state control in the at least one state control, where the second game screen is acquired by a virtual camera, configuration information of the virtual camera when acquiring the second game screen is determined based on a second camera parameter set, the second camera parameter set is acquired based on a first behavior state and a second behavior state corresponding to the first state control, and a behavior state of the virtual object in the second game screen is determined based on the second behavior state.
In one possible implementation, the apparatus further includes:
the obtaining module is used for acquiring second bitmap data corresponding to the second behavior state; in response to the first behavior state including the second behavior state, removing the second bitmap data from the first bitmap data corresponding to the first behavior state to obtain third bitmap data; and obtaining a second camera parameter set based on the third bitmap data.
In a possible implementation manner, the obtaining module is configured to perform a negation operation on the second bitmap data to obtain reference bitmap data, and to perform an AND operation on the reference bitmap data and the first bitmap data to obtain the third bitmap data.
In a possible implementation manner, the obtaining module is configured to obtain a numerical value corresponding to the second behavior state, where the numerical value corresponding to the second behavior state is used to obtain second bitmap data corresponding to the second behavior state; and adjusting the intermediate bitmap data based on the numerical value corresponding to the second behavior state to obtain second bitmap data corresponding to the second behavior state.
In one possible implementation manner, the obtaining module is configured to determine a state relationship between the first behavior state and the second behavior state in response to that the second behavior state is not included in the first behavior state; based on the state relationship and the second bitmap data, a second set of camera parameters is obtained.
In a possible implementation manner, the obtaining module is configured to, in response to the state relationship being a coexistence relationship, perform an OR operation on the first bitmap data corresponding to the first behavior state and the second bitmap data to obtain fourth bitmap data, where the fourth bitmap data indicates the bitmap data when the first behavior state and the second behavior state exist at the same time, and the coexistence relationship indicates that the first behavior state and the second behavior state exist at the same time; and to obtain the second camera parameter set based on the fourth bitmap data.
In one possible implementation, the obtaining module is configured to obtain, based on the second bitmap data, a second set of camera parameters in response to the state relationship being an interruption relationship, the interruption relationship being indicative of an interruption of the first behavior state by the second behavior state.
In one possible implementation, the apparatus further includes:
a determining module for determining a data parameter based on the first bitmap data and the second bitmap data, the data parameter indicating whether the first behavior state includes the second behavior state; determining that the first behavior state includes the second behavior state in response to the data parameter being greater than a parameter threshold; and determining that the second behavior state is not included in the first behavior state in response to the data parameter not being greater than the parameter threshold.
In a possible implementation manner, the determining module is configured to perform an AND operation on the first bitmap data and the second bitmap data to obtain fifth bitmap data, to convert the fifth bitmap data to obtain a numerical value corresponding to the fifth bitmap data, and to take the numerical value corresponding to the fifth bitmap data as the data parameter.
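Taken together, the bitmap operations performed by the obtaining and determining modules map directly onto integer bitwise operations. A minimal sketch (the zero threshold in the inclusion check is an assumption; the patent leaves the parameter threshold open):

```python
def remove_state(first_bitmap: int, second_bitmap: int) -> int:
    """Negate the second bitmap to get reference bitmap data, then AND it
    with the first bitmap to strip the second behavior state out."""
    return first_bitmap & ~second_bitmap

def combine_states(first_bitmap: int, second_bitmap: int) -> int:
    """OR the two bitmaps to represent both states existing at once
    (the coexistence relationship)."""
    return first_bitmap | second_bitmap

def includes_state(first_bitmap: int, second_bitmap: int,
                   threshold: int = 0) -> bool:
    """AND the bitmaps, treat the result as the numerical data parameter,
    and compare it against the parameter threshold."""
    return (first_bitmap & second_bitmap) > threshold
```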
In one possible implementation, the configuration information includes at least one of a mix-in time and a mix-out time, and a camera viewpoint, a swing arm offset, a camera field of view, the camera viewpoint being a viewpoint relative to the virtual object.
In a possible implementation manner, the obtaining module is further configured to obtain a camera parameter set corresponding to a game mechanism included in the game, where the game mechanism includes at least one of blocking, rotating, delaying, vibrating, and expanding, and one game mechanism corresponds to one camera parameter set;
the display module 1501 is further configured to, in response to a selected instruction of a first state control in the at least one state control, display a third game screen, where the third game screen is acquired by a virtual camera, configuration information of the virtual camera when acquiring the third game screen is determined based on a synthetic camera parameter set, the synthetic camera parameter set is acquired based on the second camera parameter set and a camera parameter set corresponding to the game mechanism, and a behavior state of the virtual object in the third game screen is determined based on the second behavior state.
In one possible implementation, the apparatus further includes:
the synthesis module is used for adding a numerical value corresponding to the first configuration information in the second camera parameter set and a numerical value corresponding to the first configuration information in the camera parameter set corresponding to the game mechanism to obtain a synthesized numerical value corresponding to the first configuration information, wherein the first configuration information is any one of the configuration information of the virtual camera;
and to traverse each item of the configuration information of the virtual camera in the above manner to obtain a synthesized numerical value corresponding to each item of configuration information, and to obtain the synthesized camera parameter set based on the synthesized numerical value corresponding to each item of configuration information.
The device associates the camera parameter set with the behavior state of the virtual object. When the behavior state of the virtual object changes, it acquires the second camera parameter set according to the first behavior state before the change and the second behavior state corresponding to the first state control, adjusts the configuration information of the virtual camera according to the second camera parameter set, and captures the game picture with the virtual camera after the adjustment, so that the display effect of the game picture is better. Moreover, the camera parameter set is decoupled from the game mode, so that even if a new game mode appears, a corresponding camera parameter set does not need to be developed for it, which reduces the development cost of camera parameter sets and improves game production efficiency.
Fig. 16 is a schematic structural diagram of a screen display device according to an embodiment of the present application, and as shown in fig. 16, the device includes:
a display module 1601, configured to display a first game screen, where the first game screen includes a virtual object in a first behavior state, the first game screen is acquired by a virtual camera based on first configuration information, and the first configuration information is determined based on a first camera parameter set corresponding to the first behavior state;
the display module 1601 is further configured to, in response to the virtual object being switched from the first behavior state to the third behavior state, display a second game screen, where the second game screen is acquired by the virtual camera based on second configuration information, and the second configuration information is determined based on a second camera parameter set corresponding to the third behavior state.
In one possible implementation, the display module 1601 is configured to display a second game screen in response to the virtual object being switched from the first behavior state to the third behavior state, where the second game screen includes the virtual object in the third behavior state, and the virtual object is located in a target area in the second game screen, and the target area is determined based on the third behavior state.
In one possible implementation, the first game screen further includes at least one state control, and one state control corresponds to one behavior state;
the device still includes:
and a determining module, used for, in response to a selected instruction of a first state control in the at least one state control, determining that the virtual object is switched from the first behavior state to a third behavior state based on the second behavior state corresponding to the first state control and the first behavior state.
In one possible implementation, in response to the first behavior state including the second behavior state, the third behavior state is a behavior state in the first behavior state other than the second behavior state;

in response to the first behavior state not including the second behavior state and the state relationship between the first behavior state and the second behavior state being a coexistence relationship, the third behavior state is a combined behavior state of the first behavior state and the second behavior state, the coexistence relationship indicating that the first behavior state and the second behavior state exist at the same time;

in response to the first behavior state not including the second behavior state and the state relationship between the first behavior state and the second behavior state being an interruption relationship, the third behavior state is the second behavior state, the interruption relationship indicating an interruption of the first behavior state by the second behavior state.
The device associates the camera parameter set with the behavior state of the virtual object. When the behavior state of the virtual object changes, the second game picture is displayed according to the first behavior state before the change and the third behavior state after the change, and the configuration information of the virtual camera for the second game picture is the second configuration information, so that the display effect of the second game picture is better. Moreover, the camera parameter set is decoupled from the game mode, so that even if a new game mode appears, a corresponding camera parameter set does not need to be developed for it, which reduces the development cost of camera parameter sets and improves game production efficiency.
It should be understood that, when the apparatus provided by the above embodiments implements its functions, the division into the above functional modules is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept, and the specific implementation processes of the apparatus embodiments are detailed in the method embodiments, which are not described here again.
Fig. 17 shows a block diagram of a terminal device 1700 according to an exemplary embodiment of the present application. The terminal device 1700 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal device 1700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal device 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement the picture display method provided by the method embodiments of the present application.
In some embodiments, terminal device 1700 may further optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1704, a display screen 1705, a camera assembly 1706, an audio circuit 1707, and a power supply 1709.
The peripheral interface 1703 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, the memory 1702, and the peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1704 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminal devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1704 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to capture touch signals on or above its surface. The touch signal may be input as a control signal to the processor 1701 for processing. At this point, the display 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1705, disposed on the front panel of the terminal device 1700; in other embodiments, there may be at least two displays 1705, respectively disposed on different surfaces of the terminal device 1700 or in a folding design; in still other embodiments, the display 1705 may be a flexible display disposed on a curved surface or a folded surface of the terminal device 1700. Furthermore, the display screen 1705 may be arranged in a non-rectangular irregular figure, i.e., an irregularly shaped screen. The display screen 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. In general, a front camera is provided on the front panel of the terminal apparatus 1700, and a rear camera is provided on the rear surface of the terminal apparatus 1700. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment and converting them into electrical signals that are input to the processor 1701 for processing or to the radio frequency circuit 1704 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal device 1700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1707 may also include a headphone jack.
The power supply 1709 is used to supply power to the various components in terminal device 1700. The power supply 1709 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, terminal device 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, optical sensor 1715, and proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal apparatus 1700. For example, the acceleration sensor 1711 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1701 may control the display screen 1705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1712 may detect a body direction and a rotation angle of the terminal apparatus 1700, and the gyro sensor 1712 may acquire a 3D motion of the user on the terminal apparatus 1700 in cooperation with the acceleration sensor 1711. The processor 1701 may perform the following functions based on the data collected by the gyro sensor 1712: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1713 may be located on the side frames of terminal device 1700 and/or underneath display screen 1705. When the pressure sensor 1713 is disposed on the side frame of the terminal device 1700, the user's grip signal to the terminal device 1700 may be detected, and the processor 1701 performs left-right hand recognition or shortcut operation according to the grip signal acquired by the pressure sensor 1713. When the pressure sensor 1713 is disposed below the display screen 1705, the processor 1701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the display screen 1705 based on the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1705 is increased; when the ambient light intensity is low, the display brightness of the display screen 1705 is reduced. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
A proximity sensor 1716, also called a distance sensor, is usually provided on the front panel of the terminal device 1700. The proximity sensor 1716 is used to collect the distance between the user and the front of terminal device 1700. In one embodiment, when the proximity sensor 1716 detects that the distance between the user and the front surface of the terminal device 1700 is gradually decreasing, the processor 1701 controls the display 1705 to switch from the screen-on state to the screen-off state; when the proximity sensor 1716 detects that the distance between the user and the front surface of the terminal device 1700 is gradually increasing, the processor 1701 controls the display 1705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the architecture shown in fig. 17 is not limiting of terminal device 1700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 18 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1800 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1801 and one or more memories 1802, where at least one program code is stored in the one or more memories 1802 and is loaded and executed by the one or more processors 1801 to implement the picture display method provided by each of the method embodiments. Of course, the server 1800 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server 1800 may also include other components for implementing device functions, which are not described here again.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to cause a computer to implement any of the above-described screen display methods.
Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program or a computer program product having at least one computer instruction stored therein, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement any one of the screen display methods described above.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the principles of the present application should be included in the protection scope of the present application.

Claims (21)

1. A method for displaying a screen, the method comprising:
displaying a first game picture, wherein the first game picture comprises a virtual object in a first behavior state and at least one state control, one state control corresponds to one behavior state, the first game picture is acquired by a virtual camera, and configuration information when the virtual camera acquires the first game picture is determined based on a first camera parameter set corresponding to the first behavior state;
and in response to a selected instruction of a first state control in the at least one state control, displaying a second game picture, wherein the second game picture is acquired by the virtual camera, configuration information of the virtual camera in acquiring the second game picture is determined based on a second camera parameter set, the second camera parameter set is acquired based on the first behavior state and a second behavior state corresponding to the first state control, and the behavior state of the virtual object in the second game picture is determined based on the second behavior state.
2. The method of claim 1, wherein prior to displaying the second game screen, the method further comprises:
acquiring second bitmap data corresponding to the second behavior state;
in response to the first behavior state including the second behavior state, removing the second bitmap data from the first bitmap data corresponding to the first behavior state to obtain third bitmap data;
based on the third bitmap data, the second set of camera parameters is obtained.
3. The method of claim 2, wherein removing the second bitmap data from the first bitmap data corresponding to the first behavior state to obtain a third bitmap data comprises:
performing a negation operation on the second bitmap data to obtain reference bitmap data;

performing an AND operation on the reference bitmap data and the first bitmap data to obtain the third bitmap data.
4. The method of claim 2, wherein the obtaining second bitmap data corresponding to the second behavior state comprises:
acquiring a numerical value corresponding to the second behavior state, wherein the numerical value corresponding to the second behavior state is used for acquiring second bitmap data corresponding to the second behavior state;
and adjusting the intermediate bitmap data based on the numerical value corresponding to the second behavior state to obtain second bitmap data corresponding to the second behavior state.
5. The method of claim 2, further comprising:
in response to the first behavior state not including the second behavior state, determining a state relationship between the first behavior state and the second behavior state;
and acquiring the second camera parameter set based on the state relation and the second bitmap data.
6. The method of claim 5, wherein obtaining the second set of camera parameters based on the state relationship and the second bitmap data comprises:
responding to the state relationship being a coexistence relationship, performing an OR operation on the first bitmap data corresponding to the first behavior state and the second bitmap data to obtain fourth bitmap data, wherein the fourth bitmap data is used for indicating bitmap data when the first behavior state and the second behavior state exist at the same time, and the coexistence relationship is used for indicating that the first behavior state and the second behavior state exist at the same time;
based on the fourth bitmap data, obtaining the second set of camera parameters.
7. The method of claim 5, wherein obtaining the second set of camera parameters based on the state relationship and the second bitmap data comprises:
in response to the state relationship being a breaking relationship, obtaining the second set of camera parameters based on the second bitmap data, the breaking relationship indicating that the first behavior state is broken by the second behavior state.
8. The method of any of claims 2 to 7, further comprising:
determining a data parameter based on the first bitmap data and the second bitmap data, the data parameter indicating whether the second behavior state is included in the first behavior state;
determining that the first behavior state includes the second behavior state in response to the data parameter being greater than a parameter threshold;

determining that the second behavior state is not included in the first behavior state in response to the data parameter not being greater than the parameter threshold.
9. The method of claim 8, wherein determining data parameters based on the first bitmap data and the second bitmap data comprises:
performing and operation on the first bitmap data and the second bitmap data to obtain fifth bitmap data;
and converting the fifth bitmap data to obtain a numerical value corresponding to the fifth bitmap data, and taking the numerical value corresponding to the fifth bitmap data as the data parameter.
10. The method of any of claims 1 to 7, wherein the configuration information comprises at least one of a mix-in time and a mix-out time, and a camera viewpoint, a swing arm offset, a camera field of view, the camera viewpoint being a viewpoint relative to the virtual object.
11. The method of any of claims 1 to 7, further comprising:
acquiring a camera parameter set corresponding to a game mechanism included in a game, wherein the game mechanism comprises at least one of blocking, rotating, delaying, vibrating and expanding, and one game mechanism corresponds to one camera parameter set;
and in response to a selected instruction of a first state control in the at least one state control, displaying a third game picture, wherein the third game picture is acquired by the virtual camera, configuration information of the virtual camera in acquiring the third game picture is determined based on a synthetic camera parameter set, the synthetic camera parameter set is obtained based on the second camera parameter set and a camera parameter set corresponding to the game mechanism, and a behavior state of the virtual object in the third game picture is determined based on the second behavior state.
12. The method of claim 11, wherein prior to displaying the third game screen, the method further comprises:
adding a numerical value corresponding to first configuration information in the second camera parameter set and a numerical value corresponding to the first configuration information in the camera parameter set corresponding to the game mechanism to obtain a composite numerical value corresponding to the first configuration information, wherein the first configuration information is any one of the configuration information of the virtual camera;
and traversing each item of the configuration information of the virtual camera in the above manner to obtain a composite numerical value corresponding to each item of configuration information, and obtaining the synthetic camera parameter set based on the composite numerical value corresponding to each item of configuration information.
13. A method for displaying a screen, the method comprising:
displaying a first game picture, wherein the first game picture comprises a virtual object in a first behavior state, the first game picture is acquired by a virtual camera based on first configuration information, and the first configuration information is determined based on a first camera parameter set corresponding to the first behavior state;
and in response to the virtual object being switched from the first behavior state to a third behavior state, displaying a second game picture, wherein the second game picture is acquired by the virtual camera based on second configuration information, and the second configuration information is determined based on a second camera parameter set corresponding to the third behavior state.
14. The method of claim 13, wherein said displaying a second game screen in response to the virtual object being switched from the first behavior state to a third behavior state comprises:
in response to the virtual object being switched from the first behavior state to a third behavior state, displaying the second game screen, the second game screen including the virtual object in the third behavior state, and the virtual object being located in a target area in the second game screen, the target area being determined based on the third behavior state.
15. The method of claim 13, wherein the first game frame further comprises at least one state control, one state control corresponding to one behavior state;
the method further comprises the following steps:
and responding to a selected instruction of a first state control in the at least one state control, and determining that the virtual object is switched from the first behavior state to a third behavior state based on a second behavior state corresponding to the first state control and the first behavior state.
16. The method of claim 15, wherein in response to the second behavior state being included in the first behavior state, the third behavior state is a behavior state in the first behavior state other than the second behavior state;

in response to the second behavior state not being included in the first behavior state and a state relationship between the first behavior state and the second behavior state being a coexistence relationship, the third behavior state is a combined behavior state of the first behavior state and the second behavior state, the coexistence relationship indicating that the first behavior state and the second behavior state exist at the same time;

in response to the second behavior state not being included in the first behavior state and a state relationship between the first behavior state and the second behavior state being an interruption relationship, the third behavior state is the second behavior state, the interruption relationship indicating an interruption of the first behavior state by the second behavior state.
17. A picture display apparatus, characterized in that the apparatus comprises:
the game system comprises a display module, a first game screen and a second game screen, wherein the first game screen comprises a virtual object in a first behavior state and at least one state control, one state control corresponds to one behavior state, the first game screen is acquired by a virtual camera, and configuration information when the virtual camera acquires the first game screen is determined based on a first camera parameter set corresponding to the first behavior state;
the display module is further configured to display a second game picture in response to a selected instruction of a first state control in the at least one state control, where the second game picture is acquired by the virtual camera, configuration information of the virtual camera during acquisition of the second game picture is determined based on a second camera parameter set, the second camera parameter set is acquired based on the first behavior state and a second behavior state corresponding to the first state control, and a behavior state of the virtual object in the second game picture is determined based on the second behavior state.
18. A picture display apparatus, characterized in that the apparatus comprises:
the game system comprises a display module, a first action state and a second action state, wherein the display module is used for displaying a first game picture, the first game picture comprises a virtual object in the first action state, the first game picture is acquired by a virtual camera based on first configuration information, and the first configuration information is determined based on a first camera parameter set corresponding to the first action state;
the display module is further configured to display a second game picture in response to the virtual object being switched from the first behavior state to a third behavior state, where the second game picture is acquired by the virtual camera based on second configuration information, and the second configuration information is determined based on a second camera parameter set corresponding to the third behavior state.
19. An electronic device comprising a processor and a memory, wherein at least one program code is stored in the memory, and wherein the at least one program code is loaded into and executed by the processor to cause the electronic device to implement the screen display method according to any one of claims 1 to 12, or to cause the electronic device to implement the screen display method according to any one of claims 13 to 16.
20. A computer-readable storage medium, having stored therein at least one program code, which is loaded and executed by a processor, to cause a computer to implement the picture display method according to any one of claims 1 to 12, or to cause a computer to implement the picture display method according to any one of claims 13 to 16.
21. A computer program product having stored therein at least one computer instruction, which is loaded and executed by a processor, to cause a computer to implement the picture display method according to any one of claims 1 to 12, or to cause a computer to implement the picture display method according to any one of claims 13 to 16.