WO2023130809A1 - Screen display method and apparatus, terminal, storage medium, and program product - Google Patents

Screen display method and apparatus, terminal, storage medium, and program product

Info

Publication number
WO2023130809A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
target position
character
target
frame
Prior art date
Application number
PCT/CN2022/127196
Other languages
English (en)
French (fr)
Inventor
范斯丹
杨睿涵
林孔伟
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to US18/340,676 (published as US20230330532A1)
Publication of WO2023130809A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/525 Changing parameters of virtual cameras
    • A63F 13/5258 Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • A63F 13/5255 Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5372 Controlling the output signals based on the game progress involving additional visual information provided to the game scene using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/837 Shooting of targets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Definitions

  • the embodiments of the present application relate to the field of computer and Internet technology, and in particular to a screen display method, device, terminal, storage medium, and program product.
  • Some game applications provide a three-dimensional virtual environment, and users can control virtual characters to perform various operations in the three-dimensional virtual environment, thereby providing users with a more realistic game environment.
  • in the character locked state, the game application controls the virtual camera to observe the locked character with the virtual character controlled by the user (hereinafter referred to as the “own character”) as the visual focus, and presents the picture captured by the virtual camera to the user.
  • this method is likely to cause the own character to block the locked character, thereby affecting the display effect of the screen.
  • Embodiments of the present application provide a screen display method, device, terminal, storage medium, and program product, which can improve the rationality of mirror movement of a virtual camera, thereby improving the display effect of the screen.
  • the technical solution is as follows:
  • a screen display method is provided, the method is executed by a terminal device, and the method includes:
  • displaying a first picture frame, wherein the first picture frame is a picture obtained by a virtual camera shooting the three-dimensional virtual environment with a virtual following object in the three-dimensional virtual environment as the visual focus;
  • determining the target position of the virtual following object according to the target position of the own character and the target position of the first locked character, wherein the first locked character refers to the locked target corresponding to the own character in the character locked state;
  • determining the target position and target orientation of the virtual camera according to the target position of the virtual following object, wherein the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character;
  • interpolating to obtain the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame, according to the target position and target orientation of the virtual camera and the actual position and actual orientation of the virtual camera in the first picture frame.
  • the second picture frame is generated and displayed based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
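As an illustrative sketch only (not part of the patent text), the per-frame interpolation mentioned above can be modelled as one linear interpolation step between the camera's actual state in the first picture frame and its target state; the function name and the coefficient `t` are hypothetical choices:

```python
def lerp(a, b, t):
    """Linearly interpolate between two 3-D points a and b by coefficient t."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

# Actual camera position in the first picture frame, and its target position.
actual_pos = (0.0, 0.0, 0.0)
target_pos = (10.0, 0.0, 0.0)

# Single-frame target position for the second picture frame: one step toward the target.
single_frame_pos = lerp(actual_pos, target_pos, 0.2)
print(single_frame_pos)  # (2.0, 0.0, 0.0)
```

A larger `t` moves the camera toward its target more aggressively in a single frame; the patent later relates the coefficient to distances (FIG. 8 and FIG. 9), which this sketch does not model.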
  • a screen display method is provided, the method is executed by a terminal device, and the method includes:
  • displaying a first picture frame, wherein the first picture frame is a picture obtained by a virtual camera shooting the three-dimensional virtual environment with a virtual following object in the three-dimensional virtual environment as the visual focus;
  • in response to movement of at least one of the own character and the first locked character, displaying the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame; wherein the single-frame target position and the single-frame target orientation are determined according to the target position and target orientation of the virtual camera, the target position and target orientation of the virtual camera are determined according to the target position of the virtual following object, the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character, and the first locked character refers to the locked target corresponding to the own character in the character locked state.
  • a screen display device includes:
  • the picture display module is used to display a first picture frame, and the first picture frame is a picture obtained by shooting the three-dimensional virtual environment with the virtual following object in the three-dimensional virtual environment as the visual focus through the virtual camera;
  • the object position determination module is configured to determine the target position of the virtual following object according to the target position of the own character and the target position of the first locked character, wherein the first locked character refers to the locked target corresponding to the own character in the character locked state;
  • a camera position determination module configured to determine the target position and target orientation of the virtual camera according to the target position of the virtual following object, wherein the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character;
  • a single-frame position determination module configured to interpolate to obtain the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame, according to the target position and target orientation of the virtual camera and the actual position and actual orientation of the virtual camera in the first picture frame.
  • the picture display module is further configured to generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
  • a screen display device includes:
  • the picture display module is used to display a first picture frame, and the first picture frame is a picture obtained by shooting the three-dimensional virtual environment with the virtual following object in the three-dimensional virtual environment as the visual focus through the virtual camera;
  • the picture display module is further configured to, in response to movement of at least one of the own character and the first locked character, display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame;
  • wherein the single-frame target position and the single-frame target orientation are determined according to the target position and target orientation of the virtual camera, the target position and target orientation of the virtual camera are determined according to the target position of the virtual following object, and the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character;
  • the first locked character refers to the locked target corresponding to the own character in the character locked state.
  • a terminal device is provided, wherein the terminal device includes a processor and a memory; a computer program is stored in the memory, and the computer program is loaded and executed by the processor to implement the above screen display method.
  • a computer-readable storage medium is provided, in which a computer program is stored; the computer program is loaded and executed by a processor to implement the above screen display method.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the terminal device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the terminal device executes the above screen display method.
  • according to the technical solutions provided by the embodiments of the present application, the virtual following object in the three-dimensional virtual environment serves as the visual focus of the virtual camera. In the character locked state, the position information of the virtual following object is determined based on the position information of the own character and the locked character, and the position and orientation of the virtual camera are then updated based on the position information of the virtual following object. Because the position information of both the own character and the locked character is taken into account when determining the position information of the virtual following object, the locked character is prevented from being blocked by the own character, and the determined position information of the virtual following object is more reasonable and accurate. This ensures that, in the picture taken by the virtual camera with the virtual following object as the visual focus, the own character and the locked character are presented to the user in a clearer and more reasonable manner, which improves the rationality of the virtual camera's mirror movement in the character locked state and thus the display effect of the picture.
  • in addition, within the field of view of the virtual camera, the virtual following object is closer to the virtual camera than the first locked character, which prevents the first locked character from blocking the view of the virtual camera and further improves the rationality of the virtual camera's mirror movement in the character locked state.
  • FIG. 1 is a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a screen display method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of determining a pending target position of a virtual following object provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the angled area behind a character provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a picture taken with a virtual following object as the visual focus provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the rotating track where a virtual camera is located provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of determining the target position and target orientation of a virtual camera provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the relationship between the first distance and the first interpolation coefficient provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the relationship between the second distance and the second interpolation coefficient provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of determining the single-frame target orientation of a virtual camera provided by an embodiment of the present application.
  • FIG. 11 is a flowchart of switching locked characters in the character locked state provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of determining and marking a pre-locked character provided by an embodiment of the present application.
  • FIG. 13 is a flowchart of an update process of a virtual camera in a non-character-locked state provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of an update process of a virtual camera in a non-character-locked state provided by an embodiment of the present application.
  • FIG. 15 is a flowchart of an update process of a virtual camera provided by an embodiment of the present application.
  • FIG. 16 is a flowchart of a screen display method provided by another embodiment of the present application.
  • FIG. 17 is a block diagram of a screen display device provided by an embodiment of the present application.
  • FIG. 18 is a structural block diagram of a terminal device provided by an embodiment of the present application.
  • a virtual environment is an environment displayed (or provided) when a client of an application program (such as a game application program) runs on a terminal device (also called a terminal), that is, an environment in which virtual characters perform activities (such as game competition and task execution). For example, the virtual environment can be a virtual house, a virtual island, a virtual map, and the like.
  • the virtual environment can be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • the virtual environment is three-dimensional, that is, a space composed of three dimensions: length, width, and height, so it can be called a "three-dimensional virtual environment".
  • a virtual character is a character that a user account controls in an application program.
  • the virtual character refers to a game character controlled by a user account in the game application program.
  • the virtual character may be in the form of a human being, an animal, a cartoon or other forms, which is not limited in this embodiment of the present application.
  • the virtual character is also three-dimensional, so it can be called a "three-dimensional virtual character".
  • in different application programs, the operations that can be performed by the virtual character controlled by the user account may also be different.
  • the user account can control the virtual character to perform operations such as hitting, shooting, throwing virtual items, running, jumping, and casting skills.
  • in addition to game applications, other types of application programs may also display virtual characters to users and provide corresponding functions for the virtual characters, such as AR (Augmented Reality) application programs, social application programs, interactive entertainment application programs, and the like.
  • for different application programs, the forms of the virtual characters they provide and the corresponding functions will also differ, and they can be pre-configured according to actual needs, which is not limited in this embodiment of the present application.
  • FIG. 1 shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • the solution implementation environment may include: a terminal 10 and a server 20 .
  • the terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a game console, a multimedia player, a PC (Personal Computer), a vehicle-mounted terminal, or a smart TV.
  • a client of a target application program may be installed in the terminal 10, and the target application program may refer to an application program capable of providing a three-dimensional virtual environment, such as a game application program, a simulation application program, an entertainment application program, and the like.
  • game applications that can provide a three-dimensional virtual environment include, but are not limited to: three-dimensional action games (3D Action Game, “3D ACT”), three-dimensional shooting games, three-dimensional MOBA (Multiplayer Online Battle Arena) games, and other corresponding applications.
  • the server 20 is used to provide background services for the client of the target application in the terminal 10 .
  • the server 20 may be a background server of the above-mentioned target application.
  • the server 20 may be one server, or a server cluster composed of multiple servers, or a cloud computing service center.
  • the terminal 10 and the server 20 can communicate with each other through the network 30 .
  • the network 30 may be a wired network or a wireless network.
  • FIG. 2 shows a flowchart of a screen display method provided by an embodiment of the present application.
  • the execution subject of each step of the method may be the terminal 10 in the solution implementation environment shown in FIG. 1, for example, the client of the target application program running in the terminal 10.
  • for ease of description, the following takes the “client” as the execution subject of each step for introduction and description.
  • the method may include the following steps (210-250):
  • Step 210: display a first picture frame.
  • the first picture frame is a picture obtained by shooting the three-dimensional virtual environment with the virtual following object in the three-dimensional virtual environment as the visual focus through the virtual camera.
  • when the client presents the content of the three-dimensional virtual environment to the user, it displays picture frames one by one; a picture frame is an image obtained by shooting the three-dimensional virtual environment with the virtual camera.
  • the above-mentioned first picture frame may refer to an image obtained by shooting a three-dimensional virtual environment by the virtual camera at the current moment.
  • the three-dimensional virtual environment includes virtual characters, such as the virtual character controlled by the user (referred to as the “own character” in the embodiments of this application) and virtual characters controlled by other users or by the system (for example, by AI (Artificial Intelligence)).
  • the three-dimensional virtual environment may also include some other virtual items, such as virtual houses, virtual vehicles, virtual trees, etc., which are not limited in this embodiment of the present application.
  • in the embodiments of the present application, virtual camera technology can be used to generate picture frames; that is, the client observes the three-dimensional virtual environment from the viewing angle of the virtual camera and shoots the three-dimensional virtual environment in real time (or at fixed intervals) to obtain picture frames, and the content of a picture frame changes as the position of the virtual camera changes.
  • the virtual camera takes the virtual following object in the three-dimensional virtual environment as the visual focus, and the virtual following object is an invisible object.
  • the virtual following object is not a virtual character or virtual item, and has no shape, but can be regarded as a point in the three-dimensional virtual environment.
  • the virtual following object will change its position correspondingly with the position of its own character (optionally including other virtual characters).
  • the virtual camera will follow the virtual following object to move (such as position and orientation), so as to capture the content around the virtual following object in the three-dimensional virtual environment and present it in the frame for providing to the user.
  • Step 220: determine the target position of the virtual following object according to the target position of the own character and the target position of the first locked character, wherein the first locked character refers to the locked target corresponding to the own character in the character locked state.
  • the role-locked state refers to a state in which one's own character is locked on a certain other virtual character, and the other virtual character may be a virtual character controlled by another user or the system.
  • in the character locked state, the position and orientation of the virtual camera need to change with the positions of the own character and the locked character, so that the frame captured by the virtual camera contains the own character and the locked character as much as possible and the user can see both of them in the picture frame.
  • in the embodiments of the present application, the position and orientation of the virtual camera change with the position of the virtual following object, while the position of the virtual following object changes with the positions of the own character and the locked character.
  • the locked character is the locked target corresponding to its own character.
  • the locked character will be marked and displayed, and the operations corresponding to the own character will act on the locked character.
  • the above-mentioned first locked character may be any one or more other virtual characters whose own character is locked.
  • Step 220 may also include the following sub-steps:
  • in the character locked state, the virtual following object still takes the own character as its following target and moves with the own character; meanwhile, the target position of the virtual following object also needs to take into account the target position of the first locked character.
  • the target location may be understood as a planned location, which refers to a location to be or expected to move to.
  • the target position of the own character refers to the position to which the own character is to move (for example, in the next frame after the first picture frame), and the target position of the first locked character refers to the position to which the first locked character is to move.
  • the target position of the own character can be determined according to the user's control operation on the own character.
  • the target position of the first locked character may be determined according to the control operation of the first locked character by the system or other users.
  • FIG. 3 exemplarily shows a schematic diagram of determining the pending target position of the virtual following object 31.
  • the target position of the own character 32 is represented by point A in FIG. 3, the target position of the first locked character 33 is represented by point B in FIG. 3, and the undetermined target position of the virtual following object 31 is represented by point O in FIG. 3.
  • the target straight line CD is perpendicular to the straight line AB and passes through point A; that is, the target straight line is perpendicular to the line connecting the target position (point A) of the own character 32 and the target position (point B) of the first locked character 33, and passes through the target position (point A) of the own character 32.
  • the target straight line CD may also be a straight line perpendicular to the straight line AB but not passing through the point A.
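As a minimal sketch of the geometry above (hypothetical helper name and 2-D top-down coordinates; the offset value is an assumption, not from the patent), the pending position O can be placed on the line through A perpendicular to AB:

```python
import math

def pending_target_position(a, b, offset):
    """Point O on the target straight line: perpendicular to AB, passing through A,
    offset units away from A (2-D top-down coordinates)."""
    ab = (b[0] - a[0], b[1] - a[1])
    length = math.hypot(ab[0], ab[1])
    perp = (-ab[1] / length, ab[0] / length)  # unit vector perpendicular to AB
    return (a[0] + perp[0] * offset, a[1] + perp[1] * offset)

# Own character at A, first locked character at B.
o = pending_target_position((0.0, 0.0), (4.0, 0.0), 2.0)
print(o)  # (0.0, 2.0)
```

Offsetting O away from the line AB is what later keeps the own character from standing directly between the camera and the locked character.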
  • if the undetermined target position of the virtual following object satisfies the condition, the undetermined target position is determined as the target position of the virtual following object.
  • that is, after the pending target position of the virtual following object is determined, it is necessary to judge whether the pending target position satisfies the condition; if so, the pending target position is determined as the target position of the virtual following object. Otherwise, the pending target position is adjusted so that the adjusted position satisfies the condition, and the adjusted position is used as the target position of the virtual following object.
  • the setting of this condition is to make the target position of the virtual following object in a more suitable position.
  • in this way, when the virtual camera shoots with the virtual following object as the visual focus, it can capture both the own character and the first locked character in the picture, and their positions in the picture do not overlap, thereby improving the picture display effect.
  • optionally, the above conditions include: the offset distance between the pending target position of the virtual following object and the target position of the own character is less than or equal to a maximum offset. If the offset distance is greater than the maximum offset, the pending target position is adjusted with the maximum offset as the benchmark, so that the offset distance of the resulting target position of the virtual following object relative to the target position of the own character is less than or equal to the maximum offset.
  • the maximum offset may be a value greater than 0.
  • the maximum offset may be a fixed value, or a value dynamically determined according to the position of the virtual camera.
  • optionally, the above conditions further include: the offset distance between the pending target position of the virtual following object and the target position of the own character is greater than a minimum offset. If the offset distance is less than or equal to the minimum offset, the pending target position is adjusted with the minimum offset as the benchmark, so that the offset distance of the resulting target position of the virtual following object relative to the target position of the own character is greater than the minimum offset.
  • the value of the minimum offset may be 0, or a value greater than 0, which is not limited in this embodiment of the present application. Also, the minimum offset is smaller than the maximum offset described above.
  • the minimum offset can be a fixed value, or a value dynamically determined according to the position of the virtual camera. For example, as shown in FIG. 3, if point O coincides with point A, point O is moved a certain distance in the direction of point C to obtain the target position of the virtual following object 31; if point O does not coincide with point A, point O is determined as the target position of the virtual following object 31.
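The two offset conditions can be sketched as a clamp of the distance from A to the pending position O (illustrative only; the bounds are treated as inclusive for simplicity, and the push-out direction for a coincident point is an arbitrary choice rather than the patent's direction of point C):

```python
import math

def clamp_offset(a, o, min_off, max_off):
    """Adjust pending position O so that its distance from own-character
    position A lies between min_off and max_off (2-D)."""
    dx, dy = o[0] - a[0], o[1] - a[1]
    d = math.hypot(dx, dy)
    if d == 0.0:
        # O coincides with A: push O out by min_off along an arbitrary direction.
        return (a[0], a[1] + min_off)
    target = min(max(d, min_off), max_off)  # clamp the offset distance
    s = target / d
    return (a[0] + dx * s, a[1] + dy * s)

print(clamp_offset((0.0, 0.0), (6.0, 8.0), 1.0, 5.0))  # (3.0, 4.0)
```

A pending position that already satisfies both bounds is returned unchanged, matching the "determine O as the target position" branch above.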
  • in this way, it is possible to prevent the virtual following object from lying on the line connecting the own character and the first locked character, which would cause the first locked character to be blocked by the own character in the picture frame captured by the virtual camera; this further improves the rationality of the virtual camera's mirror movement.
  • the above conditions further include: the to-be-determined target position of the virtual following object is within the angled area behind the own character. If the undetermined target position of the virtual following object is outside the angled area behind the own character, adjust the undetermined target position of the virtual following object based on the angled area behind the own character to obtain the target position of the virtual following object; Wherein, the target position of the virtual following object is located within the angled area behind the own character.
  • the included angle area behind the own character refers to the included angle area which takes the straight line passing through the target position of the own character and the target position of the first locked object as the central axis, and faces in the opposite direction of the first locked object.
  • the size of the rear angle area is not limited, for example, it can be 90 degrees, 120 degrees, 150 degrees, 180 degrees, etc., which can be set according to actual needs.
  • FIG. 4 exemplarily shows a schematic diagram of the rear angle area.
  • the target position of the own character 32 is represented by point A in FIG. 4, the target position of the first locked character 33 is represented by point B in FIG. 4, and the angled area behind the own character 32 is represented by the angle θ.
  • if the undetermined target position O of the virtual following object 31 is located outside the angle θ, point O is moved to the side of the angle θ to obtain the target position of the virtual following object 31; if the undetermined target position O is located within the angle θ, point O is determined as the target position of the virtual following object 31.
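The rear angled area test can be sketched as a cone check around the direction from B to A (hypothetical helper, 2-D; the angle value in the example is one of the sizes the text mentions):

```python
import math

def in_rear_angle(a, b, o, angle_deg):
    """True if O lies inside the angled area of angle_deg behind A,
    i.e. within the cone centred on the direction pointing away from B."""
    rear = (a[0] - b[0], a[1] - b[1])   # away from the first locked character
    ao = (o[0] - a[0], o[1] - a[1])     # from A toward the pending position O
    norm = math.hypot(*rear) * math.hypot(*ao)
    if norm == 0.0:
        return False
    cos_to_axis = (rear[0] * ao[0] + rear[1] * ao[1]) / norm
    return cos_to_axis >= math.cos(math.radians(angle_deg / 2.0))

# A at the origin, B to the right: the rear area opens to the left.
print(in_rear_angle((0.0, 0.0), (1.0, 0.0), (-1.0, 0.0), 90.0))  # True
print(in_rear_angle((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), 90.0))   # False
```

If the test fails, the pending position would be projected onto the nearest side of the cone, which this sketch does not implement.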
  • FIG. 5 exemplarily shows the picture obtained by the virtual camera shooting the three-dimensional virtual environment with the virtual following object as the visual focus, after the target position of the virtual following object satisfying the conditions is determined by the above method. It can be seen from FIG. 5 that, on the one hand, both the own character 32 and the first locked character 33 are in the picture, and the own character 32 does not block the first locked character 33; on the other hand, the own character 32 is closer to the virtual camera than the first locked character 33, and the own character 32 appears larger than the first locked character 33 in the picture, so that the user can distinguish the two characters more intuitively.
  • Step 230 according to the target position of the virtual following object, determine the target position and target orientation of the virtual camera; wherein the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character.
  • the target position and target orientation of the virtual camera can be determined according to the target position of the virtual following object.
  • the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character, so that within the virtual camera's field of view the virtual following object is closer to the camera than the first locked character. This prevents the first locked character from blocking the virtual camera's view and further improves the rationality of the camera movement.
  • step 230 includes several sub-steps as follows:
  • the rotating track in the embodiment of the present application refers to the moving track of the virtual camera.
  • the virtual camera can automatically move along with the virtual object on the rotating track.
  • the rotating track can be a circle, an ellipse, etc., which is not limited in this embodiment of the present application.
  • the target position of the virtual following object 31 is represented by point O
  • the target position of the own character 32 is represented by point A; point A is located in the reference plane of the three-dimensional virtual environment.
  • the plane of the rotating track 35 where the virtual camera 34 is located is parallel to the reference plane of the three-dimensional virtual environment, and the central axis 36 of the rotating track 35 passes through the target position of the virtual following object 31 (namely point O).
  • the reference plane of the three-dimensional virtual environment can be the horizontal plane (such as the ground plane) of the three-dimensional virtual environment. The virtual objects in the three-dimensional virtual environment are above the reference plane, and the plane of the rotating track 35 of the virtual camera 34 is also above the reference plane, so that the content in the three-dimensional virtual environment can be photographed from a slightly overlooking perspective.
  • the target position of the virtual camera refers to the position the virtual camera theoretically wants or expects to reach
  • the target orientation of the virtual camera refers to the orientation the virtual camera theoretically wants or expects to have.
  • the single-frame target position of the virtual camera, used below, refers to the position the virtual camera actually attempts to reach in a given frame; it is used to make the virtual camera transition gradually from its current position to the target position.
  • the single-frame target orientation of the virtual camera refers to the orientation the virtual camera actually attempts to have in a given frame; it is used to make the virtual camera transition gradually from its current orientation to the target orientation.
  • the projection point of the target position of the virtual camera in the reference plane of the three-dimensional virtual environment, the target position of the first locked character, and the target position of the virtual following object are located on one straight line, with the target position of the virtual following object lying between the projection point and the target position of the first locked character.
  • the target position of the virtual following object 31 is represented by point O
  • the target position of self-object 32 is represented by point A
  • the target position of the first locking character 33 is represented by point B.
  • a point K is determined, and the projection point of the point K on the reference plane of the three-dimensional virtual environment is marked as point K', the point K' is on the straight line OB, and the point O is located between the point K' and the point B.
  • This point K is determined as the target position of the virtual camera 34
  • the direction of the ray KO is determined as the target direction of the virtual camera 34 .
  • the projection point K' of the above-mentioned point K in the reference plane refers to a straight line passing through the point K and perpendicular to the reference plane, and the intersection point of the line and the reference plane is the projection point K'.
  • the target position of the virtual camera is determined from the corresponding rotating track of the virtual camera, so that the target position of the virtual camera is more reasonable, thereby further improving the rationality of the virtual camera's movement.
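The positional relationship above (point K projects to K' on line OB, with O between K' and B, and the camera looking along ray KO) can be sketched as follows. The function name, the back-distance and height parameters, and the 2D-plus-height coordinate convention are illustrative assumptions:

```python
import math

def camera_target(follow_o, locked_b, back_dist=6.0, height=4.0):
    """Derive the camera's target position K and target orientation (ray KO).

    follow_o, locked_b: (x, y) positions in the reference plane.
    K projects to K' on line OB, with O between K' and B.
    """
    # Unit vector from B toward O; K' lies beyond O on the far side from B.
    dx, dy = follow_o[0] - locked_b[0], follow_o[1] - locked_b[1]
    d = math.hypot(dx, dy)
    ux, uy = (dx / d, dy / d) if d else (0.0, -1.0)  # arbitrary fallback
    kx, ky = follow_o[0] + ux * back_dist, follow_o[1] + uy * back_dist
    k = (kx, ky, height)                                   # target position K
    look = (follow_o[0] - kx, follow_o[1] - ky, -height)   # direction of ray KO
    norm = math.sqrt(sum(c * c for c in look))
    return k, tuple(c / norm for c in look)
```

The camera sits behind the follow object relative to the locked character and looks down toward the follow object, matching the overlooking perspective described for FIG. 6.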
  • Step 240 according to the target position and target orientation of the virtual camera, and the actual position and actual orientation of the virtual camera in the first frame, interpolate to obtain a single-frame target position and a single-frame target orientation of the virtual camera in the second frame.
  • after the target position of the virtual camera is determined, combined with the actual position of the virtual camera in the first picture frame, the single-frame target position of the virtual camera in the second picture frame can be obtained through a first interpolation algorithm.
  • the goal of the first interpolation algorithm is to make the position of the virtual camera gradually (or smoothly) approach the target position of the virtual camera.
  • the single-frame target orientation of the virtual camera in the second picture frame can be obtained through a second interpolation algorithm.
  • the goal of the second interpolation algorithm is to make the orientation of the virtual camera gradually (or smoothly) approach the target orientation of the virtual camera.
  • the process of determining the single-frame target position of the virtual camera in the second picture frame is as follows: determine a first interpolation coefficient according to a first distance, where the first distance is the distance between the first locked character and the own character, and the first interpolation coefficient determines the adjustment amount of the virtual camera's position; then, according to the target position of the virtual camera, the actual position of the virtual camera in the first picture frame, and the first interpolation coefficient, determine the single-frame target position of the virtual camera in the second picture frame.
  • the first interpolation coefficient is positively correlated with the first distance.
  • FIG. 8 shows a relationship curve 81 between the first distance and the first interpolation coefficient.
  • the first interpolation coefficient can be determined according to the first distance.
  • the first interpolation coefficient may be a value between [0,1].
  • calculate the distance between the target position of the virtual camera and the actual position of the virtual camera in the first picture frame, multiply that distance by the first interpolation coefficient to obtain a position adjustment amount, and then translate the actual position of the virtual camera in the first picture frame toward the target position by the position adjustment amount, obtaining the single-frame target position of the virtual camera in the second picture frame.
  • with the interpolation coefficient determined by the above method, the displacement of the virtual camera changes in proportion to the first distance: when the first distance is small, the displacement of the virtual camera is also small, which ensures as far as possible that the own character and the locked character do not leave the field of view and that the picture content changes smoothly.
  • the process of determining the single-frame target orientation of the virtual camera in the second picture frame is as follows: determine a second interpolation coefficient according to a second distance, where the second distance is the distance between the first locked character and the central axis of the picture, and the second interpolation coefficient determines the adjustment amount of the virtual camera's orientation; then, according to the target orientation of the virtual camera, the actual orientation of the virtual camera in the first picture frame, and the second interpolation coefficient, determine the single-frame target orientation of the virtual camera in the second picture frame.
  • the second interpolation coefficient is positively correlated with the second distance.
  • FIG. 9 shows a schematic diagram of the relationship between the second distance and the second interpolation coefficient.
  • the own character is represented by 32
  • the first locked character is represented by 33
  • the central axis of the screen is represented by 91 .
  • the second interpolation coefficient may be a value between [0,1].
  • when the distance between the first locked character 33 and the central axis 91 of the screen is smaller, the second interpolation coefficient is closer to 0; when that distance is larger, the second interpolation coefficient is closer to 1.
  • the angle β between the target orientation of the virtual camera 34 and the actual orientation of the virtual camera 34 in the first picture frame is calculated, and the angle β is multiplied by the second interpolation coefficient to obtain an orientation adjustment amount; the actual orientation is then deflected toward the target orientation by the orientation adjustment amount, obtaining the single-frame target orientation of the virtual camera 34 in the second picture frame.
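The orientation interpolation can be sketched for a single yaw angle as follows. The function names, the half-width parameter, and the yaw-only simplification are illustrative assumptions:

```python
import math

def second_coeff(dist_to_axis, half_width=8.0):
    """Map the locked character's distance from the screen's central axis
    to a coefficient in [0, 1]; farther from the axis turns the camera faster."""
    return max(0.0, min(1.0, dist_to_axis / half_width))

def single_frame_yaw(actual_yaw, target_yaw, coeff):
    """Deflect the actual orientation toward the target orientation by
    coeff times the angle beta between them (radians, wrap-aware)."""
    beta = (target_yaw - actual_yaw + math.pi) % (2 * math.pi) - math.pi
    return actual_yaw + beta * coeff
```

Normalizing beta into (-pi, pi] makes the camera always turn through the smaller of the two possible arcs toward the target orientation.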
  • Step 250 Generate and display a second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
  • after the client determines the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame, it can place the virtual camera according to that position and orientation, shoot the three-dimensional virtual environment with the virtual following object as the visual focus to obtain the second picture frame, and then display the second picture frame.
  • the second picture frame may be a next picture frame of the first picture frame, and the second picture frame may be displayed after the first picture frame is displayed.
  • the first picture frame is the picture frame at the current moment
  • the second picture frame is the picture frame at the next moment of the current moment
  • the above single-frame target position is the actual position of the virtual camera at the next moment
  • the above single-frame target orientation is the actual orientation of the virtual camera at the next moment.
  • the embodiment of the present application takes the switching process from the first picture frame to the second picture frame as an example to introduce the picture switching process in the role-locked state. It should be understood that, in the role-locked state, the switching process between any two picture frames can be realized according to the switching process from the first picture frame to the second picture frame introduced above.
  • by using the virtual following object in the three-dimensional virtual environment as the visual focus of the virtual camera, in the role-locked state the position information of the virtual following object can be determined based on the position information of the own character and the locked character, and the position and orientation of the virtual camera are then updated based on the position information of the virtual following object. Because the position information of both the own character and the locked character is taken into account when determining the position information of the virtual following object, the locked character is prevented from being blocked by the own character, and the determined position information of the virtual following object is more reasonable and accurate. This ensures that, in the picture taken by the virtual camera with the virtual following object as the visual focus, the own character and the locked character are presented to the user in a more reasonable and clear manner, which improves the rationality of the virtual camera's movement in the role-locked state and thereby improves the display effect of the picture.
  • the virtual following object is closer to the virtual camera than the first locked character, which prevents the first locked character from blocking the view of the virtual camera and further improves the rationality of the virtual camera's movement in the role-locked state.
  • this embodiment of the present application may also support switching locked roles in a locked role state.
  • the process may include the following steps (1110-1130):
  • Step 1110 in the character locked state, in response to the field of view adjustment operation on the own character, control the virtual camera to rotate around the virtual following object.
  • by performing the field of view adjustment operation for the own character, the user can control the virtual camera to rotate around the virtual following object on the virtual camera's rotating track, so as to switch the locked character.
  • the field of view adjustment operation is used to adjust the viewing angle of the virtual camera, for example, the rotation direction and rotation speed of the virtual camera can be determined according to the field of view adjustment operation.
  • taking the field of view adjustment operation as a sliding operation on the screen as an example, the rotation direction of the virtual camera can be determined according to the direction of the sliding operation, and the rotation speed of the virtual camera can be determined according to the sliding speed or sliding distance of the sliding operation.
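The mapping from a sliding operation to the camera's rotation can be sketched as follows. The function name and the two gain constants are illustrative assumptions; the embodiment only requires that direction follows the slide direction and speed follows the slide distance or speed:

```python
def slide_to_rotation(slide_dx, px_per_rad=300.0, speed_gain=0.01):
    """Map a horizontal slide (in pixels, signed) to the camera's rotation
    on its orbit: sign gives the direction, magnitude gives the speed."""
    direction = 1 if slide_dx >= 0 else -1       # orbit direction
    speed = abs(slide_dx) * speed_gain           # e.g. radians per frame
    angle = slide_dx / px_per_rad                # total deflection
    return direction, speed, angle
```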
  • in response to the field of view adjustment operation for the own character, the client switches from the role-locked state to the non-role-locked state, and in the non-role-locked state controls the virtual camera to rotate around the virtual following object.
  • the character-locked state and the non-role-locked state have corresponding virtual cameras respectively.
  • the virtual camera used in the role-locked state is called the first virtual camera
  • the virtual camera used in the non-role-locked state is called the second virtual camera.
  • the first virtual camera is in the working state
  • the second virtual camera is in the non-working state.
  • in the role-locked state, the client can update the position and orientation of the first virtual camera according to the method flow introduced in the embodiment of FIG. 2 above.
  • the client switches from the role-locked state to the non-role-locked state, controls the currently used virtual camera to switch from the first virtual camera to the second virtual camera, and controls the rotation of the second virtual camera around the virtual following object according to the field of view adjustment operation.
  • the rotating tracks of the first virtual camera and the second virtual camera have the same size and the same position relative to the reference plane, so as to ensure seamless switching between them: the user does not perceive the camera switching process, which improves virtual camera switching efficiency and user experience.
  • Step 1120 during the rotation process, determine the pre-locked character in the three-dimensional virtual environment, and display a third frame, in which the pre-locked character and the pre-locked mark corresponding to the pre-locked character are displayed.
  • the client determines the pre-locked character in the 3D virtual environment according to the position of each virtual character in the 3D virtual environment, as well as information such as the position and orientation of the virtual camera. For example, the visual focus of the virtual camera (that is, the virtual following object) is determined based on the position and orientation of the virtual camera, and the virtual object closest to the visual focus is determined as the pre-locked character.
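The pre-lock selection in the example above (nearest virtual object to the visual focus) can be sketched as follows; the function name and the dictionary-based character records are illustrative assumptions:

```python
import math

def pick_prelocked(focus, characters):
    """Choose the pre-locked character: the virtual character whose position
    is nearest to the camera's visual focus (the virtual following object)."""
    def dist(c):
        return math.hypot(c["pos"][0] - focus[0], c["pos"][1] - focus[1])
    return min(characters, key=dist) if characters else None
```

As the camera rotates during the sliding operation, the visual focus moves, so the nearest character, and hence the pre-locked mark, changes accordingly.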
  • the pre-locked character refers to a virtual character that is about to be locked or a virtual character that may be locked.
  • the screen frame displayed by the client will display a pre-locked mark corresponding to the pre-locked character to remind the user which virtual character is currently pre-locked.
  • Step 1130 in response to the lock confirmation operation for the pre-locked character, determine the pre-locked character as the second locked character, and display a fourth picture frame in which the second locked character and a lock mark corresponding to the second locked character are displayed.
  • the lock confirmation operation refers to a trigger operation performed by a user to determine a pre-locked role as a locked role. Still taking the above view adjustment operation as an example where the user's finger slides on the screen, if the user's finger leaves the screen and the slide operation ends, the end of the slide operation is determined as a lock confirmation operation.
  • the client may determine the corresponding pre-locked role at the end of the sliding operation as the second locked role.
  • after the client determines the pre-locked character as the second locked character, it also switches from the non-role-locked state (or pre-locked state) to the role-locked state.
  • in the role-locked state, the position and orientation of the virtual camera are updated according to the flow of FIG. 2 above.
  • when the client switches from the non-role-locked state (or pre-locked state) to the role-locked state, it also controls the currently used virtual camera to switch from the second virtual camera back to the first virtual camera, and then updates the position and orientation of the first virtual camera according to the flow of FIG. 2 above.
  • the lock mark is a mark for distinguishing a locked character from other unlocked characters.
  • the locked flag may be different from the pre-locked flag, so that the user can distinguish whether the virtual character is a pre-locked character or a locked character based on different flags.
  • the own character 32 locks the first locked character 33, and a lock mark 41 corresponding to the first locked character 33 is displayed in the picture frame.
  • the user can trigger the adjustment of the field of view of the own character 32 by performing a sliding operation on the screen.
  • the client controls the virtual camera to rotate around the virtual following object according to the direction and displacement of the sliding operation.
  • the client will predict the pre-locked character in the 3D virtual environment.
  • the client displays the pre-locked mark 42 corresponding to the pre-locked character 38 in the picture frame, and based on the pre-locked mark 42 the user can know which virtual character is currently pre-locked. If the current pre-locked character 38 meets the user's expectation, the user can stop the sliding operation, for example by lifting the finger from the screen. At this point, the client determines the pre-locked character 38 as the second locked character and displays it in the picture frame.
  • the lock mark 41 corresponding to the second locked character is shown in part (c) of FIG. 12.
  • the field of view adjustment of the own character is also supported, so as to realize the switching of the locked character.
  • the pre-locked character is automatically predicted by the client, and the pre-locked mark corresponding to the pre-locked character is displayed, so the user can intuitively and clearly see which virtual character is currently in the pre-locked state, which makes it convenient for the user to switch the locked character accurately and efficiently.
  • the updating process of the virtual camera may include the following steps (1310-1350):
  • Step 1310 in the non-role-locked state, using the own character as the following target, update the position of the virtual following object by interpolation, and obtain the single-frame target position of the virtual following object in the fifth picture frame.
  • the visual focus of the virtual camera is still the virtual follower object.
  • in the non-role-locked state, the position update of the virtual following object only needs to consider the position change of the own character, without considering the position change of any locked character.
  • the single-frame target position of the virtual following object is determined by a third interpolation algorithm, and the goal of the third interpolation algorithm is to allow the virtual following object to smoothly follow its own character.
  • a third interpolation coefficient is determined according to a third distance, where the third distance is the distance between the own character and the virtual following object, and the third interpolation coefficient determines the adjustment amount of the virtual following object's position; the third interpolation coefficient is positively correlated with the third distance. Then, according to the actual position of the own character in the first picture frame, the actual position of the virtual following object in the first picture frame, and the third interpolation coefficient, the single-frame target position of the virtual following object in the fifth picture frame is determined.
  • the third interpolation coefficient may also be a value in [0, 1]. The distance between the actual position of the own character in the first picture frame and the actual position of the virtual following object in the first picture frame is multiplied by the third interpolation coefficient to obtain a position adjustment amount, and the actual position of the virtual following object in the first picture frame is then translated toward the own character by that amount to obtain the single-frame target position of the virtual following object in the fifth picture frame.
  • the fifth picture frame may be a next picture frame of the first picture frame.
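The smooth-follow update above can be sketched as follows. The function names and the maximum-distance constant are illustrative assumptions; the embodiment only requires the coefficient to be positively correlated with the third distance:

```python
def third_coeff(dist_to_follow, d_max=10.0):
    """Map the distance between the own character and the virtual following
    object to a coefficient in [0, 1]; larger gaps catch up faster."""
    return max(0.0, min(1.0, dist_to_follow / d_max))

def follow_step(follow_pos, own_pos, coeff):
    """Move the follow object toward the own character by coeff times the
    remaining distance (smooth follow in the non-role-locked state)."""
    return tuple(f + (o - f) * coeff for f, o in zip(follow_pos, own_pos))
```

Because the step is proportional to the remaining gap, the follow object closes in on the own character asymptotically rather than snapping, which is what keeps the camera motion smooth.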
  • Step 1320 Determine the single-frame target position of the virtual camera in the fifth picture frame according to the single-frame target position of the virtual following object in the fifth picture frame.
  • the single-frame target position of the virtual camera in the fifth picture frame can be determined according to the established positional relationship between the virtual camera and the virtual following object.
  • the own character 32 is used as the following target, and the position of the virtual following object 31 is updated by interpolation to obtain its single-frame target position; the single-frame target position of the virtual camera 34 is then determined according to the single-frame target position of the virtual following object 31.
  • Step 1330 if the field of view adjustment operation for the own character is not obtained, determine the actual orientation of the virtual camera in the first frame as the single-frame target orientation of the virtual camera in the fifth frame.
  • the client maintains the orientation of the virtual camera in the previous frame.
  • Step 1340 when the field of view adjustment operation for the own character is obtained, adjust the actual orientation of the virtual camera in the first frame according to the field of view adjustment operation, and obtain the single-frame target orientation of the virtual camera in the fifth frame .
  • in the non-role-locked state, if the user performs a field of view adjustment operation for the own character to adjust the viewing direction, the client needs to update the orientation of the virtual camera.
  • the client updates the orientation of the virtual camera according to the field of view adjustment operation. For example, taking the field of view adjustment operation as a sliding operation on the screen, the client can determine the adjustment direction and angle of the virtual camera's orientation based on information such as the direction and displacement of the sliding operation, and then, combined with the orientation in the previous frame, determine the target orientation in the next frame.
  • Step 1350 Generate and display a fifth picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame.
  • after the client determines the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame, it can place the virtual camera according to that position and orientation, shoot the three-dimensional virtual environment with the virtual following object as the visual focus to obtain the fifth picture frame, and then display the fifth picture frame.
  • in the non-role-locked state, the virtual following object is controlled to follow the own character smoothly, and the virtual camera then shoots the picture with the virtual following object as the visual focus. Because the virtual following object follows the own character gradually in the three-dimensional virtual environment, even if the own character undergoes irregular displacement or a large misalignment with other virtual characters, the virtual camera moves smoothly, avoiding violent shaking of the screen content and improving the viewing experience.
  • after starting to update the virtual camera, the client first determines whether it is in the role-locked state. If it is, the client further determines whether the user is performing a field of view adjustment operation. If the user is not performing a field of view adjustment operation, the client determines the target position of the virtual following object according to the target position of the own character and the target position of the first locked character, and then judges whether the offset of that target position relative to the target position of the own character exceeds the maximum offset. If the maximum offset is exceeded, the target position of the virtual following object is adjusted; if it is not exceeded, the position and orientation of the virtual camera are maintained.
  • next, it is judged whether the target position of the virtual following object is outside the angle area behind the own character. If it is outside the rear angle area, the target position of the virtual following object is adjusted; if it is within the rear angle area, the target position and target orientation of the virtual camera are determined according to the target position of the virtual following object. Then, based on the target position and target orientation of the virtual camera and its current actual position and actual orientation, the single-frame target position and single-frame target orientation of the virtual camera are obtained by interpolation. In this way, the update of the virtual camera in the role-locked state is completed.
  • if the user performs a field of view adjustment operation, the client controls the virtual camera to rotate around the virtual following object to determine the pre-locked character. In this way, the update of the virtual camera in the pre-locked state is completed.
  • in the non-role-locked state, the own character is used as the following target, and the position of the virtual following object is updated by interpolation. It is then judged whether the user performs a field of view adjustment operation: if so, the single-frame target orientation is determined according to the field of view adjustment operation; if not, the current actual orientation of the virtual camera is determined as the single-frame target orientation. In this way, the update of the virtual camera in the non-role-locked state is completed.
  • the position and orientation of the virtual camera need to be updated every frame; based on the updated position and orientation, and with the virtual following object as the visual focus, the three-dimensional virtual environment is shot to obtain the picture frame displayed to the user.
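The per-frame branching described above can be summarized as a small dispatch sketch; the state and branch names are illustrative labels, not terms from the embodiment:

```python
def update_camera(state, view_adjusting):
    """Per-frame dispatch mirroring the flow above: which update branch runs
    depends on the lock state and on whether a view adjustment is active."""
    if state == "locked":
        # Role-locked: interpolate toward the camera target unless the user
        # is adjusting the view (which starts the pre-lock orbit).
        return "view_adjust" if view_adjusting else "locked_follow"
    if state == "prelocked":
        return "orbit_follow"    # rotate around the follow object
    return "free_follow"         # non-role-locked smooth follow
```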
  • FIG. 16 shows a flowchart of a screen display method provided by another embodiment of the present application.
  • the execution subject of each step of the method may be the terminal 10 in the solution implementation environment shown in FIG.
  • for introduction and description, the execution subject of each step is taken to be the "client".
  • the method may include the following steps (1610-1620):
  • Step 1610 displaying the first picture frame
  • the first picture frame is a picture obtained by shooting the three-dimensional virtual environment through the virtual camera with the virtual following object in the three-dimensional virtual environment as the visual focus.
  • Step 1620 in response to the movement of at least one of the own character and the first locked character, display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame; wherein the single-frame target position and single-frame target orientation are determined according to the target position and target orientation of the virtual camera, the target position and target orientation of the virtual camera are determined according to the target position of the virtual following object, and the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character.
  • the first locked character refers to the locked target corresponding to the own character in the character locked state.
  • step 1620 may include several sub-steps as follows:
  • obtain, by interpolation, the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame;
  • this embodiment of the present application may also support switching locked roles in a locked role state.
  • the method also includes:
  • the pre-locked character in the three-dimensional virtual environment is determined, and a third picture frame is displayed, and the pre-locked character and the pre-locked mark corresponding to the pre-locked character are displayed in the third picture frame;
  • the pre-locked character is determined as the second locked character, and a fourth picture frame is displayed, in which the second locked character and a lock mark corresponding to the second locked character are displayed.
  • the update process of the virtual camera may include the following steps:
  • the fifth picture frame is displayed; wherein the single-frame target position of the virtual camera in the fifth picture frame is determined according to the single-frame target position of the virtual following object in the fifth picture frame, and the single-frame target orientation of the virtual camera in the fifth picture frame is determined according to the actual orientation of the virtual camera in the first picture frame.
  • by using the virtual following object in the three-dimensional virtual environment as the visual focus of the virtual camera, in the character-locked state, the position information of the virtual following object can be determined based on the position information of the own character and the locked character, and the position and orientation of the virtual camera can then be updated based on the position information of the virtual following object; since the position information of both the own character and the locked character is taken into account when determining the position information of the virtual following object, the locked character is prevented from being blocked by the own character, so that the determined position information of the virtual following object is more reasonable and accurate, thereby ensuring that, in the picture captured by the virtual camera with the virtual following object as the visual focus, the own character and the locked character are presented to the user in a more reasonable and clearer manner, which improves the rationality of the virtual camera's movement in the character-locked state and thus improves the display effect of the picture.
  • the virtual following object is closer to the virtual camera than the first locked character, thereby preventing the first locked character from blocking the view of the virtual camera and further improving the rationality of the virtual camera's movement in the character-locked state.
  • FIG. 17 shows a block diagram of a screen display device provided by an embodiment of the present application.
  • the device has the function of implementing the above method example, and the function may be implemented by hardware or by hardware executing corresponding software.
  • the device may be the terminal described above, or be set in the terminal.
  • the apparatus 1700 may include: a screen display module 1710 , an object position determination module 1720 , a camera position determination module 1730 and a single frame position determination module 1740 .
  • the picture display module 1710 is configured to display a first picture frame, and the first picture frame is a picture obtained by shooting the three-dimensional virtual environment with the virtual following object in the three-dimensional virtual environment as the visual focus through the virtual camera.
  • the object position determination module 1720 is configured to determine the target position of the virtual following object according to the target position of the own character and the target position of the first locked character; wherein the first locked character refers to the locked target corresponding to the own character in the character-locked state.
  • the camera position determination module 1730 is configured to determine the target position and target orientation of the virtual camera according to the target position of the virtual following object; wherein, the distance between the target position of the virtual camera and the target position of the virtual following object The distance is smaller than the distance between the target position of the virtual camera and the target position of the first locked character.
  • the single-frame position determination module 1740 is configured to interpolate, according to the target position and target orientation of the virtual camera and the actual position and actual orientation of the virtual camera in the first picture frame, to obtain the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
  • the picture display module 1710 is further configured to generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
  • the single frame position determining module 1740 is configured to:
  • determine, according to the target position of the virtual following object, the rotation orbit where the virtual camera is located; wherein the plane where the rotation orbit is located is parallel to the reference plane of the three-dimensional virtual environment, and the central axis of the rotation orbit passes through the target position of the virtual following object;
  • the target position and target orientation of the virtual camera are determined on the rotation orbit.
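The rotation-orbit construction above can be illustrated with a small sketch, assuming the orbit is a circle centered on the vertical axis through the follow object's target position; the radius, height, and yaw parameters are hypothetical, not values specified by the embodiment.

```python
import math

# Illustrative sketch: the camera's target position is a point on a circle of
# a given radius around the follow object's position (in a plane parallel to
# the reference plane), and its target orientation faces back toward the focus.

def camera_on_orbit(focus, radius, height, yaw_deg):
    """Return a camera position on the orbit and its yaw toward the focus.

    focus: (x, y, z) target position of the virtual following object.
    radius, height, yaw_deg: assumed camera parameters.
    """
    yaw = math.radians(yaw_deg)
    cam = (focus[0] + radius * math.cos(yaw),
           focus[1] + radius * math.sin(yaw),
           focus[2] + height)
    # Orientation: yaw pointing from the camera back toward the visual focus.
    look_yaw = math.degrees(math.atan2(focus[1] - cam[1], focus[0] - cam[0]))
    return cam, look_yaw
```

Because the orbit's central axis passes through the follow object, rotating `yaw_deg` sweeps the camera around the focus without changing its distance to it.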
  • the single frame position determination module 1740 is further configured to:
  • determine a first interpolation coefficient according to a first distance, where the first distance refers to the distance between the first locked character and the own character, and the first interpolation coefficient is used to determine the adjustment amount of the position of the virtual camera;
  • determine a second interpolation coefficient according to a second distance, where the second distance refers to the distance between the first locked character and the central axis of the screen, and the second interpolation coefficient is used to determine the adjustment amount of the orientation of the virtual camera;
  • determine the single-frame target orientation of the virtual camera in the second picture frame according to the target orientation of the virtual camera, the actual orientation of the virtual camera in the first picture frame, and the second interpolation coefficient.
  • the first interpolation coefficient is positively correlated with the first distance
  • the second interpolation coefficient is positively correlated with the second distance
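The positive correlation between distance and interpolation coefficient described in the two bullets above might look like the following sketch. The linear ramp and the clamping bounds are assumptions; the embodiment only specifies that the coefficient grows with the distance.

```python
# Hypothetical mapping from a distance to an interpolation coefficient: the
# coefficient grows with the distance (positive correlation) and is clamped
# to [c_min, c_max]. All parameter values here are illustrative.

def interp_coeff(distance, d_min, d_max, c_min=0.05, c_max=0.5):
    """Larger distances yield larger coefficients, so the camera catches up
    faster when the locked character is far away or far off-center."""
    if distance <= d_min:
        return c_min
    if distance >= d_max:
        return c_max
    ratio = (distance - d_min) / (d_max - d_min)
    return c_min + (c_max - c_min) * ratio
```

Feeding such a coefficient into the per-frame interpolation makes camera corrections gentle for small displacements while still keeping up with fast-moving characters.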
  • the object location determining module 1720 is configured to:
  • in the character-locked state, with the target position of the own character as the follow target, determine the pending target position of the virtual following object on a target line; wherein the target line is perpendicular to the line connecting the target position of the own character and the target position of the first locked character;
  • the pending target position of the virtual following object is adjusted to obtain the target position of the virtual following object.
  • the condition includes: the offset distance between the pending target position of the virtual following object and the target position of the own character is less than or equal to a maximum offset.
  • the object position determination module 1720 is further configured to: if the offset distance between the pending target position of the virtual following object and the target position of the own character is greater than the maximum offset, adjust the pending target position of the virtual following object with the maximum offset as a reference to obtain the target position of the virtual following object; wherein the offset distance of the target position of the virtual following object relative to the target position of the own character is less than or equal to the maximum offset.
  • the condition includes: the pending target position of the virtual following object is within the angle area behind the own character.
  • the object position determination module 1720 is further configured to: if the pending target position of the virtual following object is outside the angle area behind the own character, adjust the pending target position of the virtual following object with the angle area behind the own character as a reference to obtain the target position of the virtual following object; wherein the target position of the virtual following object is located within the angle area behind the own character.
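The max-offset adjustment described above can be sketched as clamping the pending position onto a circle around the own character's target position (2D, on the reference plane; the names are illustrative, not the claimed implementation):

```python
import math

# Illustrative sketch of the max-offset adjustment: if the pending target
# position of the virtual following object is farther from the own character's
# target position than max_offset, pull it back onto the max-offset boundary.

def clamp_to_max_offset(pending, own, max_offset):
    """pending and own are (x, y) target positions on the reference plane."""
    dx, dy = pending[0] - own[0], pending[1] - own[1]
    dist = math.hypot(dx, dy)
    if dist <= max_offset or dist == 0.0:
        return pending  # condition already satisfied
    scale = max_offset / dist
    return (own[0] + dx * scale, own[1] + dy * scale)
```

The clamp preserves the direction of the offset while capping its magnitude, which is what keeps the own character inside the camera's view.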
  • the camera position determination module 1730 is configured to control the virtual camera to rotate around the virtual following object in response to the field-of-view adjustment operation for the own character in the character-locked state.
  • the picture display module 1710 is configured to determine the pre-locked character in the three-dimensional virtual environment during the rotation process, and display a third picture frame, in which the pre-locked character and the The pre-lock flag corresponding to the pre-lock role.
  • the screen display module 1710 is further configured to determine the pre-locked character as the second locked character in response to the lock confirmation operation for the pre-locked character, and display a fourth picture frame, in which the second locked character and the lock mark corresponding to the second locked character are displayed.
  • the object position determination module 1720 is further configured to, in the non-character-locked state, update the position of the virtual following object in an interpolation manner with the own character as the follow target, to obtain the single-frame target position of the virtual following object in the fifth picture frame.
  • the single-frame position determination module 1740 is further configured to determine the single-frame target position of the virtual camera in the fifth picture frame according to the single-frame target position of the virtual following object in the fifth picture frame; in the case that a field-of-view adjustment operation for the own character is not obtained, determine the actual orientation of the virtual camera in the first picture frame as the single-frame target orientation of the virtual camera in the fifth picture frame; and in the case that a field-of-view adjustment operation for the own character is obtained, adjust the actual orientation of the virtual camera in the first picture frame according to the field-of-view adjustment operation to obtain the single-frame target orientation of the virtual camera in the fifth picture frame.
  • the picture display module 1710 is further configured to generate and display the fifth picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame.
  • the object location determining module 1720 is also configured to:
  • determine a third interpolation coefficient according to a third distance, where the third distance refers to the distance between the own character and the virtual following object, and the third interpolation coefficient is used to determine the adjustment amount of the position of the virtual following object; wherein the third interpolation coefficient is positively correlated with the third distance;
  • determine the single-frame target position of the virtual following object in the fifth picture frame according to the third interpolation coefficient, the actual position of the virtual following object in the first picture frame, and the actual position of the own character in the first picture frame.
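The non-locked follow update described in the bullets above can be sketched as follows, assuming a caller-supplied coefficient function that grows with the third distance (the lambda used in the example is an arbitrary illustration):

```python
import math

# Sketch of the non-locked follow update: each frame the virtual following
# object moves a fraction (the third interpolation coefficient) of the way
# from its actual position toward the own character's actual position.

def follow_update(follow_pos, own_pos, coeff_for):
    """Return the follow object's single-frame target position.

    coeff_for(distance) should be positively correlated with distance, so the
    follow object catches up faster when it lags far behind the own character.
    """
    dist = math.dist(follow_pos, own_pos)
    t = coeff_for(dist)
    return tuple(f + (o - f) * t for f, o in zip(follow_pos, own_pos))

# Example: at distance 10 the assumed coefficient is 0.2, so the follow
# object covers 20% of the gap in this frame.
new_pos = follow_update((0.0, 0.0), (10.0, 0.0), lambda d: min(0.1 * d / 10.0 + 0.1, 1.0))
```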
  • by using the virtual following object in the three-dimensional virtual environment as the visual focus of the virtual camera, in the character-locked state, the position information of the virtual following object can be determined based on the position information of the own character and the locked character, and the position and orientation of the virtual camera can then be updated based on the position information of the virtual following object; since the position information of both the own character and the locked character is taken into account when determining the position information of the virtual following object, the locked character is prevented from being blocked by the own character, so that the determined position information of the virtual following object is more reasonable and accurate, thereby ensuring that, in the picture captured by the virtual camera with the virtual following object as the visual focus, the own character and the locked character are presented to the user in a more reasonable and clearer manner, which improves the rationality of the virtual camera's movement in the character-locked state and thus improves the display effect of the picture.
  • the virtual following object is closer to the virtual camera than the first locked character, thereby preventing the first locked character from blocking the view of the virtual camera and further improving the rationality of the virtual camera's movement in the character-locked state.
  • the device 1700 may include: a screen display module 1710 .
  • the picture display module 1710 is configured to display a first picture frame, and the first picture frame is a picture obtained by shooting the three-dimensional virtual environment with the virtual following object in the three-dimensional virtual environment as the visual focus through the virtual camera.
  • the screen display module 1710 is further configured to, in response to movement of at least one of the own character and the first locked character, display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame; wherein the single-frame target position and the single-frame target orientation are determined according to the target position and target orientation of the virtual camera, the target position and target orientation of the virtual camera are determined according to the target position of the virtual following object, and the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character.
  • the first locked character refers to the locked target corresponding to the own character in the character locked state.
  • the apparatus 1700 may further include: an object position determination module 1720 , a camera position determination module 1730 and a single frame position determination module 1740 .
  • the object position determination module 1720 is configured to determine the target position of the own character and the target position of the first locked character in response to the movement of at least one of the own character and the first locked character;
  • the target position of the virtual following object is determined according to the target position of the own character and the target position of the first locked character.
  • the camera position determination module 1730 is configured to determine the target position and target orientation of the virtual camera according to the target position of the virtual following object.
  • the single-frame position determination module 1740 is configured to interpolate, according to the target position and target orientation of the virtual camera and the actual position and actual orientation of the virtual camera in the first picture frame, to obtain the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
  • the picture display module 1710 is further configured to generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
  • the camera position determination module 1730 is further configured to control the virtual camera to rotate around the virtual following object in response to the field of view adjustment operation for the own character in the character locked state .
  • the picture display module 1710 is further configured to determine the pre-locked character in the three-dimensional virtual environment during the rotation process, and display a third picture frame, in which the pre-locked character and the pre-lock mark corresponding to the pre-locked character are displayed.
  • the screen display module 1710 is further configured to determine the pre-locked character as the second locked character in response to the lock confirmation operation for the pre-locked character, and display a fourth picture frame, in which the second locked character and the lock mark corresponding to the second locked character are displayed.
  • the object position determination module 1720 is further configured to, in the non-character-locked state, update the position of the virtual following object with the own character as the follow target, to obtain the single-frame target position of the virtual following object in the fifth picture frame.
  • the picture display module 1710 is further configured to display the fifth picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the fifth picture frame; wherein the single-frame target position of the virtual camera in the fifth picture frame is determined according to the single-frame target position of the virtual following object in the fifth picture frame, and the single-frame target orientation of the virtual camera in the fifth picture frame is determined according to the actual orientation of the virtual camera in the first picture frame.
  • by using the virtual following object in the three-dimensional virtual environment as the visual focus of the virtual camera, in the character-locked state, the position information of the virtual following object can be determined based on the position information of the own character and the locked character, and the position and orientation of the virtual camera can then be updated based on the position information of the virtual following object; since the position information of both the own character and the locked character is taken into account when determining the position information of the virtual following object, the locked character is prevented from being blocked by the own character, so that the determined position information of the virtual following object is more reasonable and accurate, thereby ensuring that, in the picture captured by the virtual camera with the virtual following object as the visual focus, the own character and the locked character are presented to the user in a more reasonable and clearer manner, which improves the rationality of the virtual camera's movement in the character-locked state and thus improves the display effect of the picture.
  • the virtual following object is closer to the virtual camera than the first locked character, thereby preventing the first locked character from blocking the view of the virtual camera and further improving the rationality of the virtual camera's movement in the character-locked state.
  • the division of the above functional modules is used only as an example for illustration; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • the device and the method embodiment provided by the above embodiment belong to the same idea, and the specific implementation process thereof is detailed in the method embodiment, and will not be repeated here.
  • FIG. 18 shows a structural block diagram of a terminal device 1800 provided by an embodiment of the present application.
  • the terminal device 1800 may be the terminal device 10 in the implementation environment shown in FIG. 1 , and is used to implement the screen display method provided in the foregoing embodiments. Specifically:
  • the terminal device 1800 includes: a processor 1801 and a memory 1802 .
  • the processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 1801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
  • the processor 1801 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1801 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing the content to be displayed on the display screen.
  • the processor 1801 may also include an AI (Artificial Intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • Memory 1802 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1802 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • non-transitory computer-readable storage medium in the memory 1802 is used to store computer programs, and the computer programs are configured to be executed by one or more processors to implement the above screen display method.
  • the terminal device 1800 may optionally further include: a peripheral device interface 1803 and at least one peripheral device.
  • the processor 1801, the memory 1802, and the peripheral device interface 1803 may be connected through buses or signal lines.
  • Each peripheral device can be connected to the peripheral device interface 1803 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1804 , a display screen 1805 , an audio circuit 1806 and a power supply 1807 .
  • the structure shown in FIG. 18 does not constitute a limitation on the terminal device 1800, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • a computer-readable storage medium is also provided, and a computer program is stored in the storage medium; when the computer program is executed by a processor, the above screen display method is implemented.
  • the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random Access Memory), SSD (Solid State Drive), an optical disc, or the like.
  • the random access memory may include ReRAM (Resistive Random Access Memory) and DRAM (Dynamic Random Access Memory).
  • a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium.
  • the processor of the terminal device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal device executes the above screen display method.
  • the information (including but not limited to target device information, target personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.), and signals involved in this application are all authorized by the subjects or fully authorized by all parties, and the collection, use, and processing of the relevant data comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the user accounts, three-dimensional virtual environments, etc. involved in this application are all obtained with sufficient authorization.
  • the "plurality” mentioned herein refers to two or more than two.
  • "And/or" describes the association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may indicate: A exists alone, both A and B exist, and B exists alone.
  • the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
  • the numbering of the steps described herein only exemplarily shows one possible execution sequence of the steps. In some other embodiments, the above steps may not be executed in the order of the numbers; for example, two steps with different numbers may be executed at the same time, or two steps with different numbers may be executed in an order opposite to that shown in the figure, which is not limited in the embodiments of the present application.


Abstract

A screen display method and apparatus, a terminal, a storage medium, and a program product, relating to the field of computer and Internet technologies. The method comprises: displaying a first picture frame (210); determining a target position of a virtual following object according to a target position of an own character and a target position of a first locked character (220); determining a target position and a target orientation of a virtual camera according to the target position of the virtual following object, the first locked character being a locked target corresponding to the own character (230); interpolating, according to the target position and target orientation of the virtual camera and the actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and a single-frame target orientation of the virtual camera in a second picture frame (240); and generating and displaying the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame (250). The present application improves the display effect of the picture in automatic camera-movement scenarios.

Description

Screen display method, apparatus, terminal, storage medium, and program product
This application claims priority to Chinese Patent Application No. 202210003178.6, entitled "Screen display method, apparatus, terminal, storage medium, and program product", filed on January 4, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of computer and Internet technologies, and in particular to a screen display method, apparatus, terminal, storage medium, and program product.
Background
At present, some game applications provide a three-dimensional virtual environment in which users can control virtual characters to perform various operations, thereby providing users with a more realistic game environment.
In the related art, if the user locks onto a target virtual character (hereinafter referred to as the "locked character") in the three-dimensional virtual environment, the game application controls the virtual camera to observe toward the locked character with the virtual character controlled by the user (hereinafter referred to as the "own character") as the visual focus, and presents the picture captured by the virtual camera to the user.
However, this approach easily causes the own character to block the locked character, which degrades the display effect of the picture.
Summary
Embodiments of the present application provide a screen display method, apparatus, terminal, storage medium, and program product, which can improve the rationality of the virtual camera's movement and thereby improve the display effect of the picture. The technical solutions are as follows:
According to one aspect of the embodiments of the present application, a screen display method is provided. The method is executed by a terminal device, and the method includes:
displaying a first picture frame, the first picture frame being a picture obtained by photographing a three-dimensional virtual environment through a virtual camera with a virtual following object in the three-dimensional virtual environment as the visual focus;
determining a target position of the virtual following object according to a target position of an own character and a target position of a first locked character, wherein the first locked character refers to a locked target corresponding to the own character in a character-locked state;
determining a target position and a target orientation of the virtual camera according to the target position of the virtual following object, wherein the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character;
interpolating, according to the target position and target orientation of the virtual camera and the actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and a single-frame target orientation of the virtual camera in a second picture frame; and
generating and displaying the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
According to one aspect of the embodiments of the present application, a screen display method is provided. The method is executed by a terminal device, and the method includes:
displaying a first picture frame, the first picture frame being a picture obtained by photographing a three-dimensional virtual environment through a virtual camera with a virtual following object in the three-dimensional virtual environment as the visual focus; and
in response to movement of at least one of an own character and a first locked character, displaying a second picture frame based on a single-frame target position and a single-frame target orientation of the virtual camera in the second picture frame; wherein the single-frame target position and the single-frame target orientation are determined according to a target position and a target orientation of the virtual camera, the target position and target orientation of the virtual camera are determined according to a target position of the virtual following object, the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character, and the first locked character refers to a locked target corresponding to the own character in the character-locked state.
According to one aspect of the embodiments of the present application, a screen display apparatus is provided. The apparatus includes:
a picture display module, configured to display a first picture frame, the first picture frame being a picture obtained by photographing a three-dimensional virtual environment through a virtual camera with a virtual following object in the three-dimensional virtual environment as the visual focus;
an object position determination module, configured to determine a target position of the virtual following object according to a target position of an own character and a target position of a first locked character, wherein the first locked character refers to a locked target corresponding to the own character in a character-locked state;
a camera position determination module, configured to determine a target position and a target orientation of the virtual camera according to the target position of the virtual following object, wherein the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character; and
a single-frame position determination module, configured to interpolate, according to the target position and target orientation of the virtual camera and the actual position and actual orientation of the virtual camera in the first picture frame, to obtain a single-frame target position and a single-frame target orientation of the virtual camera in a second picture frame;
the picture display module being further configured to generate and display the second picture frame based on the single-frame target position and single-frame target orientation of the virtual camera in the second picture frame.
According to one aspect of the embodiments of the present application, a screen display apparatus is provided. The apparatus includes:
a picture display module, configured to display a first picture frame, the first picture frame being a picture obtained by photographing a three-dimensional virtual environment through a virtual camera with a virtual following object in the three-dimensional virtual environment as the visual focus;
the picture display module being further configured to, in response to movement of at least one of an own character and a first locked character, display a second picture frame based on a single-frame target position and a single-frame target orientation of the virtual camera in the second picture frame; wherein the single-frame target position and the single-frame target orientation are determined according to a target position and a target orientation of the virtual camera, the target position and target orientation of the virtual camera are determined according to the target position of the virtual following object, the distance between the target position of the virtual camera and the target position of the virtual following object is smaller than the distance between the target position of the virtual camera and the target position of the first locked character, and the first locked character refers to a locked target corresponding to the own character in the character-locked state.
According to one aspect of the embodiments of the present application, a terminal device is provided. The terminal device includes a processor and a memory, and a computer program is stored in the memory. The computer program is loaded and executed by the processor to implement the above screen display method.
According to one aspect of the embodiments of the present application, a computer-readable storage medium is provided. A computer program is stored in the readable storage medium, and the computer program is loaded and executed by a processor to implement the above screen display method.
According to one aspect of the embodiments of the present application, a computer program product or computer program is provided. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of a terminal device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, causing the terminal device to perform the above screen display method.
The technical solutions provided in the embodiments of the present application may bring the following beneficial effects:
By using a virtual following object in the three-dimensional virtual environment as the visual focus of the virtual camera, in the character-locked state the position information of the virtual following object is determined based on the position information of the own character and the locked character, and the position and orientation of the virtual camera are then updated based on the position information of the virtual following object. Since the position information of both the own character and the locked character is taken into account when determining the position information of the virtual following object, the locked character is prevented from being blocked by the own character, so that the determined position information of the virtual following object is more reasonable and accurate. This ensures that, in the picture captured by the virtual camera with the virtual following object as the visual focus, the own character and the locked character are presented to the user in a more reasonable and clearer manner, which improves the rationality of the virtual camera's movement in the character-locked state and thereby improves the display effect of the picture.
In addition, by keeping the distance between the target position of the virtual camera and the target position of the virtual following object smaller than the distance between the target position of the virtual camera and the target position of the first locked character, the virtual following object is closer to the virtual camera than the first locked character within the virtual camera's field of view. This prevents the first locked character from blocking the virtual camera's view and further improves the rationality of the virtual camera's movement in the character-locked state.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a solution implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of a screen display method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of determining the pending target position of a virtual following object provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the angle area behind an own character provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a picture captured with a virtual following object as the visual focus provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of the rotation orbit where a virtual camera is located provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of determining the target position and target orientation of a virtual camera provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the relationship between the first distance and the first interpolation coefficient provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of the relationship between the second distance and the second interpolation coefficient provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of determining the single-frame target orientation of a virtual camera provided by an embodiment of the present application;
FIG. 11 is a flowchart of switching locked characters in the character-locked state provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of determining and marking a pre-locked character provided by an embodiment of the present application;
FIG. 13 is a flowchart of the update process of a virtual camera in the non-character-locked state provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of the update process of a virtual camera in the non-character-locked state provided by an embodiment of the present application;
FIG. 15 is a flowchart of the update process of a virtual camera provided by an embodiment of the present application;
FIG. 16 is a flowchart of a screen display method provided by another embodiment of the present application;
FIG. 17 is a block diagram of a screen display apparatus provided by an embodiment of the present application;
FIG. 18 is a structural block diagram of a terminal device provided by an embodiment of the present application.
Detailed Description
Before the embodiments of the present application are introduced, the relevant terms involved in the present application are first explained.
1. Virtual environment
A virtual environment is an environment displayed (or provided) when a client of an application (such as a game application) runs on a terminal device (also referred to as a terminal). The virtual environment refers to an environment created for virtual objects to perform activities (such as game competition and task execution); for example, the virtual environment may be a virtual house, a virtual island, a virtual map, etc. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. In the embodiments of the present application, the virtual environment is three-dimensional, that is, a space formed by the three dimensions of length, width, and height, and therefore it may be referred to as a "three-dimensional virtual environment".
2. Virtual character
A virtual character refers to a character controlled by a user account in an application. Taking a game application as an example, a virtual character refers to a game character controlled by a user account in the game application. A virtual character may be in human form, or in animal, cartoon, or other forms, which is not limited in the embodiments of the present application. In the embodiments of the present application, the virtual character is likewise three-dimensional, and therefore it may be referred to as a "three-dimensional virtual character".
In different game applications, the operations that a user account can control a virtual character to perform may also differ. For example, in a shooting game application, the user account may control the virtual character to perform operations such as hitting, shooting, throwing virtual items, running, jumping, and casting skills.
Of course, in addition to game applications, other types of applications may also present virtual characters to users and provide corresponding functions for the virtual characters, such as AR (Augmented Reality) applications, social applications, and interactive entertainment applications, which is not limited in the embodiments of the present application. In addition, different applications provide virtual characters of different forms and with different corresponding functions, all of which can be configured in advance according to actual needs and are not limited in the embodiments of the present application.
Please refer to FIG. 1, which shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application. The solution implementation environment may include: a terminal 10 and a server 20.
The terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a game console, a multimedia playback device, a PC (Personal Computer), a vehicle-mounted terminal, or a smart TV. A client of a target application may be installed in the terminal 10. The target application may refer to an application capable of providing a three-dimensional virtual environment, such as a game application, a simulation application, or an entertainment application. Exemplarily, game applications capable of providing a three-dimensional virtual environment include, but are not limited to, applications corresponding to three-dimensional action games (3D Action Games, "3D ACT" for short), three-dimensional shooting games, three-dimensional MOBA (Multiplayer Online Battle Arena) games, and the like.
The server 20 is used to provide background services for the client of the target application in the terminal 10. For example, the server 20 may be a background server of the above target application. The server 20 may be one server, a server cluster composed of multiple servers, or a cloud computing service center.
The terminal 10 and the server 20 may communicate with each other through a network 30. The network 30 may be a wired network or a wireless network.
Please refer to FIG. 2, which shows a flowchart of a screen display method provided by an embodiment of the present application. The execution subject of each step of the method may be the terminal 10 in the solution implementation environment shown in FIG. 1; for example, the execution subject of each step may be the client of the target application installed and running in the terminal 10. In the following method embodiments, for ease of description, each step is described with the client as the execution subject. The method may include the following steps (210-250):
Step 210: display a first picture frame, where the first picture frame is a picture obtained by photographing a three-dimensional virtual environment through a virtual camera with a virtual following object in the three-dimensional virtual environment as the visual focus.
When presenting the content of the three-dimensional virtual environment to the user, the client displays picture frames one by one, and each picture frame is an image obtained by photographing the three-dimensional virtual environment through the virtual camera. Exemplarily, the above first picture frame may refer to an image obtained by the virtual camera photographing the three-dimensional virtual environment at the current moment. The three-dimensional virtual environment includes virtual characters, such as the virtual character controlled by the user (referred to as the "own character" in the embodiments of the present application) and virtual characters controlled by other users or by the system (such as AI (Artificial Intelligence)). Optionally, the three-dimensional virtual environment may also include some other virtual items, such as virtual houses, virtual vehicles, and virtual trees, which is not limited in the embodiments of the present application. In the embodiments of the present application, virtual camera technology may be used to generate picture frames; that is, the client observes the three-dimensional virtual environment from the perspective of the virtual camera and photographs the three-dimensional virtual environment in real time (or at fixed intervals) to obtain picture frames, and the content of a picture frame changes as the position of the virtual camera changes.
In the embodiments of the present application, the virtual camera takes the virtual following object in the three-dimensional virtual environment as the visual focus, and the virtual following object is a non-visible object. For example, the virtual following object is not a virtual character or a virtual item and has no appearance; it can be regarded as a point in the three-dimensional virtual environment. In the three-dimensional virtual environment, the position of the virtual following object changes correspondingly with the position of the own character (and optionally other virtual characters). The virtual camera then follows the movement of the virtual following object (in position and orientation), so as to capture the content around the virtual following object in the three-dimensional virtual environment and present it to the user in picture frames.
Step 220: determine the target position of the virtual following object according to the target position of the own character and the target position of the first locked character, where the first locked character refers to the locked target corresponding to the own character in the character-locked state.
The character-locked state refers to a state in which the own character takes some other virtual character as the locked target, and the other virtual character may be a virtual character controlled by another user or by the system. In the character-locked state, the position and orientation of the virtual camera need to change as the positions of the own character and the locked character change, so that the picture frames captured by the virtual camera contain the own character and the locked character as far as possible, allowing the user to see both the own character and the locked character in the picture frames.
In the embodiments of the present application, since the visual focus of the virtual camera is the virtual following object, the position and orientation of the virtual camera change with the position of the virtual following object, and the position of the virtual following object in turn changes with the positions of the own character and the locked character. The locked character is the locked target corresponding to the own character. In some embodiments, the locked character is displayed with a mark, and the operations of the own character act on the locked character. Optionally, the above first locked character may be any one or more other virtual characters locked by the own character.
In some embodiments, taking the case where the locked target of the own character is the first locked character as an example, the update process of the virtual camera in the character-locked state is introduced. Step 220 may further include the following sub-steps:
1. With the target position of the own character as the follow target, determine the pending target position of the virtual following object on a target line, where the target line is perpendicular to the line connecting the target position of the own character and the target position of the first locked character.
In the character-locked state, on the one hand, the virtual following object still needs to take the own character as the follow target and move as the own character moves; on the other hand, in order to present the currently locked first locked character in the picture frame, the target position of the virtual following object also needs to take into account the target position of the first locked character.
In the embodiments of the present application, a target position can be understood as a planned position, that is, a position to which movement is intended or expected. For example, the target position of the own character refers to the position to which the own character is to move or is expected to move (for example, in the next frame after the first picture frame), and the target position of the first locked character refers to the position to which the first locked character is to move or is expected to move. The target position of the own character may be determined according to the user's control operation on the own character. The target position of the first locked character may be determined according to the control operation of the system or another user on the first locked character.
As shown in FIG. 3, it exemplarily shows a schematic diagram of determining the pending target position of the virtual following object 31. The target position of the own character 32 is represented by point A in FIG. 3, the target position of the first locked character 33 is represented by point B, and the target line CD is perpendicular to the line AB. The pending target position of the virtual following object 31 is determined on the target line CD, as shown by point O in FIG. 3. In FIG. 3, the target line CD is perpendicular to the line AB and passes through point A; that is, the target line is perpendicular to the line connecting the target position (point A) of the own character 32 and the target position (point B) of the first locked character 33, and the target line passes through the target position (point A) of the own character 32. In some other embodiments, the target line CD may also be a line perpendicular to the line AB but not passing through point A.
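The construction in FIG. 3 can be sketched as follows. The offset magnitude along the target line CD is an assumed parameter here; in the embodiment it is constrained by the subsequent conditions rather than fixed.

```python
import math

# Illustrative sketch of FIG. 3: the target line CD is perpendicular to line AB
# (own character A to first locked character B) and passes through A; the
# pending target position O is offset from A along that perpendicular.

def pending_target_position(a, b, offset):
    """a, b: (x, y) target positions of the own and first locked character."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    norm = math.hypot(abx, aby)
    # Unit vector perpendicular to AB: the direction of the target line CD.
    px, py = -aby / norm, abx / norm
    return (a[0] + px * offset, a[1] + py * offset)
```

Placing O off the line AB is what keeps the first locked character from being hidden directly behind the own character in the captured picture.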
2. If the pending target position of the virtual following object satisfies a condition, determine the pending target position of the virtual following object as the target position of the virtual following object.
3. If the pending target position of the virtual following object does not satisfy the condition, adjust the pending target position of the virtual following object to obtain the target position of the virtual following object.
In the embodiments of the present application, after the pending target position of the virtual following object is determined, it is necessary to judge whether the pending target position satisfies the condition; if the condition is satisfied, the pending target position is determined as the target position of the virtual following object. If the condition is not satisfied, the pending target position needs to be adjusted to obtain the target position of the virtual following object, and the adjusted target position satisfies the above condition. The condition is set so that the target position of the virtual following object is at a relatively suitable position: when the virtual camera photographs with the virtual following object as the visual focus, both the own character and the first locked character can be captured in the picture without their positions overlapping, thereby improving the display effect of the picture.
可选地,上述条件包括:虚拟跟随对象的待定目标位置相对于自身角色的目标位置的偏移距离小于或等于最大偏移量。若虚拟跟随对象的待定目标位置相对于自身角色的目标位置的偏移距离大于最大偏移量,则以最大偏移量为基准,对虚拟跟随对象的待定目标位置进行调整,得到虚拟跟随对象的目标位置;其中,虚拟跟随对象的目标位置相对于自身角色的目标位置的偏移距离,小于或等于最大偏移量。可选地,该最大偏移量可以是一个大于0的数值。可选地,该最大偏移量可以是一个固定值,也可以是一个根据虚拟相机的位置动态确定的数值。例如,如图3所示,假设线段CA的长度为最大偏移量,如果线段OA的长度大于线段CA的长度,则将点C确定为虚拟跟随对象31的目标位置;如果线段OA的长度小于或等于线段CA的长度,则将点O确定为虚拟跟随对象31的目标位置。通过上述方式,能够避免虚拟跟随对象与自身对象之间的距离过远,导致虚拟相机拍摄的画面帧中不包含自身对象的情况发生,进一步提高了虚拟相机的运镜合理性。
可选地,上述条件还包括:虚拟跟随对象的待定目标位置相对于自身角色的目标位置的偏移距离大于最小偏移量。若虚拟跟随对象的待定目标位置相对于自身角色的目标位置的偏移距离小于或等于最小偏移量,则以该最小偏移量为基准,对虚拟跟随对象的待定目标位置进行调整,得到虚拟跟随对象的目标位置;其中,虚拟跟随对象的目标位置相对于自身角色的目标位置的偏移距离,大于最小偏移量。可选地,该最小偏移量的取值可以是0,也可以是一个大于0的数值,本申请实施例对此不作限定。另外,最小偏移量小于上文介绍的最大偏移量。可选地,该最小偏移量可以是一个固定值,也可以是一个根据虚拟相机的位置动态确定的数值。例如,如图3所示,如果点O和点A重合,则将点O沿点C的方向移动一定距离后得到虚拟跟随对象31的目标位置;如果点O和点A不重合,则将点O确定为虚拟跟随对象31的目标位置。通过上述方式,能够避免虚拟跟随对象在自身对象和第一锁定对象的连线上,从而导致虚拟相机拍摄的画面帧中第一锁定对象被自身对象遮挡的情况发生,进一步提高了虚拟相机的运镜合理性。
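上述最大偏移量与最小偏移量两项约束,可以合并为如下示意性的Python代码(仅为一种示例写法,eps、回退方向 fallback 等参数均为假设,用于体现"调整后的偏移距离大于最小偏移量、小于或等于最大偏移量"这一约束):

```python
import math

def clamp_offset(o, a, min_off, max_off, eps=1e-6, fallback=(0.0, 1.0)):
    """按最大/最小偏移量对待定目标位置 o 进行调整(示意实现)。
    调整后 o 相对自身角色目标位置 a 的偏移距离落在 (min_off, max_off] 内。"""
    dx, dy = o[0] - a[0], o[1] - a[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        # 待定位置与自身角色重合:沿预设方向移出
        dx, dy, dist = fallback[0], fallback[1], 1.0
    if dist > max_off:
        target = max_off            # 超出最大偏移量,夹到最大偏移量处
    elif dist <= min_off:
        target = min_off + eps      # 不足最小偏移量,移动到略大于最小偏移量处
    else:
        return o                    # 已满足条件,保持不变
    scale = target / dist
    return (a[0] + dx * scale, a[1] + dy * scale)
```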
可选地,上述条件还包括:虚拟跟随对象的待定目标位置位于自身角色的背后夹角区域之内。若虚拟跟随对象的待定目标位置位于自身角色的背后夹角区域之外,则以自身角色的背后夹角区域为基准,对虚拟跟随对象的待定目标位置进行调整,得到虚拟跟随对象的目标位置;其中,虚拟跟随对象的目标位置位于自身角色的背后夹角区域之内。其中,自身角色的背后夹角区域是指以经过自身角色的目标位置和第一锁定对象的目标位置的直线为中轴线,且朝向第一锁定对象的相反方向的夹角区域。在本申请实施例中,对该背后夹角区域的大小不作限定,例如其可以是90度、120度、150度、180度等,这可以根据实际需求进行设定。如图4所示,其示例性示出了一种背后夹角区域的示意图。自身角色32的目标位置在图4中以点A表示,第一锁定角色33的目标位置在图4中以点B表示,自身角色32的背后夹角区域以角α表示。如果虚拟跟随对象31的待定目标位置O位于角α之外,则将该点O移动至角α的边上得到虚拟跟随对象31的目标位置;如果虚拟跟随对象31的待定目标位置O位于角α之内,则将该点O确定为虚拟跟随对象31的目标位置。通过上述方式,能够保证自身角色相比于第一锁定角色更加地靠近虚拟相机,使得用户能够通过近大远小的显示效果,直观地区分自身角色和第一锁定角色。
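上述背后夹角区域约束可以用如下示意性的Python代码表达(仅为根据本文描述给出的一种二维平面示例,角度归一化方式与"移动至夹角边上"的具体实现均为假设):

```python
import math

def clamp_to_back_region(o, a, b, region_deg):
    """将待定目标位置 o 限制在自身角色的背后夹角区域内(示意实现)。
    该区域以 a->b 连线的反方向为中轴线,夹角大小为 region_deg 度。"""
    axis = math.atan2(a[1] - b[1], a[0] - b[0])   # 背向第一锁定角色的中轴方向
    ang = math.atan2(o[1] - a[1], o[0] - a[0])    # O 相对 A 的方向
    half = math.radians(region_deg) / 2.0
    # 将相对夹角归一化到 [-pi, pi) 区间
    rel = (ang - axis + math.pi) % (2 * math.pi) - math.pi
    if abs(rel) <= half:
        return o                                  # 已位于背后夹角区域之内
    rel = max(-half, min(half, rel))              # 夹到区域的边上
    dist = math.hypot(o[0] - a[0], o[1] - a[1])
    return (a[0] + dist * math.cos(axis + rel),
            a[1] + dist * math.sin(axis + rel))
```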
如图5所示,其示例性示出了采用上述方式确定出满足条件的虚拟跟随对象的目标位置之后,采用虚拟相机以该虚拟跟随对象为视觉焦点,对三维虚拟环境进行拍摄得到的画面。从图5中可以看出,一方面,自身角色32和第一锁定角色33都在画面中,且自身角色32并不会遮挡第一锁定角色33,另一方面,自身角色32相比于第一锁定角色33距离虚拟相机更近,自身角色32的尺寸大于第一锁定角色33的尺寸,使得用户可以更加直观地区分这两个角色。
步骤230,根据虚拟跟随对象的目标位置,确定虚拟相机的目标位置和目标朝向;其中,虚拟相机的目标位置与虚拟跟随对象的目标位置之间的距离,小于虚拟相机的目标位置与第一锁定角色的目标位置之间的距离。
在确定出虚拟跟随对象的目标位置之后,便可以根据虚拟跟随对象的目标位置确定出虚拟相机的目标位置和目标朝向。在本申请实施例中,虚拟相机的目标位置与虚拟跟随对象的目标位置之间的距离,小于虚拟相机的目标位置与第一锁定角色的目标位置之间的距离,从而实现在虚拟相机的视野范围内,虚拟跟随对象相比于第一锁定角色更加地靠近虚拟相机,避免了第一锁定对象对虚拟相机的视野遮挡,进一步提高了虚拟相机的运镜合理性。
在一些实施例中,步骤230包括如下几个子步骤:
1.根据虚拟跟随对象的目标位置,确定虚拟相机所在的旋转轨道;其中,旋转轨道所在的平面与三维虚拟环境的参考平面平行,且旋转轨道的中轴线经过虚拟跟随对象的目标位置。
本申请实施例中的旋转轨道是指虚拟相机的移动轨道,虚拟相机可以在该旋转轨道上跟随虚拟对象自动移动,该旋转轨道可以是指圆形、椭圆形等,本申请实施例对此不作限定。如图6所示,虚拟跟随对象31的目标位置以点O表示,自身对象32的目标位置以点A表示,上述虚拟跟随对象31的目标位置(即点O)和自身对象32的目标位置(即点A)位于三维虚拟环境的参考平面中。虚拟相机34所在的旋转轨道35所在的平面与三维虚拟环境的参考平面平行,且旋转轨道35的中轴线36经过虚拟跟随对象31的目标位置(即点O)。其中,三维虚拟环境的参考平面可以是三维虚拟环境的水平面(如地平面),三维虚拟环境中的虚拟对象在该参考平面之上,虚拟相机34所在的旋转轨道35所在的平面也在该参考平面之上,从而可以以一定的俯视视角对三维虚拟环境中的内容进行拍摄。
2.根据虚拟跟随对象的目标位置和第一锁定角色的目标位置,在旋转轨道上确定虚拟相机的目标位置和目标朝向。
虚拟相机的目标位置是指理论上虚拟相机所要或所期望到达的位置,虚拟相机的目标朝向是指理论上虚拟相机所要或所期望的朝向,下文中的虚拟相机的单帧目标位置则是指实际上虚拟相机所要或所期望到达的位置,用于使得虚拟相机从当前位置过渡到目标位置,下文中的虚拟相机的单帧目标朝向则是指实际上虚拟相机所要或所期望的朝向,用于使得虚拟相机从当前朝向过渡到目标朝向。如果第一锁定角色的目标位置和虚拟跟随对象的目标位置是在三维虚拟环境的参考平面中标定的,则虚拟相机的目标位置在三维虚拟环境的参考平面中的投影点,位于第一锁定角色的目标位置和虚拟跟随对象的目标位置所在的直线上,且虚拟跟随对象的目标位置位于上述投影点和第一锁定角色的目标位置之间。
如图7所示,虚拟跟随对象31的目标位置以点O表示,自身对象32的目标位置以点A表示,第一锁定角色33的目标位置以点B表示,在旋转轨道35上,能够唯一确定出一个点K,该点K在三维虚拟环境的参考平面中的投影点记为点K’,该点K’在直线OB上,且点O位于点K’和点B之间。将该点K确定为虚拟相机34的目标位置,将射线KO的朝向确定为虚拟相机34的目标朝向。上述点K在参考平面中的投影点K’,是指作一条经过点K且垂直于参考平面的直线,该直线与参考平面的交点即为投影点K’。基于虚拟跟随对象的目标位置和第一锁定角色的目标位置,从虚拟相机对应的旋转轨道上确定出虚拟相机的目标位置,使得虚拟相机的目标位置更加地合理,从而进一步提高虚拟相机的运镜合理性。
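点K及其朝向的确定过程可以用如下示意性的Python代码表达(仅为一种示例写法,轨道半径、轨道高度等参数均为假设;投影点K'位于直线OB上,且点O位于点K'和点B之间):

```python
import math

def camera_target_on_orbit(o, b, radius, height):
    """在旋转轨道上确定虚拟相机的目标位置 K 与目标朝向(射线 KO 的方向)。
    o、b 为参考平面内的二维坐标,radius 为轨道半径,height 为轨道高度。"""
    dx, dy = o[0] - b[0], o[1] - b[1]
    d = math.hypot(dx, dy)
    ux, uy = dx / d, dy / d                            # 由 B 指向 O 的单位向量
    kx, ky = o[0] + radius * ux, o[1] + radius * uy    # 投影点 K',使 O 位于 K' 和 B 之间
    k = (kx, ky, height)                               # 目标位置 K(带高度)
    # 目标朝向:由 K 指向 O(O 位于参考平面上,高度取 0)
    fx, fy, fz = o[0] - kx, o[1] - ky, 0.0 - height
    norm = math.sqrt(fx * fx + fy * fy + fz * fz)
    return k, (fx / norm, fy / norm, fz / norm)
```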
步骤240,根据虚拟相机的目标位置和目标朝向,以及虚拟相机在第一画面帧中的实际位置和实际朝向,插值得到虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向。
在确定出虚拟相机的目标位置之后,结合虚拟相机在第一画面帧中的实际位置,通过第一插值算法可以得到虚拟相机在第二画面帧中的单帧目标位置。该第一插值算法的目标是让虚拟相机的位置逐渐(或者说平滑地)靠近虚拟相机的目标位置。
类似地,在确定出虚拟相机的目标朝向之后,结合虚拟相机在第一画面帧中的实际朝向,通过第二插值算法可以得到虚拟相机在第二画面帧中的单帧目标朝向。该第二插值算法的目标是让虚拟相机的朝向逐渐(或者说平滑地)靠近虚拟相机的目标朝向。
在一些实施例中,确定虚拟相机在第二画面帧中的单帧目标位置的过程如下:根据第一距离确定第一插值系数,第一距离是指第一锁定角色与自身角色之间的距离,第一插值系数用于确定虚拟相机的位置的调整量;根据虚拟相机的目标位置、虚拟相机在第一画面帧中的实际位置和第一插值系数,确定虚拟相机在第二画面帧中的单帧目标位置。
可选地,第一插值系数与第一距离呈正相关关系。示例性地,如图8所示,其示出了第一距离和第一插值系数之间的关系曲线81。基于该关系曲线81,可以根据第一距离确定出第一插值系数。例如,第一插值系数可以是一个取值在[0,1]之间的数值。可选地,计算虚拟相机的目标位置和虚拟相机在第一画面帧中的实际位置之间的距离,将该距离与第一插值系数相乘,得到位置调整量,然后将虚拟相机在第一画面帧中的实际位置向虚拟相机的目标位置的方向平移上述位置调整量,得到虚拟相机在第二画面帧中的单帧目标位置。通过上述方式确定虚拟相机位置相关的插值系数,在自身角色与锁定角色之间的距离变化幅度较大时,虚拟相机的位移变化也相应较大;在距离变化幅度较小时,虚拟相机的位移变化也相应较小,从而保证自身角色和锁定角色尽可能地都不脱离视野,且画面内容平滑变化。
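上述位置插值过程可以用如下示意性的Python代码表达(仅为一种示例写法,其中系数的上下限 c_min、c_max 以及线性映射方式均为假设,本申请实施例中的关系曲线81可以是任意满足正相关关系的曲线):

```python
def coeff_from_distance(dist, d_min, d_max, c_min=0.1, c_max=0.9):
    """第一插值系数与第一距离正相关的一种示意映射:线性插值并夹取。"""
    if dist <= d_min:
        return c_min
    if dist >= d_max:
        return c_max
    t = (dist - d_min) / (d_max - d_min)
    return c_min + (c_max - c_min) * t

def lerp_camera_position(actual, target, coeff):
    """按第一插值系数 coeff 将虚拟相机由实际位置向目标位置平移:
    位置调整量 = 两位置间距离 × coeff,即逐分量线性插值。"""
    return tuple(a + (t - a) * coeff for a, t in zip(actual, target))
```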
在一些实施例中,确定虚拟相机在第二画面帧中的单帧目标朝向的过程如下:根据第二距离确定第二插值系数,第二距离是指第一锁定角色与画面中轴线之间的距离,第二插值系数用于确定虚拟相机的朝向的调整量;根据虚拟相机的目标朝向、虚拟相机在第一画面帧中的实际朝向和第二插值系数,确定虚拟相机在第二画面帧中的单帧目标朝向。
可选地,第二插值系数与第二距离呈正相关关系。示例性地,如图9所示,其示出了第二距离和第二插值系数之间的关系示意图。在图9中,自身角色以32表示,第一锁定角色以33表示,画面中轴线以91表示。例如,第二插值系数可以是一个取值在[0,1]之间的数值。当第一锁定角色33与画面中轴线91之间的距离越小,第二插值系数越接近于0;当第一锁定角色33与画面中轴线91之间的距离越大,第二插值系数越接近于1。可选地,如图10所示,计算虚拟相机34的目标朝向和虚拟相机34在第一画面帧中的实际朝向之间的夹角θ,将该夹角θ与第二插值系数相乘,得到朝向调整量γ,然后将实际朝向向目标朝向的方向偏转上述朝向调整量γ,得到虚拟相机34在第二画面帧中的单帧目标朝向。通过上述方式确定虚拟相机朝向相关的插值系数,当锁定角色靠近画面中轴线时,朝向改变较小,即便是锁定角色有频繁急促的位移,虚拟相机也不会大幅度晃动;当锁定角色远离画面中轴线时,朝向改变较大,即便锁定角色以很快的速度朝着视野范围外冲刺,虚拟相机也能够及时做出响应,保证锁定角色不脱离视野范围。
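以朝向仅绕竖直轴旋转(偏航角)为例,上述朝向插值可以用如下示意性的Python代码表达(仅为一种一维简化示例,实际三维朝向的插值方式并非本申请实施例所限定):

```python
import math

def lerp_camera_yaw(actual_yaw, target_yaw, coeff):
    """将虚拟相机的实际朝向角(弧度)向目标朝向偏转夹角 θ 的 coeff 倍。
    coeff 即第二插值系数,取值在 [0, 1] 之间。"""
    # 将角差归一化到 [-pi, pi) 区间,保证沿较短方向偏转
    diff = (target_yaw - actual_yaw + math.pi) % (2 * math.pi) - math.pi
    return actual_yaw + diff * coeff
```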
步骤250,基于虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,生成并显示第二画面帧。
客户端在确定出虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向之后,便可控制虚拟相机按照上述单帧目标位置和单帧目标朝向进行放置,并以三维虚拟环境中的虚拟跟随对象为视觉焦点,对三维虚拟环境进行拍摄,得到第二画面帧,然后将该第二画面帧进行显示。
可选地,第二画面帧可以是第一画面帧的下一个画面帧,可以在显示完第一画面帧之后,显示第二画面帧。示例性地,若第一画面帧为当前时刻下的画面帧,则第二画面帧为当前时刻的下一时刻下的画面帧,上述单帧目标位置则是虚拟相机在下一时刻下的真实位置,上述单帧目标朝向则是虚拟相机在下一时刻下的真实朝向。
另外,本申请实施例以第一画面帧到第二画面帧的切换过程为例,对在角色锁定状态下的画面切换过程进行了介绍说明,应当理解的是,在角色锁定状态下任意两个画面帧之间的切换过程,均可按照上文介绍的第一画面帧到第二画面帧的切换过程进行实现。
综上所述,本申请实施例提供的技术方案,通过以三维虚拟环境中的虚拟跟随对象作为虚拟相机的视觉焦点,在角色锁定状态下,基于自身角色和锁定角色的位置信息,确定虚拟跟随对象的位置信息,然后基于该虚拟跟随对象的位置信息,更新虚拟相机的位置和朝向;由于在确定虚拟跟随对象的位置信息时,兼顾了自身角色和锁定角色的位置信息,如此避免了锁定角色被自身角色遮挡,从而使得确定出的虚拟跟随对象的位置信息更加合理准确,进而保证在虚拟相机以该虚拟跟随对象作为视觉焦点拍摄的画面中,自身角色和锁定角色能够以一个更加合理清晰的方式呈现给用户,提高了角色锁定状态下虚拟相机的运镜合理性,进而提升了画面的显示效果。
另外,通过保持虚拟相机的目标位置与虚拟跟随对象的目标位置之间的距离,小于虚拟相机的目标位置与第一锁定角色的目标位置之间的距离,使得在虚拟相机的视野范围内,虚拟跟随对象相比于第一锁定角色更加地靠近虚拟相机,从而避免了第一锁定对象对虚拟相机的视野遮挡,进而进一步提高了角色锁定状态下虚拟相机的运镜合理性。
在一些实施例中,如图11所示,本申请实施例还可以支持在角色锁定状态下,对锁定角色进行切换。该过程可以包括如下几个步骤(1110~1130):
步骤1110,在角色锁定状态下,响应于针对自身角色的视野调整操作,控制虚拟相机绕虚拟跟随对象进行旋转。
在角色锁定状态下,假设当前锁定的是第一锁定角色。用户可以通过执行针对自身角色的视野调整操作,控制虚拟相机绕虚拟跟随对象在虚拟相机的旋转轨道上进行旋转,以切换锁定角色。其中,虚拟相机的旋转轨道可以参见上文实施例中的介绍说明,此处不再赘述。视野调整操作用于调整虚拟相机的观察视角,例如,虚拟相机的旋转方向和旋转速度,可以根据视野调整操作来确定。以视野调整操作是用户手指在屏幕(如非按键区域)中的滑动操作为例,可以根据该滑动操作的方向确定虚拟相机的旋转方向,根据该滑动操作的滑动速度或滑动距离确定虚拟相机的旋转速度。
在一些实施例中,在角色锁定状态下,响应于针对自身角色的视野调整操作,客户端从角色锁定状态切换至非角色锁定状态,在非角色锁定状态下,根据视野调整操作控制虚拟相机绕虚拟跟随对象进行旋转。
可选地,角色锁定状态和非角色锁定状态,具有分别对应的虚拟相机。为了便于描述,将角色锁定状态下使用的虚拟相机称为第一虚拟相机,将非角色锁定状态使用的虚拟相机称为第二虚拟相机。在角色锁定状态下,第一虚拟相机处于工作状态,第二虚拟相机处于非工作状态,客户端可以按照上文图2所述实施例介绍的方法流程,对第一虚拟相机的位置和朝向进行更新。在角色锁定状态下,响应于针对自身角色的视野调整操作,客户端从角色锁定状态切换至非角色锁定状态,并且控制当前使用的虚拟相机从第一虚拟相机切换为第二虚拟相机,根据视野调整操作控制第二虚拟相机绕虚拟跟随对象进行旋转。可选地,第一虚拟相机和第二虚拟相机的旋转轨道的尺寸以及相较于参考平面的位置相同,从而保证第一虚拟相机和第二虚拟相机实现无缝切换,让用户从画面中感受不到相机切换过程,从而提升了虚拟相机的切换效率以及用户体验。
步骤1120,在旋转的过程中,确定三维虚拟环境中的预锁定角色,显示第三画面帧,第三画面帧中显示有预锁定角色以及预锁定角色对应的预锁定标记。
在虚拟相机进行旋转的过程中,第一锁定角色不再被锁定,客户端此时处于非角色锁定状态,此时的非角色锁定状态也可以称为预锁定状态。在预锁定状态下,客户端根据三维虚拟环境中各个虚拟角色的位置,以及虚拟相机的位置和朝向等信息,确定出三维虚拟环境中的预锁定角色。例如,基于虚拟相机的位置和朝向确定虚拟相机的视觉焦点(即虚拟跟随对象),将距离该视觉焦点最近的虚拟角色确定为预锁定角色。其中,预锁定角色是指即将被锁定的虚拟角色或者说有可能被锁定的虚拟角色。与此同时,在预锁定状态下,如果存在预锁定角色,客户端显示的画面帧中会显示与该预锁定角色对应的预锁定标记,以提示用户当前哪个虚拟角色处于被预锁定的状态。
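"将距离视觉焦点最近的虚拟角色确定为预锁定角色"的选取逻辑,可以用如下示意性的Python代码表达(仅为一种示例写法,角色的数据组织方式为假设,实际实现还可以结合朝向、遮挡等更多信息):

```python
def pick_prelock_target(focus, characters):
    """在预锁定状态下,从候选虚拟角色中选出距离视觉焦点最近的一个。
    focus 为视觉焦点(虚拟跟随对象)的 (x, y) 坐标,
    characters 为 {角色标识: (x, y)} 字典;没有候选角色时返回 None。"""
    if not characters:
        return None
    # 比较平方距离即可,无需开方
    return min(characters,
               key=lambda name: (characters[name][0] - focus[0]) ** 2
                              + (characters[name][1] - focus[1]) ** 2)
```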
步骤1130,响应于针对预锁定角色的锁定确认操作,将预锁定角色确定为第二锁定角色,显示第四画面帧,第四画面帧中显示有第二锁定角色以及第二锁定角色对应的锁定标记。
锁定确认操作是指用户所执行的将预锁定角色确定为锁定角色的触发操作。仍然以上述视野调整操作是用户手指在屏幕中的滑动操作为例,如果用户手指离开屏幕,滑动操作结束,则将该滑动操作结束的操作确定为锁定确认操作。客户端可以将滑动操作结束时对应的预锁定角色确定为第二锁定角色。
可选地,客户端在将预锁定角色确定为第二锁定角色之后,还会从非角色锁定状态(或者说预锁定状态)切换至角色锁定状态,在角色锁定状态下,按照上文图2所述实施例介绍的方法流程,对虚拟相机的位置和朝向进行更新。
可选地,如果角色锁定状态和非角色锁定状态,具有分别对应的虚拟相机。那么客户端在从非角色锁定状态(或者说预锁定状态)切换至角色锁定状态的同时,还会控制当前使用的虚拟相机从第二虚拟相机切换为第一虚拟相机,然后按照上文图2所述实施例介绍的方法流程,对第一虚拟相机的位置和朝向进行更新。
另外,锁定标记是用于将锁定角色和其他非锁定角色进行区分的标记。锁定标记可以不同于预锁定标记,从而能够让用户基于不同的标记,区分虚拟角色是预锁定角色还是锁定角色。
示例性地,如图12所示,在图12的(a)部分示出的角色锁定状态下,自身角色32锁定第一锁定角色33,画面帧中显示有第一锁定角色33对应的锁定标记41。此时,用户通过在屏幕上执行滑动操作,可以触发对自身角色32的视野进行调整。在滑动操作的过程中,客户端根据该滑动操作的方向和位移等信息,控制虚拟相机绕虚拟跟随对象进行旋转,在旋转的过程中,客户端会预测三维虚拟环境中的预锁定角色。如图12的(b)部分所示,在确定出预锁定角色38之后,客户端会在画面帧中显示该预锁定角色38对应的预锁定标记42,用户可以基于该预锁定标记42获知当前哪个虚拟角色处于预锁定的状态。如果当前的预锁定角色38符合用户的预期,那么用户可以停止执行滑动操作,如控制手指离开屏幕,此时客户端将预锁定角色38确定为第二锁定角色,并在画面帧中显示与该第二锁定角色对应的锁定标记41,如图12的(c)部分所示。
在本申请实施例中,还通过在角色锁定状态下,支持对自身角色的视野调整,从而实现对锁定角色的切换。另外,在切换过程中,通过客户端自动预测出预锁定角色,并显示与该预锁定角色对应的预锁定标记,使得用户能够直观而又清晰地看到当前哪个虚拟角色处于预锁定的状态,从而方便用户精准而又高效地切换锁定角色。
在一些实施例中,如图13所示,在非角色锁定状态下,虚拟相机的更新过程可以包括如下几个步骤(1310~1350):
步骤1310,在非角色锁定状态下,以自身角色为跟随目标,以插值方式更新虚拟跟随对象的位置,得到虚拟跟随对象在第五画面帧中的单帧目标位置。
在非角色锁定状态下,虚拟相机的视觉焦点仍然是虚拟跟随对象,此时由于不存在锁定角色,因此虚拟跟随对象的位置更新,仅需考虑自身角色的位置变化,而无需考虑锁定角色的位置变化。可选地,在非角色锁定状态下,通过第三插值算法确定虚拟跟随对象的单帧目标位置,该第三插值算法的目标是让虚拟跟随对象平滑地跟随自身角色。
可选地,在非角色锁定状态下,根据第三距离确定第三插值系数,第三距离是指自身角色与虚拟跟随对象之间的距离,第三插值系数用于确定虚拟跟随对象的位置的调整量;其中,第三插值系数和第三距离呈正相关关系;根据自身角色在第一画面帧中的实际位置、虚拟跟随对象在第一画面帧中的实际位置和第三插值系数,确定虚拟跟随对象在第五画面帧中的单帧目标位置。示例性地,第三插值系数也可以是一个取值在[0,1]之间的数值,计算自身角色在第一画面帧中的实际位置和虚拟跟随对象在第一画面帧中的实际位置之间的距离,将该距离与第三插值系数相乘,得到位置调整量,然后将虚拟跟随对象在第一画面帧中的实际位置向自身角色的方向平移上述位置调整量,得到虚拟跟随对象在第五画面帧中的单帧目标位置。第五画面帧可以是第一画面帧的下一个画面帧。通过上述方式,当自身角色离虚拟跟随对象较远时,虚拟跟随对象的跟随速度较快;当自身角色离虚拟跟随对象较近时,虚拟跟随对象的跟随速度较慢。由于虚拟跟随对象在三维虚拟环境中缓慢跟随自身角色,即便自身角色出现不规则位移或是与其他虚拟角色发生大幅度的错位,虚拟相机也会平滑移动,提高了非锁定角色状态下虚拟相机的运镜合理性。
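上述"距离越远跟随越快、距离越近跟随越慢"的平滑跟随逻辑,可以用如下示意性的Python代码表达(仅为一种示例写法,参考距离 d_ref 为假设参数,用于将第三距离映射为取值在[0,1]之间且与距离正相关的第三插值系数):

```python
import math

def follow_step(follow_pos, self_pos, d_ref):
    """非角色锁定状态下,虚拟跟随对象向自身角色平滑跟随一步(示意实现)。
    第三插值系数与二者距离正相关:位置调整量 = 距离 × 系数。"""
    dx, dy = self_pos[0] - follow_pos[0], self_pos[1] - follow_pos[1]
    dist = math.hypot(dx, dy)
    coeff = min(1.0, dist / d_ref)     # 正相关映射并夹取到 [0, 1]
    return (follow_pos[0] + dx * coeff, follow_pos[1] + dy * coeff)
```

例如,参考距离取20时,自身角色距虚拟跟随对象10个单位,则单帧移动一半距离;距离仅1个单位时,单帧只移动0.05个单位,从而实现远快近慢的平滑跟随。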
步骤1320,根据虚拟跟随对象在第五画面帧中的单帧目标位置,确定虚拟相机在第五画面帧中的单帧目标位置。
有了虚拟跟随对象在第五画面帧中的单帧目标位置之后,根据虚拟相机和虚拟跟随对象之间既定的位置关系,便可以确定出虚拟相机在第五画面帧中的单帧目标位置。
示例性地,如图14所示,在非角色锁定状态下,以自身角色32为跟随目标,以插值方式更新虚拟跟随对象31的位置,得到虚拟跟随对象31的单帧目标位置,然后根据虚拟跟随对象31的单帧目标位置,进一步确定出虚拟相机34的单帧目标位置。
步骤1330,在未获取到针对自身角色的视野调整操作的情况下,将虚拟相机在第一画面帧中的实际朝向,确定为虚拟相机在所述第五画面帧中的单帧目标朝向。
在非角色锁定状态下,如果用户没有执行针对自身角色的视野调整操作来调整视野朝向,那么客户端保持虚拟相机在上一帧中的朝向。
步骤1340,在获取到针对自身角色的视野调整操作的情况下,根据视野调整操作对虚拟相机在第一画面帧中的实际朝向进行调整,得到虚拟相机在第五画面帧中的单帧目标朝向。
在非角色锁定状态下,如果用户执行针对自身角色的视野调整操作来调整视野朝向,那么客户端需要更新虚拟相机的朝向。可选地,客户端根据视野调整操作来更新虚拟相机的朝向。例如,以视野调整操作是针对屏幕的滑动操作为例,客户端可以根据该滑动操作的方向和位移等信息,确定虚拟相机的朝向的调整方向和调整角度,然后结合上一帧中的朝向,确定出在下一帧中的目标朝向。
步骤1350,基于虚拟相机在第五画面帧中的单帧目标位置和单帧目标朝向,生成并显示第五画面帧。
客户端在确定出虚拟相机在第五画面帧中的单帧目标位置和单帧目标朝向之后,便可控制虚拟相机按照上述单帧目标位置和单帧目标朝向进行放置,并以三维虚拟环境中的虚拟跟随对象为视觉焦点,对三维虚拟环境进行拍摄,得到第五画面帧,然后将该第五画面帧进行显示。
在本申请实施例中,在非角色锁定状态下,通过控制虚拟跟随对象跟随自身角色平滑移动,再由虚拟相机以虚拟跟随对象为视觉焦点拍摄得到画面,由于虚拟跟随对象在三维虚拟环境中缓慢跟随自身角色,即便自身角色出现不规则位移或是与其他虚拟角色发生大幅度的错位,虚拟相机也会平滑移动,避免画面内容出现剧烈抖动等现象,提升了用户观看体验。
下面,结合图15对本申请技术方案进行概述说明。
如图15所示,在开始更新虚拟相机之后,客户端首先判断是否处于角色锁定状态。如果处于角色锁定状态,则进一步判断在角色锁定状态下,用户是否执行视野调整操作。如果用户未执行视野调整操作,则客户端根据自身角色的目标位置和第一锁定角色的目标位置,确定虚拟跟随对象的目标位置。然后判断虚拟跟随对象的目标位置相对于自身角色的目标位置的偏移距离,是否超出最大偏移量。如果超出最大偏移量,则调整虚拟跟随对象的目标位置;如果未超出最大偏移量,则保持虚拟相机的位置和朝向。进一步地,判断虚拟跟随对象的目标位置是否位于自身角色的背后夹角区域之外。如果位于背后夹角区域之外,则调整虚拟跟随对象的目标位置;如果位于背后夹角区域之内,则根据虚拟跟随对象的目标位置,确定虚拟相机的目标位置和目标朝向。然后,基于该虚拟相机的目标位置和目标朝向,以及虚拟相机当前的实际位置和实际朝向,插值得到虚拟相机的单帧目标位置和单帧目标朝向。这样就在角色锁定状态下,对虚拟相机更新完成。
在角色锁定状态下,如果用户执行视野调整操作,则客户端控制虚拟相机绕虚拟跟随对象进行旋转,确定预锁定角色。这样就在预锁定状态下,对虚拟相机更新完成。
在非角色锁定状态下,以自身角色为跟随目标,以插值方式更新虚拟跟随对象的位置。然后判断用户是否执行视野调整操作。如果执行视野调整操作,则根据视野调整操作确定单帧目标朝向;如果未执行视野调整操作,则将虚拟相机当前的实际朝向确定为单帧目标朝向。这样就在非角色锁定状态下,对虚拟相机更新完成。
在客户端运行过程中,虚拟相机的位置和朝向需要在每一帧都进行更新,然后基于更新后的位置和朝向,并以虚拟跟随对象为视觉焦点,对三维虚拟环境进行拍摄,得到画面帧显示给用户。
请参考图16,其示出了本申请另一个实施例提供的画面显示方法的流程图。该方法各步骤的执行主体可以是图1所示方案实施环境中的终端10,如各步骤的执行主体可以是终端10中安装运行的目标应用程序的客户端。在下文方法实施例中,为了便于描述,以各步骤的执行主体为“客户端”进行介绍说明。该方法可以包括如下几个步骤(1610~1620):
步骤1610,显示第一画面帧,第一画面帧是通过虚拟相机以三维虚拟环境中的虚拟跟随对象为视觉焦点,对三维虚拟环境进行拍摄得到的画面。
步骤1620,响应于自身角色和第一锁定角色中的至少之一的移动,基于虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,显示第二画面帧;其中,单帧目标位置和单帧目标朝向是根据虚拟相机的目标位置和目标朝向确定的,虚拟相机的目标位置和目标朝向是根据虚拟跟随对象的目标位置确定的,且虚拟相机的目标位置与虚拟跟随对象的目标位置之间的距离,小于虚拟相机的目标位置与第一锁定角色的目标位置之间的距离,第一锁定角色是指在角色锁定状态下,自身角色对应的锁定目标。
在角色锁定状态下,由于自身角色和第一锁定角色的位置都有可能发生移动,因此需要随着自身角色和第一锁定角色的位置变化,适应性地调整虚拟相机的位置和朝向,从而尽可能地在虚拟相机拍摄的画面帧中包含自身角色和锁定角色。
在示例性实施例中,步骤1620可以包括如下几个子步骤:
1.响应于自身角色和所述第一锁定角色中的至少之一的移动,确定自身角色的目标位置和第一锁定角色的目标位置;
2.根据自身角色的目标位置和第一锁定角色的目标位置,确定虚拟跟随对象的目标位置;
3.根据虚拟跟随对象的目标位置,确定虚拟相机的目标位置和目标朝向;
4.根据虚拟相机的目标位置和目标朝向,以及虚拟相机在第一画面帧中的实际位置和实际朝向,插值得到虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向;
5.基于虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,生成并显示第二画面帧。
可选地,本申请实施例还可以支持在角色锁定状态下,对锁定角色进行切换。该方法还包括:
在角色锁定状态下,响应于针对自身角色的视野调整操作,控制虚拟相机绕虚拟跟随对象进行旋转;
在旋转的过程中,确定三维虚拟环境中的预锁定角色,显示第三画面帧,该第三画面帧中显示有预锁定角色以及预锁定角色对应的预锁定标记;
响应于针对预锁定角色的锁定确认操作,将预锁定角色确定为第二锁定角色,显示第四画面帧,该第四画面帧中显示有第二锁定角色以及第二锁定角色对应的锁定标记。
可选地,在非角色锁定状态下,虚拟相机的更新过程可以包括如下步骤:
在非角色锁定状态下,以自身角色为跟随目标,更新虚拟跟随对象的位置,得到虚拟跟随对象在第五画面帧中的单帧目标位置;
基于虚拟相机在第五画面帧中的单帧目标位置和单帧目标朝向,显示第五画面帧;其中,虚拟相机在第五画面帧中的单帧目标位置是根据虚拟跟随对象在第五画面帧中的单帧目标位置确定的,虚拟相机在第五画面帧中的单帧目标朝向是根据虚拟相机在第一画面帧中的实际朝向确定的。
对于本实施例中未详细说明的细节,可参见上文其他方法实施例中的介绍说明。
综上所述,本申请实施例提供的技术方案,通过以三维虚拟环境中的虚拟跟随对象作为虚拟相机的视觉焦点,在角色锁定状态下,基于自身角色和锁定角色的位置信息,确定虚拟跟随对象的位置信息,然后基于该虚拟跟随对象的位置信息,更新虚拟相机的位置和朝向;由于在确定虚拟跟随对象的位置信息时,兼顾了自身角色和锁定角色的位置信息,如此避免了锁定角色被自身角色遮挡,从而使得确定出的虚拟跟随对象的位置信息更加合理准确,进而保证在虚拟相机以该虚拟跟随对象作为视觉焦点拍摄的画面中,自身角色和锁定角色能够以一个更加合理清晰的方式呈现给用户,提高了角色锁定状态下虚拟相机的运镜合理性,进而提升了画面的显示效果。
另外,通过保持虚拟相机的目标位置与虚拟跟随对象的目标位置之间的距离,小于虚拟相机的目标位置与第一锁定角色的目标位置之间的距离,使得在虚拟相机的视野范围内,虚拟跟随对象相比于第一锁定角色更加地靠近虚拟相机,从而避免了第一锁定对象对虚拟相机的视野遮挡,进而进一步提高了角色锁定状态下虚拟相机的运镜合理性。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图17,其示出了本申请一个实施例提供的画面显示装置的框图。该装置具有实现上述方法示例的功能,所述功能可以由硬件实现,也可以由硬件执行相应的软件实现。该装置可以是上文介绍的终端,也可以设置在终端中。如图17所示,该装置1700可以包括:画面显示模块1710、对象位置确定模块1720、相机位置确定模块1730和单帧位置确定模块1740。
画面显示模块1710,用于显示第一画面帧,所述第一画面帧是通过虚拟相机以三维虚拟环境中的虚拟跟随对象为视觉焦点,对所述三维虚拟环境进行拍摄得到的画面。
对象位置确定模块1720,用于根据自身角色的目标位置和第一锁定角色的目标位置,确定所述虚拟跟随对象的目标位置;其中,所述第一锁定角色是指在角色锁定状态下,所述自身角色对应的锁定目标。
相机位置确定模块1730,用于根据所述虚拟跟随对象的目标位置,确定所述虚拟相机的目标位置和目标朝向;其中,所述虚拟相机的目标位置与所述虚拟跟随对象的目标位置之间的距离,小于所述虚拟相机的目标位置与所述第一锁定角色的目标位置之间的距离。
单帧位置确定模块1740,用于根据所述虚拟相机的目标位置和目标朝向,以及所述虚拟相机在所述第一画面帧中的实际位置和实际朝向,插值得到所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向。
所述画面显示模块1710,还用于基于所述虚拟相机在所述第二画面帧中的单帧目标位置和单帧目标朝向,生成并显示所述第二画面帧。
在一些实施例中,所述单帧位置确定模块1740,用于:
根据所述虚拟跟随对象的目标位置,确定所述虚拟相机所在的旋转轨道;其中,所述旋转轨道所在的平面与所述三维虚拟环境的参考平面平行,且所述旋转轨道的中轴线经过所述虚拟跟随对象的目标位置;
根据所述虚拟跟随对象的目标位置和所述第一锁定角色的目标位置,在所述旋转轨道上确定所述虚拟相机的目标位置和目标朝向。
在一些实施例中,所述单帧位置确定模块1740,还用于:
根据第一距离确定第一插值系数,所述第一距离是指所述第一锁定角色与所述自身角色之间的距离,所述第一插值系数用于确定所述虚拟相机的位置的调整量;
根据所述虚拟相机的目标位置、所述虚拟相机在所述第一画面帧中的实际位置和所述第一插值系数,确定所述虚拟相机在所述第二画面帧中的单帧目标位置;
根据第二距离确定第二插值系数,所述第二距离是指所述第一锁定角色与画面中轴线之间的距离,所述第二插值系数用于确定所述虚拟相机的朝向的调整量;
根据所述虚拟相机的目标朝向、所述虚拟相机在所述第一画面帧中的实际朝向和所述第二插值系数,确定所述虚拟相机在所述第二画面帧中的单帧目标朝向。
在一些实施例中,所述第一插值系数与所述第一距离呈正相关关系,所述第二插值系数与所述第二距离呈正相关关系。
在一些实施例中,所述对象位置确定模块1720,用于:
在所述角色锁定状态下,以所述自身角色的目标位置为跟随目标,在目标直线上确定所述虚拟跟随对象的待定目标位置;其中,所述目标直线垂直于所述自身角色的目标位置和所述第一锁定角色的目标位置之间的连线;
若所述虚拟跟随对象的待定目标位置满足条件,则将所述虚拟跟随对象的待定目标位置, 确定为所述虚拟跟随对象的目标位置;
若所述虚拟跟随对象的待定目标位置不满足所述条件,则对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置。
可选地,所述条件包括:所述虚拟跟随对象的待定目标位置相对于所述自身角色的目标位置的偏移距离小于或等于最大偏移量。所述对象位置确定模块1720,还用于若所述虚拟跟随对象的待定目标位置相对于所述自身角色的目标位置的偏移距离大于所述最大偏移量,则以所述最大偏移量为基准,对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置;其中,所述虚拟跟随对象的目标位置相对于所述自身角色的目标位置的偏移距离,小于或等于所述最大偏移量。
可选地,所述条件包括:所述虚拟跟随对象的待定目标位置位于所述自身角色的背后夹角区域之内。所述对象位置确定模块1720,还用于若所述虚拟跟随对象的待定目标位置位于所述自身角色的背后夹角区域之外,则以所述自身角色的背后夹角区域为基准,对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置;其中,所述虚拟跟随对象的目标位置位于所述自身角色的背后夹角区域之内。
在一些实施例中,所述相机位置确定模块1730,用于在所述角色锁定状态下,响应于针对所述自身角色的视野调整操作,控制所述虚拟相机绕所述虚拟跟随对象进行旋转。
所述画面显示模块1710,用于在旋转的过程中,确定所述三维虚拟环境中的预锁定角色,显示第三画面帧,所述第三画面帧中显示有所述预锁定角色以及所述预锁定角色对应的预锁定标记。
所述画面显示模块1710,还用于响应于针对所述预锁定角色的锁定确认操作,将所述预锁定角色确定为第二锁定角色,显示第四画面帧,所述第四画面帧中显示有所述第二锁定角色以及所述第二锁定角色对应的锁定标记。
在一些实施例中,所述对象位置确定模块1720,还用于在非角色锁定状态下,以所述自身角色为跟随目标,以插值方式更新所述虚拟跟随对象的位置,得到所述虚拟跟随对象在第五画面帧中的单帧目标位置。
所述单帧位置确定模块1740,还用于根据所述虚拟跟随对象在所述第五画面帧中的单帧目标位置,确定所述虚拟相机在所述第五画面帧中的单帧目标位置;在未获取到针对所述自身角色的视野调整操作的情况下,将所述虚拟相机在所述第一画面帧中的实际朝向,确定为所述虚拟相机在所述第五画面帧中的单帧目标朝向;在获取到针对所述自身角色的视野调整操作的情况下,根据所述视野调整操作对所述虚拟相机在所述第一画面帧中的实际朝向进行调整,得到所述虚拟相机在所述第五画面帧中的单帧目标朝向。
所述画面显示模块1710,还用于基于所述虚拟相机在所述第五画面帧中的单帧目标位置和单帧目标朝向,生成并显示所述第五画面帧。
可选地,所述对象位置确定模块1720,还用于:
在所述非角色锁定状态下,根据第三距离确定第三插值系数,所述第三距离是指所述自身角色与所述虚拟跟随对象之间的距离,所述第三插值系数用于确定所述虚拟跟随对象的位置的调整量;其中,所述第三插值系数和所述第三距离呈正相关关系;
根据所述自身角色在所述第一画面帧中的实际位置、所述虚拟跟随对象在所述第一画面帧中的实际位置和所述第三插值系数,确定所述虚拟跟随对象在所述第五画面帧中的单帧目标位置。
综上所述,本申请实施例提供的技术方案,通过以三维虚拟环境中的虚拟跟随对象作为虚拟相机的视觉焦点,在角色锁定状态下,基于自身角色和锁定角色的位置信息,确定虚拟跟随对象的位置信息,然后基于该虚拟跟随对象的位置信息,更新虚拟相机的位置和朝向;由于在确定虚拟跟随对象的位置信息时,兼顾了自身角色和锁定角色的位置信息,如此避免了锁定角色被自身角色遮挡,从而使得确定出的虚拟跟随对象的位置信息更加合理准确,进而保证在虚拟相机以该虚拟跟随对象作为视觉焦点拍摄的画面中,自身角色和锁定角色能够以一个更加合理清晰的方式呈现给用户,提高了角色锁定状态下虚拟相机的运镜合理性,进而提升了画面的显示效果。
另外,通过保持虚拟相机的目标位置与虚拟跟随对象的目标位置之间的距离,小于虚拟相机的目标位置与第一锁定角色的目标位置之间的距离,使得在虚拟相机的视野范围内,虚拟跟随对象相比于第一锁定角色更加地靠近虚拟相机,从而避免了第一锁定对象对虚拟相机的视野遮挡,进而进一步提高了角色锁定状态下虚拟相机的运镜合理性。
本申请另一示例性实施例还提供了一种画面显示装置,如图17所示,该装置1700可以包括:画面显示模块1710。
所述画面显示模块1710,用于显示第一画面帧,所述第一画面帧是通过虚拟相机以三维虚拟环境中的虚拟跟随对象为视觉焦点,对所述三维虚拟环境进行拍摄得到的画面。
所述画面显示模块1710,还用于响应于自身角色和第一锁定角色中的至少之一的移动,基于所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,显示所述第二画面帧;其中,所述单帧目标位置和所述单帧目标朝向是根据所述虚拟相机的目标位置和目标朝向确定的,所述虚拟相机的目标位置和目标朝向是根据所述虚拟跟随对象的目标位置确定的,且所述虚拟相机的目标位置与所述虚拟跟随对象的目标位置之间的距离,小于所述虚拟相机的目标位置与所述第一锁定角色的目标位置之间的距离,所述第一锁定角色是指在所述角色锁定状态下,所述自身角色对应的锁定目标。
在一些实施例中,如图17所示,该装置1700还可以包括:对象位置确定模块1720、相机位置确定模块1730和单帧位置确定模块1740。
所述对象位置确定模块1720,用于响应于所述自身角色和所述第一锁定角色中的至少之一的移动,确定所述自身角色的目标位置和所述第一锁定角色的目标位置;根据所述自身角色的目标位置和所述第一锁定角色的目标位置,确定所述虚拟跟随对象的目标位置。
所述相机位置确定模块1730,用于根据所述虚拟跟随对象的目标位置,确定所述虚拟相机的目标位置和目标朝向。
所述单帧位置确定模块1740,用于根据所述虚拟相机的目标位置和目标朝向,以及所述虚拟相机在所述第一画面帧中的实际位置和实际朝向,插值得到所述虚拟相机在所述第二画面帧中的单帧目标位置和单帧目标朝向。
所述画面显示模块1710,还用于基于所述虚拟相机在所述第二画面帧中的单帧目标位置和单帧目标朝向,生成并显示所述第二画面帧。
在一些实施例中,所述相机位置确定模块1730,还用于在所述角色锁定状态下,响应于针对所述自身角色的视野调整操作,控制所述虚拟相机绕所述虚拟跟随对象进行旋转。
所述画面显示模块1710,还用于在旋转的过程中,确定所述三维虚拟环境中的预锁定角色,显示第三画面帧,所述第三画面帧中显示有所述预锁定角色以及所述预锁定角色对应的预锁定标记。
所述画面显示模块1710,还用于响应于针对所述预锁定角色的锁定确认操作,将所述预锁定角色确定为第二锁定角色,显示第四画面帧,所述第四画面帧中显示有所述第二锁定角色以及所述第二锁定角色对应的锁定标记。
在一些实施例中,所述对象位置确定模块1720,还用于在非角色锁定状态下,以所述自身角色为跟随目标,更新所述虚拟跟随对象的位置,得到所述虚拟跟随对象在第五画面帧中的单帧目标位置。
所述画面显示模块1710,还用于基于所述虚拟相机在所述第五画面帧中的单帧目标位置和单帧目标朝向,显示所述第五画面帧;其中,所述虚拟相机在所述第五画面帧中的单帧目标位置是根据虚拟跟随对象在所述第五画面帧中的单帧目标位置确定的,所述虚拟相机在所述第五画面帧中的单帧目标朝向是根据所述虚拟相机在所述第一画面帧中的实际朝向确定的。
综上所述,本申请实施例提供的技术方案,通过以三维虚拟环境中的虚拟跟随对象作为虚拟相机的视觉焦点,在角色锁定状态下,基于自身角色和锁定角色的位置信息,确定虚拟跟随对象的位置信息,然后基于该虚拟跟随对象的位置信息,更新虚拟相机的位置和朝向;由于在确定虚拟跟随对象的位置信息时,兼顾了自身角色和锁定角色的位置信息,如此避免了锁定角色被自身角色遮挡,从而使得确定出的虚拟跟随对象的位置信息更加合理准确,进而保证在虚拟相机以该虚拟跟随对象作为视觉焦点拍摄的画面中,自身角色和锁定角色能够以一个更加合理清晰的方式呈现给用户,提高了角色锁定状态下虚拟相机的运镜合理性,进而提升了画面的显示效果。
另外,通过保持虚拟相机的目标位置与虚拟跟随对象的目标位置之间的距离,小于虚拟相机的目标位置与第一锁定角色的目标位置之间的距离,使得在虚拟相机的视野范围内,虚拟跟随对象相比于第一锁定角色更加地靠近虚拟相机,从而避免了第一锁定对象对虚拟相机的视野遮挡,进而进一步提高了角色锁定状态下虚拟相机的运镜合理性。
需要说明的是,上述实施例提供的装置,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
请参考图18,其示出了本申请一个实施例提供的终端设备1800的结构框图。该终端设备1800可以是图1所示实施环境中的终端设备10,用于实施上述实施例中提供的画面显示方法。具体来讲:
通常,终端设备1800包括有:处理器1801和存储器1802。
处理器1801可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1801可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1801也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1801中可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中,处理器1801还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1802可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1802还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1802中的非暂态的计算机可读存储介质用于存储计算机程序,所述计算机程序经配置以由一个或者一个以上处理器执行,以实现上述画面显示方法。
在一些实施例中,终端设备1800还可选包括有:外围设备接口1803和至少一个外围设备。处理器1801、存储器1802和外围设备接口1803之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1803相连。具体地,外围设备包括:射频电路1804、显示屏1805、音频电路1806和电源1807中的至少一种。
本领域技术人员可以理解,图18中示出的结构并不构成对终端设备1800的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在一示例性实施例中,还提供了一种计算机可读存储介质,所述存储介质中存储有计算机程序,所述计算机程序在被处理器执行时实现上述画面显示方法。
可选地,该计算机可读存储介质可以包括:ROM(Read-Only Memory,只读存储器)、RAM(Random Access Memory,随机存取存储器)、SSD(Solid State Drives,固态硬盘)或光盘等。其中,随机存取存储器可以包括ReRAM(Resistance Random Access Memory,电阻式随机存取存储器)和DRAM(Dynamic Random Access Memory,动态随机存取存储器)。
在一示例性实施例中,还提供了一种计算机程序产品或计算机程序,所述计算机程序产品或计算机程序包括计算机指令,所述计算机指令存储在计算机可读存储介质中。终端设备的处理器从所述计算机可读存储介质中读取所述计算机指令,所述处理器执行所述计算机指令,使得所述终端设备执行上述画面显示方法。
需要说明的是,本申请所涉及的信息(包括但不限于对象设备信息、对象个人信息等)、数据(包括但不限于用于分析的数据、存储的数据、展示的数据等)以及信号,均为经对象授权或者经过各方充分授权的,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。例如,本申请中涉及到的用户帐号、三维虚拟环境等都是在充分授权的情况下获取的。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。另外,本文中描述的步骤编号,仅示例性示出了步骤间的一种可能的执行先后顺序,在一些其它实施例中,上述步骤也可以不按照编号顺序来执行,如两个不同编号的步骤同时执行,或者两个不同编号的步骤按照与图示相反的顺序执行,本申请实施例对此不作限定。
以上所述仅为本申请的示例性实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (19)

  1. 一种画面显示方法,所述方法由终端设备执行,所述方法包括:
    显示第一画面帧,所述第一画面帧是通过虚拟相机以三维虚拟环境中的虚拟跟随对象为视觉焦点,对所述三维虚拟环境进行拍摄得到的画面;
    根据自身角色的目标位置和第一锁定角色的目标位置,确定所述虚拟跟随对象的目标位置;其中,所述第一锁定角色是指在角色锁定状态下,所述自身角色对应的锁定目标;
    根据所述虚拟跟随对象的目标位置,确定所述虚拟相机的目标位置和目标朝向;其中,所述虚拟相机的目标位置与所述虚拟跟随对象的目标位置之间的距离,小于所述虚拟相机的目标位置与所述第一锁定角色的目标位置之间的距离;
    根据所述虚拟相机的目标位置和目标朝向,以及所述虚拟相机在所述第一画面帧中的实际位置和实际朝向,插值得到所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向;
    基于所述虚拟相机在所述第二画面帧中的单帧目标位置和单帧目标朝向,生成并显示所述第二画面帧。
  2. 根据权利要求1所述的方法,其中,所述根据所述虚拟跟随对象的目标位置,确定所述虚拟相机的目标位置和目标朝向,包括:
    根据所述虚拟跟随对象的目标位置,确定所述虚拟相机所在的旋转轨道;其中,所述旋转轨道所在的平面与所述三维虚拟环境的参考平面平行,且所述旋转轨道的中轴线经过所述虚拟跟随对象的目标位置;
    根据所述虚拟跟随对象的目标位置和所述第一锁定角色的目标位置,在所述旋转轨道上确定所述虚拟相机的目标位置和目标朝向。
  3. 根据权利要求1所述的方法,其中,所述根据所述虚拟相机的目标位置和目标朝向,以及所述虚拟相机在所述第一画面帧中的实际位置和实际朝向,插值得到所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,包括:
    根据第一距离确定第一插值系数,所述第一距离是指所述第一锁定角色与所述自身角色之间的距离,所述第一插值系数用于确定所述虚拟相机的位置的调整量;
    根据所述虚拟相机的目标位置、所述虚拟相机在所述第一画面帧中的实际位置和所述第一插值系数,确定所述虚拟相机在所述第二画面帧中的单帧目标位置;
    根据第二距离确定第二插值系数,所述第二距离是指所述第一锁定角色与画面中轴线之间的距离,所述第二插值系数用于确定所述虚拟相机的朝向的调整量;
    根据所述虚拟相机的目标朝向、所述虚拟相机在所述第一画面帧中的实际朝向和所述第二插值系数,确定所述虚拟相机在所述第二画面帧中的单帧目标朝向。
  4. 根据权利要求3所述的方法,其中,所述第一插值系数与所述第一距离呈正相关关系,所述第二插值系数与所述第二距离呈正相关关系。
  5. 根据权利要求1所述的方法,其中,所述根据自身角色的目标位置和第一锁定角色的目标位置,确定所述虚拟跟随对象的目标位置,包括:
    以所述自身角色的目标位置为跟随目标,在目标直线上确定所述虚拟跟随对象的待定目标位置;其中,所述目标直线垂直于所述自身角色的目标位置和所述第一锁定角色的目标位置之间的连线;
    若所述虚拟跟随对象的待定目标位置满足条件,则将所述虚拟跟随对象的待定目标位置,确定为所述虚拟跟随对象的目标位置;
    若所述虚拟跟随对象的待定目标位置不满足所述条件,则对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置。
  6. 根据权利要求5所述的方法,其中,所述条件包括:所述虚拟跟随对象的待定目标位置相对于所述自身角色的目标位置的偏移距离小于或等于最大偏移量;
    所述若所述虚拟跟随对象的待定目标位置不满足所述条件,则对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置,包括:
    若所述虚拟跟随对象的待定目标位置相对于所述自身角色的目标位置的偏移距离大于所述最大偏移量,则以所述最大偏移量为基准,对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置;
    其中,所述虚拟跟随对象的目标位置相对于所述自身角色的目标位置的偏移距离,小于或等于所述最大偏移量。
  7. 根据权利要求5所述的方法,其中,所述条件包括:所述虚拟跟随对象的待定目标位置位于所述自身角色的背后夹角区域之内;
    所述若所述虚拟跟随对象的待定目标位置不满足所述条件,则对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置,包括:
    若所述虚拟跟随对象的待定目标位置位于所述自身角色的背后夹角区域之外,则以所述自身角色的背后夹角区域为基准,对所述虚拟跟随对象的待定目标位置进行调整,得到所述虚拟跟随对象的目标位置;
    其中,所述虚拟跟随对象的目标位置位于所述自身角色的背后夹角区域之内。
  8. 根据权利要求1所述的方法,其中,所述方法还包括:
    在所述角色锁定状态下,响应于针对所述自身角色的视野调整操作,控制所述虚拟相机绕所述虚拟跟随对象进行旋转;
    在旋转的过程中,确定所述三维虚拟环境中的预锁定角色,显示第三画面帧,所述第三画面帧中显示有所述预锁定角色以及所述预锁定角色对应的预锁定标记;
    响应于针对所述预锁定角色的锁定确认操作,将所述预锁定角色确定为第二锁定角色,显示第四画面帧,所述第四画面帧中显示有所述第二锁定角色以及所述第二锁定角色对应的锁定标记。
  9. 根据权利要求1所述的方法,其中,所述显示第一画面帧之后,还包括:
    在非角色锁定状态下,以所述自身角色为跟随目标,以插值方式更新所述虚拟跟随对象的位置,得到所述虚拟跟随对象在第五画面帧中的单帧目标位置;
    根据所述虚拟跟随对象在所述第五画面帧中的单帧目标位置,确定所述虚拟相机在所述第五画面帧中的单帧目标位置;
    在未获取到针对所述自身角色的视野调整操作的情况下,将所述虚拟相机在所述第一画面帧中的实际朝向,确定为所述虚拟相机在所述第五画面帧中的单帧目标朝向;
    在获取到针对所述自身角色的视野调整操作的情况下,根据所述视野调整操作对所述虚拟相机在所述第一画面帧中的实际朝向进行调整,得到所述虚拟相机在所述第五画面帧中的单帧目标朝向;
    基于所述虚拟相机在所述第五画面帧中的单帧目标位置和单帧目标朝向,生成并显示所述第五画面帧。
  10. 根据权利要求9所述的方法,其中,所述在非角色锁定状态下,以所述自身角色为跟随目标,以插值方式更新所述虚拟跟随对象的位置,得到所述虚拟跟随对象在第五画面帧中的单帧目标位置,包括:
    在所述非角色锁定状态下,根据第三距离确定第三插值系数,所述第三距离是指所述自身角色与所述虚拟跟随对象之间的距离,所述第三插值系数用于确定所述虚拟跟随对象的位置的调整量;其中,所述第三插值系数和所述第三距离呈正相关关系;
    根据所述自身角色在所述第一画面帧中的实际位置、所述虚拟跟随对象在所述第一画面帧中的实际位置和所述第三插值系数,确定所述虚拟跟随对象在所述第五画面帧中的单帧目标位置。
  11. 一种画面显示方法,所述方法由终端设备执行,所述方法包括:
    显示第一画面帧,所述第一画面帧是通过虚拟相机以三维虚拟环境中的虚拟跟随对象为视觉焦点,对所述三维虚拟环境进行拍摄得到的画面;
    响应于自身角色和第一锁定角色中的至少之一的移动,基于所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,显示所述第二画面帧;其中,所述单帧目标位置和所述单帧目标朝向是根据所述虚拟相机的目标位置和目标朝向确定的,所述虚拟相机的目标位置和目标朝向是根据所述虚拟跟随对象的目标位置确定的,且所述虚拟相机的目标位置与所述虚拟跟随对象的目标位置之间的距离,小于所述虚拟相机的目标位置与所述第一锁定角色的目标位置之间的距离,所述第一锁定角色是指在所述角色锁定状态下,所述自身角色对应的锁定目标。
  12. 根据权利要求11所述的方法,其中,所述响应于自身角色和第一锁定角色中的至少之一的移动,基于所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,显示所述第二画面帧,包括:
    响应于所述自身角色和所述第一锁定角色中的至少之一的移动,确定所述自身角色的目标位置和所述第一锁定角色的目标位置;
    根据所述自身角色的目标位置和所述第一锁定角色的目标位置,确定所述虚拟跟随对象的目标位置;
    根据所述虚拟跟随对象的目标位置,确定所述虚拟相机的目标位置和目标朝向;
    根据所述虚拟相机的目标位置和目标朝向,以及所述虚拟相机在所述第一画面帧中的实际位置和实际朝向,插值得到所述虚拟相机在所述第二画面帧中的单帧目标位置和单帧目标朝向;
    基于所述虚拟相机在所述第二画面帧中的单帧目标位置和单帧目标朝向,生成并显示所述第二画面帧。
  13. 根据权利要求11所述的方法,其中,所述方法还包括:
    在所述角色锁定状态下,响应于针对所述自身角色的视野调整操作,控制所述虚拟相机绕所述虚拟跟随对象进行旋转;
    在旋转的过程中,确定所述三维虚拟环境中的预锁定角色,显示第三画面帧,所述第三画面帧中显示有所述预锁定角色以及所述预锁定角色对应的预锁定标记;
    响应于针对所述预锁定角色的锁定确认操作,将所述预锁定角色确定为第二锁定角色,显示第四画面帧,所述第四画面帧中显示有所述第二锁定角色以及所述第二锁定角色对应的锁定标记。
  14. 根据权利要求11所述的方法,其中,所述显示第一画面帧之后,还包括:
    在非角色锁定状态下,以所述自身角色为跟随目标,更新所述虚拟跟随对象的位置,得到所述虚拟跟随对象在第五画面帧中的单帧目标位置;
    基于所述虚拟相机在所述第五画面帧中的单帧目标位置和单帧目标朝向,显示所述第五画面帧;其中,所述虚拟相机在所述第五画面帧中的单帧目标位置是根据虚拟跟随对象在所述第五画面帧中的单帧目标位置确定的,所述虚拟相机在所述第五画面帧中的单帧目标朝向是根据所述虚拟相机在所述第一画面帧中的实际朝向确定的。
  15. 一种画面显示装置,所述装置包括:
    画面显示模块,用于显示第一画面帧,所述第一画面帧是通过虚拟相机以三维虚拟环境中的虚拟跟随对象为视觉焦点,对所述三维虚拟环境进行拍摄得到的画面;
    对象位置确定模块,用于根据自身角色的目标位置和第一锁定角色的目标位置,确定所述虚拟跟随对象的目标位置;其中,所述第一锁定角色是指在角色锁定状态下,所述自身角色对应的锁定目标;
    相机位置确定模块,用于根据所述虚拟跟随对象的目标位置,确定所述虚拟相机的目标位置和目标朝向;其中,所述虚拟相机的目标位置与所述虚拟跟随对象的目标位置之间的距离,小于所述虚拟相机的目标位置与所述第一锁定角色的目标位置之间的距离;
    单帧位置确定模块,用于根据所述虚拟相机的目标位置和目标朝向,以及所述虚拟相机在所述第一画面帧中的实际位置和实际朝向,插值得到所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向;
    所述画面显示模块,还用于基于所述虚拟相机在所述第二画面帧中的单帧目标位置和单帧目标朝向,生成并显示所述第二画面帧。
  16. 一种画面显示装置,所述装置包括:
    画面显示模块,用于显示第一画面帧,所述第一画面帧是通过虚拟相机以三维虚拟环境中的虚拟跟随对象为视觉焦点,对所述三维虚拟环境进行拍摄得到的画面;
    所述画面显示模块,还用于在角色锁定状态下,响应于自身角色和第一锁定角色中的至少之一的移动,基于所述虚拟相机在第二画面帧中的单帧目标位置和单帧目标朝向,显示所述第二画面帧;其中,所述单帧目标位置和所述单帧目标朝向是根据所述虚拟相机的目标位置和目标朝向确定的,所述虚拟相机的目标位置和目标朝向是根据所述虚拟跟随对象的目标位置确定的,且所述虚拟相机的目标位置与所述虚拟跟随对象的目标位置之间的距离,小于所述虚拟相机的目标位置与所述第一锁定角色的目标位置之间的距离,所述第一锁定角色是指在所述角色锁定状态下,所述自身角色对应的锁定目标。
  17. 一种终端设备,所述终端设备包括处理器和存储器,所述存储器中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现如权利要求1至14任一项所述的画面显示方法。
  18. 一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现如上述权利要求1至14任一项所述的画面显示方法。
  19. 一种计算机程序产品,所述计算机程序产品包括计算机指令,所述计算机指令存储在计算机可读存储介质中,处理器从所述计算机可读存储介质读取并执行所述计算机指令,以实现如权利要求1至14任一项所述的画面显示方法。
PCT/CN2022/127196 2022-01-04 2022-10-25 画面显示方法、装置、终端、存储介质及程序产品 WO2023130809A1 (zh)