WO2022156490A1 - Method, apparatus, device, storage medium and program product for displaying pictures in a virtual scene - Google Patents

Method, apparatus, device, storage medium and program product for displaying pictures in a virtual scene

Info

Publication number: WO2022156490A1
Application number: PCT/CN2021/141708
Authority: WIPO (PCT)
Prior art keywords: virtual, virtual vehicle, vehicle, picture, target
Other languages: English (en), French (fr)
Inventor: 汪涛 (Wang Tao)
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司
Priority to KR1020237016527A (KR20230085934A)
Priority to JP2023527789A (JP2023547721A)
Priority to US17/992,599 (US20230086441A1)
Publication of WO2022156490A1 (zh)


Classifications

    • A63F 13/5252: Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
    • A63F 13/214: Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F 13/5255: Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • A63F 13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/5378: Controlling the output signals based on the game progress using indicators, e.g. showing the condition of a game character on screen, for displaying an additional top view, e.g. radar screens or maps
    • A63F 13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/803: Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
    • A63F 13/837: Shooting of targets
    • A63F 2300/6669: Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera, using a plurality of virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes rooms
    • A63F 2300/8017: Features of games specially adapted for executing a specific type of game: driving on land or water; flying

Definitions

  • the present application relates to the technical field of virtual scenes, and in particular, to a method, apparatus, computer equipment, storage medium and computer program product for displaying pictures in a virtual scene.
  • the user can simulate the rearview mirror function of an actual driving vehicle in the game interface.
  • the rearview mirror function control is superimposed on the virtual scene picture, and by receiving the user's trigger operation on the rearview mirror function control, the virtual scene picture displayed on the display screen of the terminal can be directly switched to a rear view of the master virtual vehicle.
  • the embodiments of the present application provide a method, device, computer equipment, storage medium and computer program product for displaying pictures in a virtual scene, which can improve the efficiency of human-computer interaction.
  • An embodiment of the present application provides a method for displaying pictures in a virtual scene, and the method includes:
  • the virtual scene picture includes a first virtual vehicle
  • a target virtual vehicle is determined from the second virtual vehicles based on a relative distance between the first virtual vehicle and at least one second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;
  • An auxiliary picture is displayed in the virtual scene picture; the auxiliary picture takes the target virtual vehicle as the focus and is captured by a virtual camera set corresponding to the first virtual vehicle.
  • the embodiment of the present application also provides a method for displaying pictures in a virtual scene, the method comprising:
  • the virtual scene picture includes a first virtual vehicle
  • a first auxiliary picture is displayed in the virtual scene picture; the first auxiliary picture takes the first target virtual vehicle as the focus and is captured by a virtual camera set corresponding to the first virtual vehicle; the first target virtual vehicle is a virtual vehicle with the smallest relative distance from the first virtual vehicle, and the relative distance is less than or equal to the first distance;
  • in response to the virtual vehicle that has the smallest relative distance from the first virtual vehicle, with the relative distance being less than or equal to the first distance, changing to a second target virtual vehicle, a second auxiliary picture is displayed in the virtual scene picture; the second auxiliary picture is a picture captured by the virtual camera set corresponding to the first virtual vehicle with the second target virtual vehicle as the focus.
  • the embodiment of the present application also provides a device for displaying pictures in a virtual scene, and the device includes:
  • a main image display module configured to display a virtual scene image, wherein the virtual scene image includes a first virtual vehicle
  • a target determination module configured to determine a target virtual vehicle from the second virtual vehicles based on a relative distance between the first virtual vehicle and at least one second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;
  • the auxiliary picture display module is configured to display an auxiliary picture in the virtual scene picture; the auxiliary picture takes the target virtual vehicle as a focus and is shot by a virtual camera set corresponding to the first virtual vehicle.
  • the embodiment of the present application also provides a device for displaying pictures in a virtual scene, and the device includes:
  • a main image display module configured to display a virtual scene image, wherein the virtual scene image includes a first virtual vehicle
  • a first auxiliary picture display module configured to display a first auxiliary picture in the virtual scene picture; the first auxiliary picture takes the first target virtual vehicle as the focus and is captured by a virtual camera set corresponding to the first virtual vehicle; the first target virtual vehicle is a virtual vehicle with the smallest relative distance from the first virtual vehicle, and the relative distance is less than or equal to the first distance;
  • a second auxiliary picture display module configured to, in response to the virtual vehicle whose relative distance from the first virtual vehicle is the smallest, with the relative distance being less than or equal to the first distance, changing to a second target virtual vehicle, display a second auxiliary picture in the virtual scene picture; the second auxiliary picture takes the second target virtual vehicle as the focus and is captured by the virtual camera set corresponding to the first virtual vehicle.
  • An embodiment of the present application further provides a computer device, the computer device includes a processor and a memory, and the memory stores at least one instruction, at least one piece of program, a code set or an instruction set; the at least one instruction, the at least one piece of program, the code set or the instruction set is loaded and executed by the processor to implement the above method for displaying pictures in a virtual scene.
  • An embodiment of the present application provides a computer-readable storage medium, where at least one instruction, at least one piece of program, a code set or an instruction set is stored in the storage medium; the at least one instruction, the at least one piece of program, the code set or the instruction set is loaded and executed by a processor to implement the above method for displaying pictures in a virtual scene.
  • Embodiments of the present application also provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the terminal reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the terminal executes the method for displaying pictures in a virtual scene provided in various optional implementation manners of the foregoing aspects.
  • By detecting the relative distance between the first virtual vehicle and the second virtual vehicle in real time, the target virtual vehicle is determined, the virtual scene is photographed by the virtual camera with the target virtual vehicle as the focus, and the resulting auxiliary picture is displayed. Since the relative distance between the second virtual vehicle and the first virtual vehicle may change frequently, the above solution can flexibly determine the target virtual vehicle corresponding to each moment and display an auxiliary picture focused on the target virtual vehicle. The auxiliary picture can therefore show effective content as much as possible, which improves the efficiency with which the auxiliary picture conveys information useful to the user's operation, and ensures that the user can observe effective picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, so as to improve the interaction efficiency when controlling virtual vehicles and the efficiency of human-computer interaction.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a display interface of a virtual scene provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for displaying pictures in a virtual scene provided by an embodiment of the present application
  • FIG. 4 is a flowchart of a method for displaying images in a virtual scene provided by an embodiment of the present application
  • FIG. 5 is a flowchart of a method for displaying pictures in a virtual scene provided by an embodiment of the present application
  • FIG. 6 is a schematic diagram of the setting position of a virtual camera for shooting an auxiliary picture provided by an embodiment of the present application
  • FIG. 7 is a schematic diagram of an obtuse angle determination process between a target lens direction and a vehicle rear reference line provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a process for determining a lens direction provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of focus switching corresponding to an auxiliary screen provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an auxiliary picture when the first obtuse angle between the target lens direction and the rear reference line is greater than the first angle according to an embodiment of the present application;
  • FIG. 11 is a logical flowchart of a method for displaying images in a virtual scene provided by an embodiment of the present application.
  • FIG. 12 is a structural block diagram of an apparatus for displaying images in a virtual scene provided by an embodiment of the present application.
  • FIG. 13 is a structural block diagram of an apparatus for displaying images in a virtual scene provided by an embodiment of the present application.
  • FIG. 14 is a structural block diagram of a computer device provided by an embodiment of the present application.
  • The terms "first" and "second" involved are only used to distinguish similar objects and do not represent a specific ordering of objects. It is understood that "first" and "second" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
  • a virtual scene is a virtual scene displayed (or provided) when an application runs on a terminal.
  • the virtual scene may be a simulated environment scene of the real world, a semi-simulated and semi-fictional three-dimensional environment scene, or a purely fictional three-dimensional environment scene.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene.
  • the following embodiments are described by taking the virtual scene being a three-dimensional virtual scene as an example, but this is not limited thereto.
  • the virtual scene may also be used for virtual scene battles between at least two virtual characters.
  • the virtual scene can also be used for a battle between at least two virtual characters using virtual firearms.
  • the virtual scene can also be used for a battle between at least two virtual characters using virtual firearms within a target area, and the target area is constantly decreasing as time goes by in the virtual scene.
  • the virtual scene is usually generated by an application program in a computer device such as a terminal and displayed based on hardware (such as a screen) in the terminal.
  • the terminal may be a mobile terminal such as a smart phone, a tablet computer, or an e-book reader; or, the terminal may also be a personal computer device such as a notebook computer or a stationary computer.
  • Virtual objects refer to movable objects in a virtual scene.
  • the movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle.
  • the virtual object when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional solid model created based on animation skeleton technology.
  • Each virtual object has its own shape, volume and orientation in the three-dimensional virtual scene, and occupies a part of the space in the three-dimensional virtual scene.
  • A virtual vehicle refers to a vehicle in the virtual environment in which the virtual object can realize driving operations according to the user's control of the operation controls.
  • the functions that the virtual vehicle can achieve include acceleration, deceleration, braking, backing, steering, drifting, and the use of props, etc.
  • The above functions may be realized automatically, for example, the virtual vehicle may be automatically accelerated or automatically steered; the above functions may also be triggered according to the user's control of the operation controls, for example, when the user triggers the brake control, the virtual vehicle executes a braking action.
  • A racing game is mainly played in a virtual competition scene, in which multiple virtual vehicles race with the purpose of achieving a specified competition goal.
  • The user can control the virtual vehicle corresponding to the terminal to race against virtual vehicles controlled by other users, and the user can also control the virtual vehicle corresponding to the terminal to race against AI-controlled virtual vehicles generated by the client program corresponding to the racing game.
  • FIG. 1 shows a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • the implementation environment may include: a first terminal 110 , a server 120 and a second terminal 130 .
  • the first terminal 110 installs and runs an application 111 supporting a virtual environment.
  • the application 111 may be a multiplayer online battle program, or the application 111 may also be an offline application.
  • the user interface of the application 111 is displayed on the screen of the first terminal 110 .
  • the application 111 may be a racing game (Racing Game, RCG), a sandbox (Sandbox) game including a racing function, or other types of games including a racing function.
  • the first terminal 110 is a terminal used by the first user 112.
  • the first user 112 uses the first terminal 110 to control the first virtual vehicle located in the virtual environment to perform activities.
  • the first virtual vehicle may be referred to as the master virtual vehicle (master virtual object) of the first user 112.
  • the activities of the first virtual vehicle include, but are not limited to, at least one of acceleration, deceleration, braking, backing, steering, drifting, and using props.
  • The first virtual vehicle may be a virtual vehicle, or a virtual model with a virtual vehicle function that is modeled on other means of transport (such as ships or airplanes); the first virtual vehicle may also be a virtual model built based on a real vehicle.
  • the second terminal 130 has an application program 131 that supports a virtual environment installed and running, and the application program 131 may be a multiplayer online battle program.
  • the user interface of the application 131 is displayed on the screen of the second terminal 130 .
  • the client can be any one of an RCG game program, a Sandbox game, and other game programs including a racing function.
  • In this embodiment, the application program 131 being an RCG game is taken as an example.
  • The second terminal 130 is a terminal used by the second user 132, and the second user 132 uses the second terminal 130 to control a second virtual vehicle located in the virtual environment to realize driving operations; the second virtual vehicle may be referred to as the master virtual vehicle of the second user 132.
  • a third virtual vehicle may also exist in the virtual environment, the third virtual vehicle is controlled by an AI corresponding to the application 131 , and the third virtual vehicle may be referred to as an AI-controlled virtual vehicle.
  • the first virtual vehicle, the second virtual vehicle and the third virtual vehicle are in the same virtual world, and the first virtual vehicle and the second virtual vehicle may belong to the same faction, the same team, the same organization, have Friendship or temporary communication rights. In some embodiments, the first virtual vehicle and the second virtual vehicle may belong to different factions, different teams, different organizations, or have an adversarial relationship.
  • the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of applications on different operating system platforms (Android or iOS).
  • the first terminal 110 may generally refer to one of the multiple terminals, and the second terminal 130 may generally refer to another one of the multiple terminals. In this embodiment, only the first terminal 110 and the second terminal 130 are used as examples for illustration.
  • The device types of the first terminal 110 and the second terminal 130 are the same or different, and the device types include at least one of: smart phones, tablet computers, e-book readers, MP3 players, MP4 players, laptop computers, and desktop computers.
  • In different embodiments, there are multiple other terminals that can access the server 120.
  • the first terminal 110, the second terminal 130 and other terminals are connected to the server 120 through a wireless network or a wired network.
  • the server 120 includes at least one of a server, a server cluster composed of multiple servers, a cloud computing platform and a virtualization center.
  • the server 120 is used to provide background services for applications supporting a three-dimensional virtual environment.
  • the server 120 undertakes the main computing work and the terminal undertakes the secondary computing work; or, the server 120 undertakes the secondary computing work and the terminal undertakes the main computing work; or, a distributed computing architecture is used between the server 120 and the terminal for collaborative computing.
  • the server 120 includes a memory 121, a processor 122, a user account database 123, a battle service module 124, and a user-oriented input/output interface (Input/Output Interface, I/O interface) 125.
  • the processor 122 is used for loading the instructions stored in the server 120, and processing the data in the user account database 123 and the battle service module 124;
  • the user account database 123 is used for storing data of the user accounts used by the first terminal 110, the second terminal 130 and other terminals, such as the avatar of the user account, the nickname of the user account, the combat effectiveness index of the user account, and the service area where the user account is located;
  • the battle service module 124 is used to provide multiple battle rooms for users to battle, such as 1V1 battles, 3V3 battles, 5V5 battles, etc.;
  • the user-oriented I/O interface 125 is used to establish communication and exchange data with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network.
  • the virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene.
  • the display interface of the virtual scene includes a scene screen 200 including a currently controlled virtual vehicle 210 , an environment screen 220 of the three-dimensional virtual scene, and a virtual vehicle 240 .
  • the virtual vehicle 240 may be a virtual object controlled by a user corresponding to another terminal or a virtual object controlled by an application program.
  • the currently controlled virtual vehicle 210 and the virtual vehicle 240 are three-dimensional models in the three-dimensional virtual scene
  • the environment picture of the three-dimensional virtual scene displayed in the scene screen 200 is the picture observed from the third-person perspective corresponding to the currently controlled virtual vehicle 210; the displayed environment picture 220 of the three-dimensional virtual scene includes a road 224, a sky 225, a hill 221 and a factory building 222.
  • The currently controlled virtual vehicle 210 can perform operations such as steering, acceleration, and drift under the control of the user, and the virtual vehicle in the virtual scene can display different three-dimensional models under the user's control. For example, the screen of the terminal supports touch operations, and the scene screen 200 of the virtual scene includes a virtual control; when the user touches the virtual control, the currently controlled virtual vehicle 210 can perform a specified operation (such as a deformation operation) in the virtual scene and display the corresponding three-dimensional model.
  • FIG. 3 shows a flowchart of a method for displaying pictures in a virtual scene provided by an embodiment of the present application.
  • the above-mentioned method may be executed by a computer device, and the computer device may be a terminal or a server, or the above-mentioned computer device may also include the above-mentioned terminal and server.
  • the method for displaying pictures in the virtual scene includes:
  • Step 301 the computer device displays a virtual scene image, and the virtual scene image includes a first virtual vehicle.
  • Step 302 Determine a target virtual vehicle from the second virtual vehicles based on the relative distance between the first virtual vehicle and at least one second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle.
  • Step 303 displaying an auxiliary picture in the virtual scene picture;
  • the auxiliary picture is a picture captured by a virtual camera set corresponding to the first virtual vehicle with the target virtual vehicle as the focus.
  • To sum up, in the solution shown in the embodiment of the present application, the relative distance between the first virtual vehicle and the second virtual vehicle is detected in real time, the target virtual vehicle is determined, the virtual scene is photographed by the virtual camera with the target virtual vehicle as the focus, and the resulting auxiliary picture is displayed. Since the relative distance between the second virtual vehicle and the first virtual vehicle may change frequently, the above solution can flexibly determine the target virtual vehicle corresponding to each moment and display an auxiliary picture focused on the target virtual vehicle, so the auxiliary picture can show effective content as much as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation and ensures that the user can observe effective picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, thereby improving the interaction efficiency and human-computer interaction efficiency when controlling virtual vehicles.
  • FIG. 4 shows a flowchart of a method for displaying pictures in a virtual scene provided by an embodiment of the present application.
  • the above-mentioned method may be executed by a computer device, and the computer device may be a terminal or a server, or the above-mentioned computer device may also include the above-mentioned terminal and server.
  • the method for displaying pictures in the virtual scene includes:
  • step 401 the computer device displays a virtual scene image, and the virtual scene image includes a first virtual vehicle.
  • Step 402 displaying a first auxiliary picture in the virtual scene picture;
  • the first auxiliary picture is a picture taken by a virtual camera set corresponding to the first virtual vehicle with the first target virtual vehicle as the focus;
  • the first target virtual vehicle is a virtual vehicle whose relative distance from the first virtual vehicle is the smallest, and the relative distance is less than or equal to the first distance.
  • Step 403 in response to the virtual vehicle with the smallest relative distance from the first virtual vehicle, the relative distance being less than or equal to the first distance, changing to a second target virtual vehicle, display a second auxiliary picture in the virtual scene picture; the second auxiliary picture takes the second target virtual vehicle as the focus and is captured by the virtual camera set corresponding to the first virtual vehicle.
  • To sum up, in the solution shown in the embodiment of the present application, the relative distance between the first virtual vehicle and the second virtual vehicle is detected in real time, the target virtual vehicle is determined, and the virtual scene is photographed by a virtual camera with the target virtual vehicle as the focus. The above solution can flexibly determine the target virtual vehicle corresponding to each moment and display an auxiliary picture focused on the target virtual vehicle, so the auxiliary picture can show effective content as much as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation and ensures that the user can observe effective picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, thereby improving the interaction efficiency and human-computer interaction efficiency when the user controls the virtual vehicle.
  • FIG. 5 shows a flowchart of a method for displaying pictures in a virtual scene according to an embodiment of the present application.
  • the above-mentioned method may be executed by a computer device, and the computer device may be a terminal or a server, or the above-mentioned computer device may also include the above-mentioned terminal and server.
  • the terminal can display an auxiliary picture on the virtual scene picture by performing the following steps.
  • Step 501 the computer device displays a virtual scene picture.
  • the terminal displays a virtual scene image including the first virtual vehicle.
  • the virtual scene picture may be a picture of a virtual scene in which the first virtual vehicle competes with other virtual vehicles.
  • the first virtual vehicle is a virtual vehicle controlled by the terminal, and other virtual vehicles may be virtual vehicles controlled by other terminals or AI-controlled.
  • the virtual scene picture is a virtual scene picture observed from a third-person perspective of the first virtual vehicle.
  • the third-person perspective of the first virtual vehicle is the perspective corresponding to the virtual camera of the main screen set at the rear and upper part of the first virtual vehicle
  • the virtual scene picture observed from the third-person perspective of the first virtual vehicle is a virtual scene picture that includes the first virtual vehicle.
  • the virtual scene picture is a virtual scene picture observed from a first-person perspective of the first virtual vehicle.
  • the first-person perspective of the first virtual vehicle is the perspective corresponding to the virtual camera of the main picture set on the driver position of the first virtual vehicle
  • the virtual scene picture observed from the first-person perspective of the first virtual vehicle is the virtual scene picture corresponding to that perspective.
  • The virtual scene picture is displayed so as to cover the display area of the terminal. The virtual scene picture is the main display picture when the first virtual vehicle is controlled to take part in a racing competition in the virtual scene, and is used to display the path picture of the first virtual vehicle during the racing competition; the user controls the first virtual vehicle by observing the path picture ahead.
  • controls or display information are superimposed on the virtual scene picture.
  • the controls may include a direction control for receiving a trigger operation to control the moving direction of the first virtual vehicle, a brake control for receiving a trigger operation to control the first virtual vehicle to brake, and an acceleration control for controlling the first virtual vehicle to accelerate and move controls, etc.
  • the displayed information may include account identifications used to indicate the first virtual vehicle and other virtual vehicles, ranking information on the order of the positions of each virtual vehicle at the current moment, a map used to indicate the complete virtual scene, and map information briefly indicating the location of each virtual vehicle on the map, etc.
  • a viewing angle switching control is superimposed on the virtual scene image, and in response to a user's designated operation on the viewing angle switching control, the virtual scene image can be displayed in a first-person perspective of the first virtual vehicle and a third-person perspective of the first virtual vehicle. Switch between viewing angles.
  • When the virtual scene picture displayed by the terminal is the virtual scene picture corresponding to the first-person perspective of the first virtual vehicle, in response to the user's designated operation on the perspective switching control, the terminal switches the virtual scene picture corresponding to the first-person perspective to the virtual scene picture corresponding to the third-person perspective; conversely, when the displayed virtual scene picture corresponds to the third-person perspective of the first virtual vehicle, in response to the user's designated operation on the perspective switching control, the terminal switches the virtual scene picture corresponding to the third-person perspective to the virtual scene picture corresponding to the first-person perspective.
  • the virtual vehicle corresponding to the same user account may be a plurality of different types of virtual vehicles.
  • In response to receiving the virtual vehicle information sent by the server, the terminal displays, on a vehicle selection interface, each different type of virtual vehicle corresponding to the virtual vehicle information. In response to a user's selection operation, the virtual vehicle corresponding to the selection operation is determined and used as the first virtual vehicle, and the first virtual vehicle is displayed in the virtual scene on the terminal.
  • Step 502 Obtain the relative distance between the first virtual vehicle and the second virtual vehicle.
  • the terminal acquires the relative distance between the first virtual vehicle and each second virtual vehicle, and the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle.
  • The area that does not extend beyond the rear reference line is determined as the rear of the first virtual vehicle, and each virtual vehicle located behind the first virtual vehicle is determined as a second virtual vehicle. The rear reference line of the first virtual vehicle is the straight line where the rear of the first virtual vehicle is located; the rear reference line is parallel to the horizontal plane in the virtual scene and perpendicular to the line connecting the front and the rear of the first virtual vehicle.
  • the length of the line connecting the rear of the first virtual vehicle and the center point of the second virtual vehicle is determined as a relative distance.
  • the center point of the second virtual vehicle may be the center of gravity of the virtual vehicle.
  • the calculated relative distance refers to the distance in the virtual scene.
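  • The relative-distance and "behind" checks in step 502 reduce to simple planar vector math. The following Python sketch is illustrative only: it works in the top view, represents positions and directions as (x, y) tuples, and none of the function or parameter names come from the patent.

```python
import math

def is_behind(first_rear, first_forward, other_center):
    """A vehicle counts as a second virtual vehicle when its center point
    lies on the far side of the rear reference line, i.e. behind the
    first virtual vehicle."""
    dx = other_center[0] - first_rear[0]
    dy = other_center[1] - first_rear[1]
    # A negative projection onto the forward direction means "behind".
    return dx * first_forward[0] + dy * first_forward[1] < 0

def relative_distance(first_rear, other_center):
    """Length of the line connecting the rear of the first virtual vehicle
    with the center point (e.g. center of gravity) of the other vehicle,
    measured in virtual-scene units."""
    return math.hypot(other_center[0] - first_rear[0],
                      other_center[1] - first_rear[1])
```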
  • Step 503 Determine a target virtual vehicle from the second virtual vehicles based on the relative distance between the first virtual vehicle and at least one second virtual vehicle.
  • the terminal determines whether the relative distance satisfies the specified condition based on the determined relative distance between each second virtual vehicle and the first virtual vehicle, and when the relative distance satisfies the specified condition, determines that the relative distance corresponds to The second virtual vehicle of is the target virtual vehicle. If there is no relative distance that satisfies the specified condition, there is no target virtual vehicle at the current moment.
  • a second virtual vehicle that simultaneously satisfies that the relative distance from the first virtual vehicle is less than or equal to the first distance and that the relative distance is the smallest is determined as the target virtual vehicle.
  • Alternatively, candidate virtual vehicles are acquired first, and the target virtual vehicle is then determined from the candidate virtual vehicles; that is, the candidate virtual vehicle with the smallest relative distance from the first virtual vehicle is determined as the target virtual vehicle.
  • the virtual vehicle to be selected is a second virtual vehicle whose relative distance from the first virtual vehicle is less than or equal to the first distance.
  • the candidate virtual vehicle may be a set of partial second virtual vehicles that satisfy the condition that the relative distance is less than or equal to the first distance.
  • For example, the second virtual vehicles include virtual vehicle A, virtual vehicle B, and virtual vehicle C. Virtual vehicle A, virtual vehicle B, and virtual vehicle C are obtained as candidate virtual vehicles; the relative distance corresponding to virtual vehicle A is 60 meters, the relative distance corresponding to virtual vehicle B is 30 meters, and the relative distance corresponding to virtual vehicle C is 100 meters; therefore virtual vehicle B, which has the smallest relative distance, is determined as the target virtual vehicle.
  • Alternatively, the relative distances between virtual vehicle A, virtual vehicle B, and virtual vehicle C and the first virtual vehicle are obtained first, and the relative distances of the three candidate virtual vehicles are then compared. If the relative distance corresponding to virtual vehicle A is 105 meters, the relative distance corresponding to virtual vehicle B is 110 meters, and the relative distance corresponding to virtual vehicle C is 120 meters, the virtual vehicle with the smallest relative distance is determined to be virtual vehicle A, and it is then determined whether the relative distance of virtual vehicle A is less than or equal to the first distance.
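  • The selection rule of step 503 (within the first distance, then smallest relative distance) can be sketched as below; the helper name, the mapping shape and the 100-meter first distance used to reproduce the example are assumptions made for illustration, not values taken from the patent.

```python
def select_target_vehicle(relative_distances, first_distance):
    """relative_distances: mapping from second-virtual-vehicle id to its
    relative distance from the first virtual vehicle (in meters).
    Returns the id of the target virtual vehicle, or None when no vehicle
    is within the first distance (no target at the current moment)."""
    candidates = {vid: d for vid, d in relative_distances.items()
                  if d <= first_distance}
    if not candidates:
        return None
    # Among the candidates, the one with the smallest relative distance wins.
    return min(candidates, key=candidates.get)

# Example from the description, taking the first distance as 100 meters:
# A = 60 m, B = 30 m, C = 100 m, so virtual vehicle B becomes the target.
assert select_target_vehicle({"A": 60, "B": 30, "C": 100}, 100) == "B"
```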
  • Step 504 acquiring the first obtuse angle between the direction of the target lens and the reference line of the rear of the vehicle.
  • a virtual camera for shooting an auxiliary picture is provided diagonally above the first virtual vehicle, and the virtual camera moves with the first virtual vehicle.
  • the terminal acquires the first obtuse angle between the target lens direction corresponding to the virtual camera and the rear reference line of the first virtual vehicle.
  • the direction of the target lens is the direction from the virtual camera to the center point of the target virtual vehicle
  • the rear reference line is the straight line where the rear of the first virtual vehicle is located
  • the rear reference line is parallel to the horizontal plane and perpendicular to the line connecting the front and the rear of the first virtual vehicle.
  • FIG. 6 is a schematic diagram of the setting position of a virtual camera used for shooting an auxiliary picture according to an embodiment of the present application.
  • If there are a first virtual vehicle 621 and a target virtual vehicle 631 in the virtual scene, it can be determined from the top view that the virtual camera 611 is located at the front right of the first virtual vehicle 621. If there are a first virtual vehicle 622 and a target virtual vehicle 632 in the virtual scene, it can be determined from the side view that the virtual camera 612 is located in front of and above the first virtual vehicle 622.
  • the virtual camera may also be located at the front upper-left of the first virtual vehicle.
  • FIG. 7 is a schematic diagram of a process of determining an obtuse angle between a target lens direction and a vehicle rear reference line according to an embodiment of the present application.
  • As shown in FIG. 7, the relative distance 76 between the first virtual vehicle 72 and the second virtual vehicle is determined, and when the relative distance 76 satisfies the specified condition, the second virtual vehicle is determined as the target virtual vehicle 73. The rear reference line 75 of the first virtual vehicle 72 is determined, the virtual camera 71 is then connected to the center point of the target virtual vehicle 73, and the direction of the connecting line is used as the target lens direction 74 of the virtual camera 71. The intersection of the target lens direction 74 and the rear reference line 75 of the first virtual vehicle 72 forms four included angles, including two acute angles and two obtuse angles.
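  • A top-view sketch of the angle in step 504, assuming 2D positions and an arbitrary nonzero direction vector along the rear reference line; the function name and argument shapes are illustrative and not from the patent.

```python
import math

def first_obtuse_angle(camera_pos, target_center, rear_line_dir):
    """Returns, in degrees, the obtuse angle formed at the intersection of
    the target lens direction (virtual camera -> target center) and the
    rear reference line, using a top-view (2D) simplification."""
    lens = (target_center[0] - camera_pos[0], target_center[1] - camera_pos[1])
    # Unsigned angle between the lens direction and one direction of the line.
    dot = lens[0] * rear_line_dir[0] + lens[1] * rear_line_dir[1]
    cross = lens[0] * rear_line_dir[1] - lens[1] * rear_line_dir[0]
    theta = math.degrees(math.atan2(abs(cross), dot))  # in [0, 180]
    # Of the two supplementary included angles, keep the obtuse (>= 90) one.
    return max(theta, 180.0 - theta)
```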
  • Step 505 in response to the first obtuse angle being less than or equal to the first angle, and based on the position of the target virtual vehicle, determine the first lens direction of the virtual camera.
  • In response to the first obtuse angle obtained by the terminal being less than or equal to the first angle, the first lens direction of the virtual camera is determined based on the position of the target virtual vehicle; the first lens direction is the lens direction in which the virtual camera actually shoots the virtual scene. That is, in response to the first obtuse angle acquired by the terminal being less than or equal to the first angle, the first lens direction is determined as the target lens direction.
  • FIG. 8 is a schematic diagram of a process of determining a lens direction involved in an embodiment of the present application.
  • Assuming that the first angle 83 is 165 degrees, the intersection between the target lens direction and the rear reference line of the first virtual vehicle 82 forms a first obtuse angle 84. By comparing the first obtuse angle 84 with the first angle 83, it can be determined that the first obtuse angle 84 is smaller than the first angle 83, so the first lens direction is determined as the target lens direction.
  • Step 506 in response to the first obtuse angle being greater than the first angle, determine the second lens direction of the virtual camera.
  • the second lens direction of the virtual camera is determined to be the lens direction of the virtual camera actually shooting the virtual scene.
  • the second lens direction points to a direction between the target lens direction and the rear-pointing direction of the vehicle; the second obtuse angle between the second lens direction and the rear reference line is equal to the first angle; the rear-pointing direction is the direction from the front of the first virtual vehicle toward the rear of the vehicle.
  • As shown in FIG. 8, assuming that the first angle 83 is 165 degrees, the intersection between the target lens direction and the rear reference line of the first virtual vehicle 82 forms a first obtuse angle 85. By comparing the first obtuse angle 85 with the first angle 83, it can be determined that the first obtuse angle 85 is greater than the first angle 83, so the lens direction actually used is the second lens direction, for which the second obtuse angle with the rear reference line equals the first angle, that is, 165 degrees.
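  • Steps 505 and 506 amount to clamping how far the camera may swing toward the side. The sketch below is a top-view approximation that ignores the camera's height; the 165-degree default mirrors the example above, and all names are illustrative. Since the rear reference line is perpendicular to the front-rear axis, an obtuse angle of at most 165 degrees with that line is the same as a sideways deviation of at most 75 degrees from the straight-back direction.

```python
import math

def lens_direction(camera_pos, target_center, rear_dir, first_angle_deg=165.0):
    """rear_dir is the unit vector pointing from the front of the first
    virtual vehicle toward its rear. Returns the lens direction actually
    used: straight at the target (first lens direction) while the obtuse
    angle stays within the first angle, otherwise a direction rotated back
    so that the obtuse angle equals the first angle (second lens direction)."""
    tx = target_center[0] - camera_pos[0]
    ty = target_center[1] - camera_pos[1]
    n = math.hypot(tx, ty)
    tx, ty = tx / n, ty / n
    dot = rear_dir[0] * tx + rear_dir[1] * ty
    cross = rear_dir[0] * ty - rear_dir[1] * tx  # > 0: target lies counter-clockwise of rear_dir
    deviation = math.degrees(math.atan2(abs(cross), dot))
    max_deviation = first_angle_deg - 90.0       # 165 degrees -> 75 degrees sideways
    if deviation <= max_deviation:
        return (tx, ty)                           # first lens direction: look straight at the target
    # Second lens direction: rotate rear_dir toward the target's side by the
    # maximum allowed deviation, so the obtuse angle equals the first angle.
    a = math.radians(math.copysign(max_deviation, cross if cross != 0 else 1.0))
    return (rear_dir[0] * math.cos(a) - rear_dir[1] * math.sin(a),
            rear_dir[0] * math.sin(a) + rear_dir[1] * math.cos(a))
```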
  • Step 507 in response to the auxiliary picture being displayed in the virtual scene picture, start the picture presentation timer.
  • the picture presentation timer is used to record the duration of continuous presentation of auxiliary pictures in the virtual scene picture, or the picture presentation timer can also be used to record the presentation duration of auxiliary pictures with the same focus.
  • In response to determining that the target virtual vehicle exists among the second virtual vehicles, the auxiliary picture is displayed in the virtual scene picture, and the timing function of the picture presentation timer is started to record the display duration of the auxiliary picture.
  • the auxiliary picture can be displayed in any area on the virtual scene picture, and the size of the auxiliary picture can be adjusted.
  • The user can customize or select the position of the auxiliary picture on the virtual scene picture and the size of the auxiliary picture on a screen setting interface.
  • In response to the existence of the target virtual vehicle, timing is performed by the picture presentation timer; if the terminal then determines that the target virtual vehicle no longer exists, the timing function of the picture presentation timer is ended and the picture presentation timer is reset.
  • For example, the terminal determines the target virtual vehicle through calculation at a certain moment and starts the picture presentation timer. When the target virtual vehicle has been continuously determined for 3 seconds, the duration recorded by the picture presentation timer is 3 s. If, after 5 s, the target virtual vehicle overtakes the first virtual vehicle and no other virtual vehicle that meets the conditions is determined as the target virtual vehicle, the timing function of the picture presentation timer is ended and the timer is reset to zero.
  • In a possible implementation, the auxiliary picture is displayed in the virtual scene picture in response to the display time of the virtual scene picture being greater than a third duration.
  • The time elapsed since the virtual scene picture started to be displayed is recorded by a timer. Before the duration recorded by the timer exceeds the third duration, the terminal does not determine the target virtual vehicle; when the duration recorded by the timer exceeds the third duration, the terminal starts to perform, in real time, the step of determining the target virtual vehicle.
  • For example, when each virtual vehicle enters the game, the game automatically starts a countdown. During this period, the auxiliary picture is not displayed, or the calculation and determination of the target virtual vehicle are not performed; when the time since the racing mode started exceeds the third duration, the auxiliary picture based on the target virtual vehicle starts to be displayed.
  • In another possible implementation, the auxiliary picture is displayed in the virtual scene picture in response to the distance between the location where the first virtual vehicle started moving and its current location being greater than a specified distance.
  • Since the target virtual vehicle may change frequently at the starting stage, the above two methods are used to avoid displaying the auxiliary picture while the first virtual vehicle is still near the starting point.
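  • Either of the two gating conditions above can be written as a one-line check. The sketch combines them only for illustration (the description treats them as alternative implementations), and all names and units are assumptions.

```python
def auxiliary_picture_allowed(scene_display_time_s, third_duration_s,
                              distance_from_start, specified_distance):
    """True once the virtual scene picture has been shown for longer than the
    third duration, or once the first virtual vehicle has moved farther than
    the specified distance from where it started moving."""
    past_countdown = scene_display_time_s > third_duration_s
    left_start_area = distance_from_start > specified_distance
    return past_countdown or left_start_area
```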
  • In a possible implementation, in response to the relative distance between the target virtual vehicle and the first virtual vehicle being less than or equal to a second distance, the lens direction of the virtual camera is maintained, and the auxiliary picture captured by the virtual camera in that lens direction is displayed in the virtual scene picture.
  • That is, when the target virtual vehicle is within the second distance, the lens of the virtual camera stops following the position of the target virtual vehicle and remains still, while the lens focus of the virtual camera is still the target virtual vehicle; if the relative distance between the target virtual vehicle and the first virtual vehicle becomes larger than the second distance again, the lens direction of the virtual camera continues to follow the movement of the target virtual vehicle.
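  • A minimal sketch of this hold behaviour, with illustrative names: the lens keeps its previous direction while the target is within the second distance and resumes tracking afterwards.

```python
def update_lens_direction(previous_dir, tracking_dir,
                          relative_distance, second_distance):
    """While the target virtual vehicle is within the second distance, the
    lens direction is held still (the focus stays on the target); once the
    target falls back beyond the second distance, the lens resumes following
    the target's movement."""
    if relative_distance <= second_distance:
        return previous_dir      # hold the current lens direction
    return tracking_dir          # follow the target again
```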
  • Step 508 in response to the target virtual vehicle being switched from the first target virtual vehicle to the second target virtual vehicle during the auxiliary screen presentation process, reset the screen presentation timer.
  • That is, when the focus switches, the picture presentation timer needs to be reset to zero. The first target virtual vehicle and the second target virtual vehicle are any two of the at least one second virtual vehicle. When the target virtual vehicle changes, the focus of the virtual camera is switched from the first target virtual vehicle to the second target virtual vehicle, and the picture displayed in the auxiliary picture is switched from the picture shot with the first target virtual vehicle as the focus to the picture shot with the second target virtual vehicle as the focus.
  • FIG. 9 is a schematic diagram of focus switching corresponding to an auxiliary screen according to an embodiment of the present application.
  • the auxiliary screen 92 is displayed on the current virtual scene screen.
  • At this moment, the relative distance between the first target virtual vehicle 93 and the first virtual vehicle is the smallest, so the auxiliary picture 92 is a picture shot with the first target virtual vehicle 93 as the focus. When the second target virtual vehicle 94 overtakes the first target virtual vehicle 93 at a later moment and becomes the virtual vehicle with the smallest relative distance from the first virtual vehicle, the focus of the virtual camera is switched to the second target virtual vehicle 94, and the auxiliary picture 92 is switched to the picture shot with the second target virtual vehicle 94 as the focus.
  • a line pattern for indicating the sprint effect is added to the auxiliary screen.
  • the line pattern 95 of the sprint effect is located at the edge of the auxiliary screen 92 .
  • By adding the line pattern 95 of the sprint effect, the user's sense of tension can be enhanced, thereby improving the user's operating experience.
  • Step 509 in response to the time duration corresponding to the screen display timer reaching the first duration, the display of the auxiliary screen is ended.
  • When the terminal determines that the duration recorded by the picture presentation timer reaches the first duration, the display of the auxiliary picture on the virtual scene picture ends.
  • Since the picture presentation timer is reset whenever the focus switches, the display of the auxiliary picture on the virtual scene picture ends only when the focus of the virtual camera continuously remains on the same virtual vehicle and the recorded duration reaches the first duration.
  • In this way, the virtual vehicle used as the focus can be adjusted in real time while the auxiliary picture is displayed, and the display continues as a smooth picture, which helps the user obtain effective position information about the rear virtual vehicles through the auxiliary picture during operation.
  • Step 510 in response to the duration for which the first obtuse angle remains greater than the first angle reaching the second duration, end displaying the auxiliary picture.
  • In addition to the manner shown in step 509, the display of the auxiliary picture can also end in this way: when the lens direction of the virtual camera has turned to the maximum angle, the target virtual vehicle is only partially in the auxiliary picture or not in it at all. Therefore, in order for the auxiliary picture to show meaningful picture content as much as possible, when the lens direction of the virtual camera remains at the maximum angle for the second duration, the display of the auxiliary picture is ended.
  • The second duration may be shorter than the first duration; that is, compared with the solution shown in step 509, ending the display in the above manner can end the auxiliary picture earlier.
  • FIG. 10 is a schematic diagram of the auxiliary screen when the first obtuse angle between the target lens direction and the rear reference line is greater than the first angle according to an embodiment of the present application.
  • As shown in FIG. 10, the target virtual vehicle behind the first virtual vehicle is in an overtaking state, and the first obtuse angle corresponding to the target virtual vehicle is greater than the first angle, so the displayed auxiliary screen 1001 contains no target virtual vehicle and only shows the edge of the track.
  • Using the auxiliary screen 1001 to display the track edge provides no actual benefit to the user's operation, so if the auxiliary screen 1001 were still ended only when the display duration reaches the first duration, terminal resources would be wasted.
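The early-termination condition of step 510 can be sketched as a small accumulator that tracks how long the first obtuse angle has stayed above the first angle. The values of FIRST_ANGLE and SECOND_DURATION below are assumptions introduced for illustration; the application does not specify them here.

```python
# Assumed configuration values.
FIRST_ANGLE = 165.0      # degrees
SECOND_DURATION = 1.0    # seconds, assumed shorter than the first duration

class AngleCutoff:
    """Ends the auxiliary picture when the first obtuse angle stays above
    FIRST_ANGLE for SECOND_DURATION seconds (step 510)."""
    def __init__(self):
        self.time_over_limit = 0.0

    def update(self, first_obtuse_angle, dt):
        if first_obtuse_angle > FIRST_ANGLE:
            self.time_over_limit += dt
        else:
            self.time_over_limit = 0.0            # angle back in range, restart
        return self.time_over_limit >= SECOND_DURATION

cutoff = AngleCutoff()
print(cutoff.update(170.0, 0.5))   # False: only 0.5 s above the limit
print(cutoff.update(170.0, 0.6))   # True: 1.1 s above the limit, end the display
```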
  • By applying this embodiment of the present application, the relative distance between the first virtual vehicle and the second virtual vehicles is detected in real time, the target virtual vehicle among them is determined, the virtual scene is captured by the virtual camera with the target virtual vehicle as the focus, and the auxiliary picture obtained by the capturing is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, the foregoing solution can flexibly determine the target virtual vehicle at each moment and display an auxiliary picture focused on that target virtual vehicle, so that the auxiliary picture shows valid content as much as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation and ensures that the user can observe valid picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, thereby improving the interaction efficiency and the human-computer interaction efficiency when the user controls the virtual vehicle.
  • Taking a virtual scene in a racing game as an example, FIG. 11 shows a logical flowchart of a method for displaying pictures in a virtual scene provided by an exemplary embodiment of the present application.
  • As shown in FIG. 11, the logic flow may include the following steps:
  • The terminal detects whether there are other virtual vehicles within the trigger range of the first virtual vehicle, where the trigger range may be the range behind the first virtual vehicle in which the relative distance is less than the first distance. When it is detected that another virtual vehicle is within the trigger range, the current state of the first virtual vehicle is judged (S1101).
  • If it is determined that the first virtual vehicle is currently in the state of having just rushed out of the starting point (S1102), the rear-view mirror function is not triggered, where the rear-view mirror function is the function of displaying the auxiliary screen (S1103). If it is determined that the first virtual vehicle is not currently in that state (S1104), it is determined to trigger the rear-view mirror function (S1105).
  • Then, if it is determined through real-time detection that the target virtual vehicle remains within the trigger range of the first virtual vehicle (S1106), the virtual camera tracks and captures the picture of the target virtual vehicle (S1107). If the target virtual vehicle leaves the trigger range corresponding to the first virtual vehicle during capture (S1108), the virtual camera is controlled to stop tracking and capturing the target virtual vehicle (S1109). If the target virtual vehicle returns to the trigger range of the first virtual vehicle within a specified time (for example, 3 seconds) after leaving it (S1110), the virtual camera continues to track and capture the picture of the target virtual vehicle (S1111). If the display duration of the auxiliary picture has reached the specified maximum display duration (for example, 3 seconds), the rear-view mirror function is turned off and the display of the auxiliary picture is ended (S1112).
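The flow of FIG. 11 can be approximated by a small state controller such as the sketch below. It compresses the S1101 to S1112 branches into one update function and uses the 3-second values mentioned in the text; everything else (names, structure) is an assumption for illustration rather than the actual implementation.

```python
# Assumed values taken from the example in the text: a 3-second grace period
# for returning to the trigger range and a 3-second maximum display duration.
RETURN_GRACE = 3.0
MAX_DISPLAY = 3.0

class MirrorController:
    """Sketch of the FIG. 11 flow: trigger, track, detach, and shut off."""
    def __init__(self):
        self.mirror_on = False
        self.display_time = 0.0
        self.time_outside = 0.0

    def step(self, dt, just_left_start, target_in_range):
        if not self.mirror_on:
            # S1101-S1105: trigger only when the first vehicle is not in the
            # state of having just rushed out of the starting point
            if target_in_range and not just_left_start:
                self.mirror_on = True
                self.display_time = 0.0
                self.time_outside = 0.0
            return self.mirror_on

        self.display_time += dt
        if target_in_range:
            self.time_outside = 0.0       # S1106-S1107, S1111: keep tracking
        else:
            self.time_outside += dt       # S1108-S1109: camera detaches
            if self.time_outside > RETURN_GRACE:
                self.mirror_on = False    # target did not return in time
        if self.display_time >= MAX_DISPLAY:
            self.mirror_on = False        # S1112: close the rear-view mirror
        return self.mirror_on

controller = MirrorController()
print(controller.step(0.5, just_left_start=True, target_in_range=True))    # False
print(controller.step(0.5, just_left_start=False, target_in_range=True))   # True
print(controller.step(1.0, just_left_start=False, target_in_range=False))  # True, detached
print(controller.step(2.5, just_left_start=False, target_in_range=True))   # False, 3 s reached
```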
  • By applying this embodiment of the present application, the relative distance between the first virtual vehicle and the second virtual vehicles is detected in real time, the target virtual vehicle among them is determined, the virtual scene is captured by the virtual camera with the target virtual vehicle as the focus, and the auxiliary picture obtained by the capturing is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, the foregoing solution can flexibly determine the target virtual vehicle at each moment and display an auxiliary picture focused on that target virtual vehicle, so that the auxiliary picture shows valid content as much as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation and ensures that the user can observe valid picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, thereby improving the interaction efficiency and the human-computer interaction efficiency when the user controls the virtual vehicle.
  • FIG. 12 is a structural block diagram of an apparatus for displaying pictures in a virtual scene according to an embodiment of the present application.
  • The apparatus for displaying pictures in a virtual scene may be disposed in a computer device to perform all or part of the steps of the method shown in the embodiment corresponding to FIG. 3 or FIG. 5.
  • the device for displaying pictures in the virtual scene may include:
  • the main screen display module 1210 is configured to display a virtual scene image, and the virtual scene image includes the first virtual vehicle;
  • a target determination module 1220 configured to determine a target virtual vehicle from the second virtual vehicles based on a relative distance between the first virtual vehicle and at least one second virtual vehicle; the second virtual vehicle is a virtual vehicle located behind the first virtual vehicle;
  • the auxiliary image display module 1230 is configured to display an auxiliary image in the virtual scene image; the auxiliary image takes the target virtual vehicle as a focus and is captured by a virtual camera set corresponding to the first virtual vehicle.
  • the target determination module 1220 includes:
  • a candidate acquisition submodule configured to acquire a candidate virtual vehicle;
  • the candidate virtual vehicle is the second virtual vehicle whose relative distance from the first virtual vehicle is less than or equal to a first distance;
  • the first target determination submodule is configured to determine the target virtual vehicle from the candidate virtual vehicles.
  • the target determination sub-module includes:
  • a target determination unit configured to determine the candidate virtual vehicle with the smallest relative distance from the first virtual vehicle as the target virtual vehicle.
  • the target determination module 1220 includes:
  • a first obtaining submodule configured to obtain the second virtual vehicle with the smallest relative distance from the first virtual vehicle
  • a second target determination submodule configured to determine the second virtual vehicle as the target virtual vehicle in response to the relative distance between the second virtual vehicle and the first virtual vehicle being less than or equal to a first distance.
  • the apparatus further includes:
  • a distance acquisition module configured to acquire the relative distance between the first virtual vehicle and the at least one second virtual vehicle before the target virtual vehicle is determined from the second virtual vehicles based on that relative distance.
  • the distance acquisition module includes:
  • the distance acquisition sub-module is configured to determine the length of the connection between the rear of the first virtual vehicle and the center point of the second virtual vehicle as the relative distance.
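Under the definition above, the relative distance is simply the Euclidean length of the segment from the rear point of the first vehicle to the center point of the second vehicle. A short sketch in Python, with coordinates and the helper name assumed for illustration:

```python
import math

def relative_distance(first_rear_point, second_center):
    """Length of the line from the rear of the first virtual vehicle to the
    center point of a second virtual vehicle, as (x, y, z) scene coordinates."""
    return math.dist(first_rear_point, second_center)

# example: a vehicle whose center is 30 scene units straight behind the rear point
print(relative_distance((0.0, 0.0, 0.0), (0.0, -30.0, 0.0)))  # 30.0
```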
  • the apparatus further includes:
  • a timing module configured to start a picture display timer in response to the auxiliary picture being displayed in the virtual scene picture; the picture display timer is used to record the duration for which the auxiliary picture is continuously displayed in the virtual scene picture;
  • the first picture ending module is configured to end displaying the auxiliary picture in response to the duration recorded by the picture display timer reaching the first duration.
  • the apparatus further includes:
  • the timing reset module is configured to: before the display of the auxiliary picture is ended in response to the duration recorded by the picture display timer reaching the first duration, reset the picture display timer in response to the target virtual vehicle being switched from the first target virtual vehicle to the second target virtual vehicle during the display of the auxiliary picture; the first target virtual vehicle and the second target virtual vehicle are any two of the at least one second virtual vehicle.
  • the virtual camera is located diagonally above the first virtual vehicle, and the virtual camera moves with the first virtual vehicle;
  • the device also includes:
  • an obtuse angle acquiring module configured to acquire a first obtuse angle between the direction of the target lens and the rear reference line before displaying the auxiliary image in the virtual scene image;
  • the target lens direction is the direction from the virtual camera to the center point of the target virtual vehicle; the rear reference line is the straight line on which the rear of the first virtual vehicle is located, and the rear reference line is parallel to the horizontal plane and perpendicular to the line connecting the front and the rear of the first virtual vehicle;
  • a first direction determination module configured to determine a first lens direction of the virtual camera based on the position of the target virtual vehicle at the current moment in response to the first obtuse angle being less than or equal to the first angle; the first lens direction is the target lens direction.
  • the apparatus further includes:
  • a second direction determination module configured to determine a second lens direction of the virtual camera in response to the first obtuse angle being greater than the first angle; the second lens direction points between the target lens direction and the rear pointing direction, and the second obtuse angle between the second lens direction and the rear reference line is equal to the first angle; the rear pointing direction is the direction from the front of the first virtual vehicle to its rear.
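The lens-direction rule of the two modules above can be sketched in two dimensions as follows: follow the target while the first obtuse angle stays within the first angle, otherwise clamp the direction so the angle equals the first angle. The clamp geometry here is a deliberate simplification, and all names and values are illustrative assumptions rather than the application's implementation.

```python
import math

FIRST_ANGLE = 165.0   # assumed clamp angle, in degrees

def angle_deg(v1, v2):
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def rotate(v, degrees):
    t = math.radians(degrees)
    return (v[0] * math.cos(t) - v[1] * math.sin(t),
            v[0] * math.sin(t) + v[1] * math.cos(t))

def lens_direction(camera_pos, target_center, rear_line_dir, rear_pointing_dir):
    """Return the target lens direction while its obtuse angle with the rear
    reference line stays within FIRST_ANGLE; otherwise return a direction
    clamped so the second obtuse angle equals FIRST_ANGLE (2D simplification)."""
    target_dir = (target_center[0] - camera_pos[0], target_center[1] - camera_pos[1])
    a = angle_deg(target_dir, rear_line_dir)
    first_obtuse = max(a, 180.0 - a)
    if first_obtuse <= FIRST_ANGLE:
        return target_dir                      # step 505: follow the target
    # step 506: rotate the rear reference line by FIRST_ANGLE toward the side
    # on which the rear pointing direction lies (simplified clamp geometry)
    cross = rear_line_dir[0] * rear_pointing_dir[1] - rear_line_dir[1] * rear_pointing_dir[0]
    return rotate(rear_line_dir, FIRST_ANGLE if cross >= 0 else -FIRST_ANGLE)

# rear reference line along +x, rear pointing direction along -y
print(lens_direction((0.0, 0.0), (3.0, -1.0), (1.0, 0.0), (0.0, -1.0)))
```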
  • the apparatus further includes:
  • the second picture ending module is configured to end displaying the auxiliary picture in response to the duration for which the first obtuse angle between the target lens direction and the rear reference line remains greater than the first angle reaching a second duration.
  • the auxiliary picture display module 1230 includes:
  • a direction determination submodule configured to maintain the lens direction of the virtual camera in response to the relative distance between the target virtual vehicle and the first virtual vehicle being less than or equal to a second distance
  • the picture shooting submodule is configured to display, on the virtual scene picture, the auxiliary picture captured by the virtual camera in that lens direction.
  • the auxiliary picture display module 1230 includes:
  • the auxiliary picture display submodule is configured to display the auxiliary picture in the virtual scene picture in response to the display time of the virtual scene picture being greater than a third duration.
  • By applying this embodiment of the present application, the relative distance between the first virtual vehicle and the second virtual vehicles is detected in real time, the target virtual vehicle among them is determined, the virtual scene is captured by the virtual camera with the target virtual vehicle as the focus, and the auxiliary picture obtained by the capturing is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, the foregoing solution can flexibly determine the target virtual vehicle at each moment and display an auxiliary picture focused on that target virtual vehicle, so that the auxiliary picture shows valid content as much as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation and ensures that the user can observe valid picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, thereby improving the interaction efficiency and the human-computer interaction efficiency when the user controls the virtual vehicle.
  • FIG. 13 is a structural block diagram of an apparatus for displaying images in a virtual scene according to an embodiment of the present application.
  • the apparatus for displaying pictures in a virtual scene may be used in a terminal to perform all or part of the steps performed by the terminal in the method shown in the corresponding embodiment of FIG. 4 or FIG. 5 .
  • the device for displaying pictures in the virtual scene may include:
  • the main screen display module 1310 is configured to display a virtual scene image, and the virtual scene image includes a first virtual vehicle;
  • the first auxiliary picture display module 1320 is configured to display a first auxiliary picture in the virtual scene picture; the first auxiliary picture takes the first target virtual vehicle as the focus and is a picture captured by the virtual camera set corresponding to the first virtual vehicle;
  • the first target virtual vehicle is the virtual vehicle with the smallest relative distance from the first virtual vehicle, the relative distance being less than or equal to the first distance;
  • the second auxiliary picture display module 1330 is configured to display a second auxiliary picture in the virtual scene picture in response to the virtual vehicle whose relative distance from the first virtual vehicle is the smallest, that relative distance being less than or equal to the first distance, changing to a second target virtual vehicle; the second auxiliary picture takes the second target virtual vehicle as the focus and is a picture captured by the virtual camera set corresponding to the first virtual vehicle.
  • By applying this embodiment of the present application, the relative distance between the first virtual vehicle and the second virtual vehicles is detected in real time, the target virtual vehicle among them is determined, the virtual scene is captured by the virtual camera with the target virtual vehicle as the focus, and the auxiliary picture obtained by the capturing is displayed. Because the relative distance between a second virtual vehicle and the first virtual vehicle may change frequently, the foregoing solution can flexibly determine the target virtual vehicle at each moment and display an auxiliary picture focused on that target virtual vehicle, so that the auxiliary picture shows valid content as much as possible. This improves the efficiency with which the auxiliary picture conveys information useful to the user's operation and ensures that the user can observe valid picture content behind the vehicle while normally observing the picture in front of the virtual vehicle, thereby improving the interaction efficiency and the human-computer interaction efficiency when the user controls the virtual vehicle.
  • FIG. 14 is a structural block diagram of a computer device 1400 shown in an embodiment of the present application.
  • The computer device 1400 may be a user terminal, such as a smartphone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a laptop computer, or a desktop computer.
  • Computer device 1400 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, and the like by other names.
  • computer device 1400 includes: processor 1401 and memory 1402 .
  • the processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 1401 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or a programmable logic array (Programmable Logic Array, PLA).
  • the processor 1401 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a central processing unit (CPU); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1401 may be integrated with a graphics processor (Graphics Processing Unit, GPU), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1401 may further include an artificial intelligence (Artificial Intelligence, AI) processor for processing computing operations related to machine learning.
  • Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices, flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1402 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1401 to implement all of the methods provided in the embodiments of the present application or part of the steps.
  • the computer device 1400 may also optionally include: a peripheral device interface 1403 and at least one peripheral device.
  • the processor 1401, the memory 1402 and the peripheral device interface 1403 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1403 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1404 , a display screen 1405 , a camera assembly 1406 , an audio circuit 1407 , a positioning assembly 1408 and a power supply 1409 .
  • Those skilled in the art can understand that the structure shown in FIG. 14 does not constitute a limitation on the computer device 1400, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example, a memory including at least one instruction, at least one program, a code set, or an instruction set. The at least one instruction, the at least one program, the code set, or the instruction set may be executed by the processor to complete all or part of the steps of the method shown in the embodiment corresponding to FIG. 3, FIG. 4, or FIG. 5.
  • For example, the non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • the embodiments of the present application provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the terminal reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the terminal executes the method for displaying pictures in a virtual scene provided in various optional implementation manners of the foregoing aspects.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to a method, apparatus, computer device, storage medium, and computer program product for displaying pictures in a virtual scene, and relates to the field of virtual scene technology. The method includes: displaying a virtual scene picture, the virtual scene picture including a first virtual vehicle; determining a target virtual vehicle from at least one second virtual vehicle based on a relative distance between the first virtual vehicle and the second virtual vehicle, the second virtual vehicle being a virtual vehicle located behind the first virtual vehicle; and displaying an auxiliary picture in the virtual scene picture, the auxiliary picture being a picture captured, with the target virtual vehicle as the focus, by a virtual camera set corresponding to the first virtual vehicle.

Description

虚拟场景中画面展示方法、装置、设备、存储介质及程序产品
相关申请的交叉引用
本申请基于申请号为202110090636.X、申请日为2021年01月22日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及虚拟场景技术领域,特别涉及一种虚拟场景中画面展示方法、装置、计算机设备、存储介质及计算机程序产品。
背景技术
在操控虚拟车辆的游戏类应用程序中,比如,在赛车类游戏中,用户可以在游戏界面中模拟实际驾驶车辆的后视镜功能。
在相关技术中,在虚拟场景画面上叠加有后视镜功能控件,通过接收用户对后视镜功能控件的触发操作,可以直接将终端的显示屏幕上展示的虚拟场景画面切换为主控虚拟车辆后面的视角。
然而,相关技术,通过触发后视镜功能控件直接全屏展示后面视角的虚拟场景画面,会出现用户无法观察到虚拟车辆前方的画面的情况,影响用户控制虚拟车辆时的交互效率,使得人机交互效率低。
发明内容
本申请实施例提供了一种虚拟场景中画面展示方法、装置、计算机设备、存储介质及计算机程序产品,能够提高人机交互效率。
本申请实施例提供了一种虚拟场景中画面展示方法,所述方法包括:
展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述第二虚拟车辆中确定目标虚拟车辆;所述第二虚拟车辆是位于所述第一虚拟车辆后方的虚拟车辆;
在所述虚拟场景画面中展示辅助画面;所述辅助画面是以所述目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面。
本申请实施例还提供了一种虚拟场景中画面展示方法,所述方法包括:
展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
在所述虚拟场景画面中展示第一辅助画面;所述第一辅助画面是以第一目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面;所述第一目标虚拟车辆是与所述第一虚拟车辆之间的相对距离最小,且所述相对距离小于等于第一距离的虚拟车辆;
响应于与所述第一虚拟车辆之间的所述相对距离最小,且所述相对距离小于等于所 述第一距离的所述虚拟车辆变换为第二目标虚拟车辆,在所述虚拟场景画面中展示第二辅助画面,所述第二辅助画面是以所述第二目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的所述虚拟摄像头进行拍摄的画面。
本申请实施例还提供了一种虚拟场景中画面展示装置,所述装置包括:
主画面展示模块,配置为展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
目标确定模块,配置为基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述第二虚拟车辆中确定目标虚拟车辆;所述第二虚拟车辆是位于所述第一虚拟车辆后方的虚拟车辆;
辅画面展示模块,配置为在所述虚拟场景画面中展示辅助画面;所述辅助画面是以所述目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面。
本申请实施例还提供了一种虚拟场景中画面展示装置,所述装置包括:
主画面展示模块,配置为展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
第一辅画面展示模块,配置为在所述虚拟场景画面中展示第一辅助画面;所述第一辅助画面是以第一目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面;所述第一目标虚拟车辆是与所述第一虚拟车辆之间的相对距离最小,且所述相对距离小于等于第一距离的虚拟车辆;
第二辅画面展示模块,配置为响应于与所述第一虚拟车辆之间的所述相对距离最小,且所述相对距离小于等于所述第一距离的所述虚拟车辆变换为第二目标虚拟车辆,在所述虚拟场景画面中展示第二辅助画面,所述第二辅助画面是以所述第二目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的所述虚拟摄像头进行拍摄的画面。
本申请实施例还提供了一种计算机设备,所述计算机设备包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现上述虚拟场景中画面展示方法。
本申请实施例提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现上述虚拟场景中画面展示方法。
本申请实施例还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。终端的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该终端执行上述方面的各种可选实现方式中提供的虚拟场景中画面展示方法。
本申请实施例提供的技术方案的有益效果至少包括:
通过实时检测第一虚拟车辆与第二虚拟车辆之间的相对距离,确定其中的目标虚拟车辆,并且通过虚拟摄像机以目标虚拟车辆为焦点对虚拟场景进行拍摄,展示拍摄得到的辅助画面。由于第二虚拟车辆与第一虚拟车辆之间的相对距离可能会出现经常变化的情况,通过上述方案可以灵活的确定各个时刻对应的目标虚拟车辆,并且展示以目标虚拟车辆为焦点的辅助画面,从而使得辅助画面能够尽可能的显示有效的画面,提高了辅助画面传递有益于用户操作的信息的效率,保证用户在正常观察虚拟车辆前方的画面的同时,也能够观察到车辆后方的有效的画面内容,从而提高控制虚拟车辆时的交互效率、以及提高人机交互效率。
附图说明
图1是本申请实施例提供的实施环境的示意图;
图2是本申请实施例提供的虚拟场景的显示界面示意图;
图3是本申请实施例提供的一种虚拟场景中画面展示方法的流程图;
图4是本申请实施例提供的一种虚拟场景中画面展示方法的流程图;
图5是本申请实施例提供的一种虚拟场景中画面展示方法流程图;
图6是本申请实施例提供的一种用于拍摄辅助画面的虚拟摄像头的设置位置示意图;
图7是本申请实施例提供的一种目标镜头方向与车尾参考线之间的钝角确定过程示意图;
图8是本申请实施例提供的一种镜头方向确定过程的示意图;
图9是本申请实施例提供的一种辅助画面对应的焦点切换示意图;
图10是本申请实施例提供的一种目标镜头方向与车尾参考线之间的第一钝角大于第一角度时辅助画面的示意图;
图11是本申请实施例提供的一种虚拟场景中画面展示方法的逻辑流程图;
图12是本申请实施例提供的一种虚拟场景中画面展示装置的结构方框图;
图13是本申请实施例提供的一种虚拟场景中画面展示装置的结构方框图;
图14是本申请实施例提供的计算机设备的结构框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
在以下的描述中,所涉及的术语“第一\第二”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
应当理解的是,在本文中提及的“若干个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
为了便于理解,下面对本申请涉及的几个名词进行解释。
1)虚拟场景
虚拟场景是应用程序在终端上运行时显示(或提供)的虚拟的场景。该虚拟场景可以是对真实世界的仿真环境场景,也可以是半仿真半虚构的三维环境场景,还可以是纯虚构的三维环境场景。虚拟场景可以是二维虚拟场景、2.5维虚拟场景和三维虚拟场景中的任意一种,下述实施例以虚拟场景是三维虚拟场景来举例说明,但对此不加以限定。在一些实施例中,该虚拟场景还可用于至少两个虚拟角色之间的虚拟场景 对战。该虚拟场景还可用于至少两个虚拟角色之间使用虚拟枪械进行对战。在一些实施例中,该虚拟场景还可用于在目标区域范围内,至少两个虚拟角色之间使用虚拟枪械进行对战,该目标区域范围会随虚拟场景中的时间推移而不断变小。
虚拟场景通常由终端等计算机设备中的应用程序生成基于终端中的硬件(比如屏幕)进行展示。该终端可以是智能手机、平板电脑或者电子书阅读器等移动终端;或者,该终端也可以是笔记本电脑或者固定式计算机的个人计算机设备。
2)虚拟对象
虚拟对象是指在虚拟场景中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、虚拟载具中的至少一种。在一些实施例中,当虚拟场景为三维虚拟场景时,虚拟对象是基于动画骨骼技术创建的三维立体模型。每个虚拟对象在三维虚拟场景中具有自身的形状、体积以及朝向,并占据三维虚拟场景中的一部分空间。
3)虚拟车辆
虚拟车辆是指虚拟对象在虚拟环境中能够根据用户对操作控件的控制,实现行驶操作的虚拟车辆,该虚拟车辆能实现的功能可以包括加速、减速、刹车、后退、转向、漂移以及使用道具等,上述功能可以是自动实现的、例如虚拟车辆可以自动加速,或者该虚拟车辆可以自动转向;上述功能还可以是根据用户对操作控件的控制触发实现的,例如当用户触发刹车控件,虚拟车辆执行刹车动作。
4)赛车游戏
赛车游戏主要是在虚拟的比赛场景下进行的,多个虚拟车辆以实现指定比赛目标为目的实现的竞速类游戏,在该虚拟比赛场景中,用户可以控制终端对应的虚拟车辆,与其他用户控制的虚拟车辆进行竞速比赛;用户也可以控制终端对应的虚拟车辆,与赛车游戏对应的客户端程序生成的AI控制的虚拟车辆进行竞速比赛。
图1示出了本申请实施例提供的实施环境的示意图。该实施环境可以包括:第一终端110、服务器120和第二终端130。
第一终端110安装和运行有支持虚拟环境的应用程序111,该应用程序111可以是多人在线对战程序,或者,该应用程序111也可以是离线类应用程序。当第一终端运行应用程序111时,第一终端110的屏幕上显示应用程序111的用户界面。该应用程序111可以是竞速游戏(Racing Game,RCG)、包含赛车功能的沙盒(Sandbox)类游戏、或者是包含赛车功能的其他类型游戏。在本实施例中,以该应用程序111是RCG来举例说明。第一终端110是第一用户112使用的终端,第一用户112使用第一终端110控制位于虚拟环境中的第一虚拟车辆进行活动,第一虚拟车辆可以称为第一用户112的主控虚拟对象。第一虚拟车辆的活动包括但不限于:加速、减速、刹车、后退、转向、漂移以及使用道具等中的至少一种。示意性的,第一虚拟车辆可以是虚拟的车辆、或者根据其他交通工具(例如船舶、飞机)等建模出的具有虚拟车辆功能的虚拟模型;第一虚拟车辆还可以是根据现实中具有的现实车辆模型建模出的虚拟车辆。
第二终端130安装和运行有支持虚拟环境的应用程序131,该应用程序131可以是多人在线对战程序。当第二终端130运行应用程序131时,第二终端130的屏幕上显示应用程序131的用户界面。该客户端可以是RCG游戏程序、Sandbox游戏、以及其他包含赛车功能的游戏程序中的任意一种,在本实施例中,以该应用程序131是RCG游戏来举例说明。
在一些实施例中,第二终端130是第二用户132使用的终端,第二用户132使用第二终端130控制位于虚拟环境中的第二虚拟车辆实现行驶操作,第二虚拟车辆可以 称为第二用户132的主控虚拟车辆。
在一些实施例中,该虚拟环境中还可以存在第三虚拟车辆,该第三虚拟车辆是由该应用程序131对应的AI控制的,该第三虚拟车辆可以称为AI控制虚拟车辆。
在一些实施例中,第一虚拟车辆、第二虚拟车辆以及第三虚拟车辆处于同一虚拟世界中,第一虚拟车辆和第二虚拟车辆可以属于同一个阵营、同一个队伍、同一个组织、具有好友关系或具有临时性的通讯权限。在一些实施例中,第一虚拟车辆和第二虚拟车辆可以属于不同的阵营、不同的队伍、不同的组织或具有敌对关系。
在一些实施例中,第一终端110和第二终端130上安装的应用程序是相同的,或两个终端上安装的应用程序是不同操作系统平台(安卓或IOS)上的同一类型应用程序。第一终端110可以泛指多个终端中的一个,第二终端130可以泛指多个终端中的另一个,本实施例仅以第一终端110和第二终端130来举例说明。第一终端110和第二终端130的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、电子书阅读器、MP3播放器、MP4播放器、膝上型便携计算机和台式计算机中的至少一种。
图1中仅示出了两个终端,但在不同实施例中存在多个其它终端可以接入服务器120。在一些实施例中,还存在一个或多个终端是开发者对应的终端,在该终端上安装有支持虚拟环境的应用程序的开发和编辑平台,开发者可在该终端上对应用程序进行编辑和更新,并将更新后的应用程序安装包通过有线或无线网络传输至服务器120,第一终端110和第二终端130可从服务器120下载应用程序安装包实现对应用程序的更新。
第一终端110、第二终端130以及其它终端通过无线网络或有线网络与服务器120相连。
服务器120包括一台服务器、多台服务器组成的服务器集群、云计算平台和虚拟化中心中的至少一种。服务器120用于为支持三维虚拟环境的应用程序提供后台服务。在一些实施例中,服务器120承担主要计算工作,终端承担次要计算工作;或者,服务器120承担次要计算工作,终端承担主要计算工作;或者,服务器120和终端之间采用分布式计算架构进行协同计算。
在一个示意性的例子中,服务器120包括存储器121、处理器122、用户账号数据库123、对战服务模块124、面向用户的输入/输出接口(Input/Output Interface,I/O接口)125。其中,处理器122用于加载服务器120中存储的指令,处理用户账号数据库123和对战服务模块124中的数据;用户账号数据库123用于存储第一终端110、第二终端130以及其它终端所使用的用户账号的数据,比如用户账号的头像、用户账号的昵称、用户账号的战斗力指数,用户账号所在的服务区;对战服务模块124用于提供多个对战房间供用户进行对战,比如1V1对战、3V3对战、5V5对战等;面向用户的I/O接口125用于通过无线网络或有线网络和第一终端110和/或第二终端130建立通信交换数据。
其中,虚拟场景可以是三维虚拟场景,或者,虚拟场景也可以是二维虚拟场景。以虚拟场景是三维虚拟场景为例,请参考图2,其示出了本申请一个示例性的实施例提供的虚拟场景的显示界面示意图。如图2所示,虚拟场景的显示界面包含场景画面200,该场景画面200中包括当前控制的虚拟车辆210、三维虚拟场景的环境画面220、以及虚拟车辆240。其中,虚拟车辆240可以是其它终端对应用户控制的虚拟对象或者应用程序控制的虚拟对象。
在图2中,当前控制的虚拟车辆210与虚拟车辆240是在三维虚拟场景中的三维模型,在场景画面200中显示的三维虚拟场景的环境画面为当前控制的虚拟车辆210对应的第三人称视角所观察到的物体,其中该虚拟车辆210对应的第三人称视角是指 从该虚拟车辆的后上方设置的虚拟摄像头观察到的视角画面,示例性的,如图2所示,在当前控制的虚拟车辆210对应的第三人称视角的观察下,显示的三维虚拟场景的环境画面220为道路224、天空225、小山221以及厂房222。
当前控制的虚拟车辆210可以在用户的控制下进行转向、加速、漂移等操作,在用户的控制下虚拟场景中的虚拟车辆可以展示不同的三维模型,比如,终端的屏幕支持触控操作,且虚拟场景的场景画面200中包含虚拟控件,则用户触控该虚拟控件时,当前控制的虚拟车辆210可以在虚拟场景执行指定操作(例如变形操作)并且展示当前对应的三维模型。
图3示出了本申请实施例提供的虚拟场景中画面展示方法的流程图。其中,上述方法可以由计算机设备执行,该计算机设备可以是终端,也可以是服务器,或者,上述计算机设备也可以包含上述终端和服务器。如图3所示,该虚拟场景中画面展示方法,包括:
步骤301,计算机设备展示虚拟场景画面,虚拟场景画面中包括第一虚拟车辆。
步骤302,基于第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从第二虚拟车辆中确定目标虚拟车辆;第二虚拟车辆是位于第一虚拟车辆后方的虚拟车辆。
步骤303,在虚拟场景画面中展示辅助画面;辅助画面是以目标虚拟车辆为焦点,通过对应第一虚拟车辆设置的虚拟摄像头进行拍摄的画面。
应用本申请实施例,通过实时检测第一虚拟车辆与第二虚拟车辆之间的相对距离,确定其中的目标虚拟车辆,并且通过虚拟摄像机以目标虚拟车辆为焦点对虚拟场景进行拍摄,展示拍摄得到的辅助画面。由于第二虚拟车辆与第一虚拟车辆之间的相对距离可能会出现经常变化的情况,通过上述方案可以灵活的确定各个时刻对应的目标虚拟车辆,并且展示以目标虚拟车辆为焦点的辅助画面,从而使得辅助画面能够尽可能的显示有效的画面,提高了辅助画面传递有益于用户操作的信息的效率,保证用户在正常观察虚拟车辆前方的画面的同时,也能够观察到车辆后方的有效的画面内容,从而提高控制虚拟车辆时的交互效率以及人机交互效率。
图4示出了本申请实施例提供的虚拟场景中画面展示方法的流程图。其中,上述方法可以由计算机设备执行,该计算机设备可以是终端,也可以是服务器,或者,上述计算机设备也可以包含上述终端和服务器。如图4所示,该虚拟场景中画面展示方法,包括:
步骤401,计算机设备展示虚拟场景画面,虚拟场景画面中包括第一虚拟车辆。
步骤402,在虚拟场景画面中展示第一辅助画面;第一辅助画面是以第一目标虚拟车辆为焦点,通过对应第一虚拟车辆设置的虚拟摄像头进行拍摄的画面;第一目标虚拟车辆是与第一虚拟车辆之间的相对距离最小,且相对距离小于等于第一距离的虚拟车辆。
步骤403,响应于与第一虚拟车辆之间的相对距离最小,且相对距离小于等于第一距离的虚拟车辆变换为第二目标虚拟车辆,在虚拟场景画面中展示第二辅助画面,第二辅助画面是以第二目标虚拟车辆为焦点,通过对应第一虚拟车辆设置的虚拟摄像头进行拍摄的画面。
综上所述,本申请所示方案,通过实时检测第一虚拟车辆与第二虚拟车辆之间的相对距离,确定其中的目标虚拟车辆,并且通过虚拟摄像机以目标虚拟车辆为焦点对虚拟场景进行拍摄,展示拍摄得到的辅助画面。由于第二虚拟车辆与第一虚拟车辆之间的相对距离可能会出现经常变化的情况,通过上述方案可以灵活的确定各个时刻对应的目标虚拟车辆,并且展示以目标虚拟车辆为焦点的辅助画面,从而使得辅助画面 能够尽可能的显示有效的画面,提高了辅助画面传递有益于用户操作的信息的效率,保证用户在正常观察虚拟车辆前方的画面的同时,也能够观察到车辆后方的有效的画面内容,从而提高用户控制虚拟车辆时的交互效率以及人机交互效率。
图5示出了本申请实施例示出的虚拟场景中画面展示方法流程图。其中,上述方法可以由计算机设备执行,该计算机设备可以是终端,也可以是服务器,或者,上述计算机设备也可以包含上述终端和服务器。如图5所示,以计算机设备是终端为例,终端可以通过执行以下步骤来在虚拟场景画面上展示辅助画面。
步骤501,计算机设备展示虚拟场景画面。
在本申请实施例中,终端展示包含第一虚拟车辆的虚拟场景画面。
其中,该虚拟场景画面可以是包含第一虚拟车辆的,并且与其他虚拟车辆进行赛车比赛的虚拟场景。第一虚拟车辆是由该终端控制的虚拟车辆,其它虚拟车辆可以是其他终端控制的或者是AI控制的虚拟车辆。
在一些实施例中,该虚拟场景画面是根据该第一虚拟车辆的第三人称视角观察到的虚拟场景画面。其中,该第一虚拟车辆的第三人称视角是该第一虚拟车辆的后上方设置的主画面虚拟摄像头对应的视角,该第一虚拟车辆的第三人称视角观察到的虚拟场景画面是该第一虚拟车辆后上方设置的主画面虚拟摄像头观察到的虚拟场景画面。
或者,该虚拟场景画面是该第一虚拟车辆的第一人称视角观察到的虚拟场景画面。其中,该第一虚拟车辆的第一人称视角是该第一虚拟车辆的驾驶员位置上设置的主画面虚拟摄像头对应的视角,该第一虚拟车辆的第一人称视角观察到的虚拟场景画面是该第一虚拟车辆的驾驶员位置上设置的主画面虚拟摄像头观察到的虚拟场景画面。
在一些实施例中,虚拟场景画面是铺满展示在终端的显示区域中,该虚拟场景画面是控制第一虚拟车辆在虚拟场景中进行赛车比赛时的主显示画面,用于展示第一虚拟车辆在赛车比赛过程中的路径画面,用户通过获取到前方的路径画面对第一虚拟车辆进行操控。
其中,在虚拟场景画面上叠加有控件或者显示信息。
比如,控件可以包括用于接收触发操作控制第一虚拟车辆的移动方向的方向控件、用于接收触发操作控制第一虚拟车辆进行刹车的刹车控件以及用于控制第一虚拟车辆进行加速移动的加速控件等。而显示信息可以包括用于指示第一虚拟车辆以及其它虚拟车辆对应的账号身份标识,以及当前时刻各个虚拟车辆位置的先后顺序的排行信息、用于指示完整的虚拟场景的地图,以及简略的各个虚拟车辆在地图上的位置的地图信息等。
在一些实施例中,虚拟场景画面上叠加有视角切换控件,响应于用户对该视角切换控件的指定操作,虚拟场景画面可以在第一虚拟车辆的第一人称视角以及该第一虚拟车辆的第三人称视角之间切换。
例如,当终端显示的该虚拟场景画面是该第一虚拟车辆的第一人称视角对应的虚拟场景画面时,响应于用户对该视角切换控件的指定操作,该终端将该第一虚拟车辆的第一人称视角对应的虚拟场景画面切换为第三人称视角对应的虚拟场景画面;当终端显示的该虚拟场景画面是该第一虚拟车辆对应的第三人称视角对应的虚拟场景画面时,响应于用户对该视角切换控件的指定操作,该终端将该第一虚拟车辆的第三人称视角对应的虚拟场景画面切换为第一人称视角对应的虚拟场景画面。
在一些实施例中,同一用户账号对应的虚拟车辆可以是多个不同类型的虚拟车辆,终端响应于接收到服务器下发的虚拟车辆信息,在车辆选择界面上展示虚拟车辆信息对应的各个不同类型的虚拟车辆。响应于接收到用户对车辆选择界面的选择操 作,确定选择操作对应的目标虚拟车辆,并将目标虚拟车辆确定为第一虚拟车辆,同理,服务器接收到指定的虚拟场景标识,将对应的虚拟场景展示在终端上。
步骤502,获取第一虚拟车辆与第二虚拟车辆之间的相对距离。
在本申请实施例中,终端获取第一虚拟车辆与各个第二虚拟车辆之间的相对距离,第二虚拟车辆是位于第一虚拟车辆后方的虚拟车辆。
在一些实施例中,通过获取第一虚拟车辆的车尾参考线,将未超过该车尾参考线的区域确定为第一虚拟车辆的后方,且将位于第一虚拟车辆后方的各个虚拟车辆确定为第二虚拟车辆。
其中,第一虚拟车辆的车尾参考线是第一虚拟车辆的车尾所在直线,并且该车尾参考线与虚拟场景中的水平面平行,且与第一虚拟车辆的车头和车尾的连线垂直。
在一些实施例中,将第一虚拟车辆的车尾与第二虚拟车辆的中心点之间的连线长度,确定为相对距离。
其中,第二虚拟车辆的中心点可以是该虚拟车辆的重心。并且计算得到的相对距离是指虚拟场景中的距离。
步骤503,基于第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从第二虚拟车辆中确定目标虚拟车辆。
在本申请实施例中,终端基于确定的各个第二虚拟车辆与第一虚拟车辆之间的相对距离,判断该相对距离是否满足指定条件,当该相对距离满足指定条件时,确定该相对距离对应的第二虚拟车辆是目标虚拟车辆,若没有满足指定条件的相对距离,则当前时刻不存在目标虚拟车辆。
在一些实施例中,在同一时刻,只存在一个目标虚拟车辆,或者不存在目标虚拟车辆。
在一些实施例中,将同时满足与第一虚拟车辆之间的相对距离小于等于第一距离以及该相对距离最小的第二虚拟车辆,确定为目标虚拟车辆。
1)首先,获取待选虚拟车辆,从待选虚拟车辆中确定目标虚拟车辆。然后,将与第一虚拟车辆之间的相对距离最小的待选虚拟车辆,确定为目标虚拟车辆。
其中,待选虚拟车辆是与第一虚拟车辆之间的相对距离小于等于第一距离的第二虚拟车辆。该待选虚拟车辆可以是满足相对距离小于等于第一距离的条件的部分第二虚拟车辆的集合。
比如,若第一距离为100米,当获取到虚拟场景中的第一虚拟车辆后方与该第一虚拟车辆的相对距离小于等于100米的第二虚拟车辆具有虚拟车辆甲、虚拟车辆乙以及虚拟车辆丙时,获取虚拟车辆甲、虚拟车辆乙以及虚拟车辆丙为待选虚拟车辆。比较三个待选虚拟车辆分别与第一虚拟车辆之间的相对距离,若虚拟车辆甲对应的相对距离是60米、虚拟车辆乙对应的相对距离是30米、虚拟车辆丙对应的相对距离是100米,则将相对距离最小的虚拟车辆乙确定为目标虚拟车辆。
2)首先,获取与第一虚拟车辆之间的相对距离最小的第二虚拟车辆,然后,响应于第二虚拟车辆与第一虚拟车辆之间的相对距离小于等于第一距离,将第二虚拟车辆确定为目标虚拟车辆。
比如,当获取到虚拟场景中的第一虚拟车辆后方具有虚拟车辆甲、虚拟车辆乙以及虚拟车辆丙时,获取虚拟车辆甲、虚拟车辆乙以及虚拟车辆丙分别与第一虚拟车辆之间的相对距离,然后比较三个待选虚拟车辆分别与第一虚拟车辆之间的相对距离,若虚拟车辆甲对应的相对距离是105米、虚拟车辆乙对应的相对距离是110米、虚拟车辆丙对应的相对距离是120米,则确定相对距离最小的虚拟车辆是虚拟车辆甲,然后判断虚拟车辆甲是否小于等于第一距离,若第一距离是100米,则判断虚拟车辆甲 不符合,当前时刻不具有目标虚拟车辆。若第一距离是105米,则判断虚拟车辆甲符合小于等于第一距离的条件,确定当前时刻的目标虚拟车辆是虚拟车辆甲。
步骤504,获取目标镜头方向与车尾参考线之间的第一钝角。
在本申请实施例中,在第一虚拟车辆的斜上方具有用于拍摄辅助画面的虚拟摄像头,并且该虚拟摄像头随着第一虚拟车辆移动。终端获取该虚拟摄像头对应的目标镜头方向与第一虚拟车辆的车尾参考线之间的第一钝角。
其中,目标镜头方向是从虚拟摄像机指向目标虚拟车辆的中心点的方向,车尾参考线是第一虚拟车辆的车尾所在直线,并且车尾参考线与水平面平行,且与第一虚拟车辆的车头和车尾的连线垂直。
示例性的,图6是本申请实施例涉及的一种用于拍摄辅助画面的虚拟摄像头的设置位置示意图。如图6所示,当虚拟场景中具有第一虚拟车辆621以及目标虚拟车辆631时,通过俯视图可以确定虚拟摄像机611位于第一虚拟车辆621的右前方。若虚拟场景中具有第一虚拟车辆622以及目标虚拟车辆632时,通过侧视图可以确定虚拟摄像机612位于第一虚拟车辆622的前上方。
其中,该虚拟摄像机也可以位于第一虚拟对象的左前上方。
比如,图7是本申请实施例涉及的一种目标镜头方向与车尾参考线之间的钝角确定过程示意图。如图7所示,通过将第一虚拟车辆72的车尾与第二虚拟车辆73的中心点之间进行连线,确定第一虚拟车辆72与该第二虚拟车辆之间的相对距离76,通过判断相对距离可以确定该第二虚拟车辆是目标虚拟车辆73,通过第一虚拟车辆72的车尾,作平行于水平面,垂直于车头车尾连线的直线,将该直线获取为第一虚拟车辆72的车尾参考线75,然后通过虚拟摄像机71向目标虚拟车辆73的中心点进行连线,该连线的指向方向作为虚拟摄像机71的目标镜头方向74,其中,目标镜头方向74与第一虚拟车辆72的车尾参考线75之间相交形成了四个夹角,其中包括两个锐角以及两个钝角,且两个锐角的角度相同,两个钝角的角度相同,将其中的第一钝角77进行获取。
步骤505,响应于第一钝角小于等于第一角度,基于目标虚拟车辆的位置,确定虚拟摄像机的第一镜头方向。
在本申请实施例中,响应于终端获取到的第一钝角小于等于第一角度,则基于目标虚拟车辆的位置,确定虚拟摄像机的第一镜头方向,该第一镜头方向为虚拟摄像头实际拍摄虚拟场景的镜头方向。
在一些实施例中,响应于终端获取到的第一钝角小于等于第一角度,将第一镜头方向确定为目标镜头方向。
示例性的,图8是本申请实施例涉及的一种镜头方向确定过程的示意图。如图8所示,若第一角度83为165度,当目标虚拟车辆移动到图中虚线位置时,目标镜头方向与第一虚拟车辆82的车尾参考线之间相交形成的第一钝角的角度是165度,当目标虚拟车辆86处于如图所示位置时,此时目标镜头方向与第一虚拟车辆82的车尾参考线之间相交形成的是第一钝角84,将第一钝角84与第一角度83进行比较,可以确定该第一钝角84小于第一角度83,从而将第一镜头方向确定为目标镜头方向。
步骤506,响应于第一钝角大于第一角度,确定虚拟摄像机的第二镜头方向。
在本申请实施例中,若响应于终端获取到的第一钝角大于第一角度,则确定虚拟摄像机的第二镜头方向为虚拟摄像头实际拍摄虚拟场景的镜头方向。
在一些实施例中,第二镜头方向指向目标镜头方向与车尾指向方向之间,且第二镜头方向与车尾参考线之间的第二钝角的角度为第一角度,车尾指向方向是从第一虚拟车辆的车头指向车尾的方向。
示例性的,如图8所示,若第一角度83为165度,当目标虚拟车辆移动到图中虚线位置时,目标镜头方向与第一虚拟车辆82的车尾参考线之间相交形成的第二钝角的角度是165度,当目标虚拟车辆87处于如图所示位置时,此时目标镜头方向与第一虚拟车辆82的车尾参考线之间相交形成的是第一钝角85,将第一钝角85与第一角度83进行比较,可以确定该第一钝角85大于第一角度83,从而将第一镜头方向确定为第二钝角为第一角度时对应的目标镜头方向。
步骤507,响应于辅助画面展示在虚拟场景画面中,启动画面展示计时器。
在本申请实施例中,当开始在虚拟场景画面中展示辅助画面的同时,启动画面展示计时器。
其中,画面展示计时器用于记录辅助画面在虚拟场景画面中连续展示的时长,或者该画面展示计时器还可以用于记录具有同一焦点的辅助画面的展示时长。
在一些实施例中,响应于确定第二虚拟车辆中存在目标虚拟车辆,开始在虚拟场景画面中展示辅助画面,同时开始进行画面展示计时器的计时功能,对辅助画面的展示时长进行记录。
其中,辅助画面可以展示在虚拟场景画面上的任意区域,并且辅助画面的尺寸大小可以进行调节,用户可以在画面设置界面对辅助画面在虚拟场景画面上的位置以及辅助画面展示的尺寸大小进行自定义设置或者选择。
在一些实施例中,响应于存在目标虚拟车辆,通过画面展示计时器进行计时,若之后终端接收到反馈此时不存在目标虚拟车辆,则结束画面展示计时器的计时功能,并且重置该画面展示计时器。
比如,若某一时刻终端通过计算确定目标虚拟车辆,开始启动画面展示计时器,并进行计时,当持续确定该目标虚拟车辆的时长为3秒时,通过画面展示计时器进行计时的时长为3s,若经过5s后,该目标虚拟车辆超过第一虚拟车辆,并且经过计算未确定存在其他虚拟车辆符合条件作为目标虚拟车辆,则结束画面展示计时器的计时功能,并且将计时器的计时时长重置清零。
在一些实施例中,响应于虚拟场景画面的展示时间大于第三时长,在虚拟场景画面中展示辅助画面。
其中,开始展示虚拟场景画面的时刻通过计时器进行计时,当计时器记录的时长在第三时长内时,终端不进行判断目标虚拟对象,当计时器记录的时长超过第三时长后,终端开始实时检测确定目标虚拟车辆的步骤。
比如,以赛车游戏为例,当各个虚拟车辆进入对局中后,自动进行对局开始倒计时,当倒计时结束后正式进入赛车竞速计时模式,在开始赛车竞速计时模式时的第三时长以内,不进行辅助画面的展示,或者不进行目标虚拟车辆的计算确定步骤,当赛车竞速模式开始时长超过第三时长后,开始基于目标虚拟车辆展示辅助画面。
在一些实施例中,响应于所述第一虚拟车辆从开始移动的位置到当前位置之间的距离大于指定距离,在虚拟场景画面中展示辅助画面。
其中,由于在起点出发位置,各个虚拟车辆从同一起跑线上开始移动,所以在开始阶段可能存在目标虚拟对象变化频繁的现象,而上述两种方式均用于控制第一虚拟车辆在起点出发位置附近不进行辅助画面的展示,可以避免无意义的辅助画面展示,从而节约终端资源。
在一些实施例中,响应于目标虚拟车辆与第一虚拟车辆之间的相对距离小于等于第二距离,保持虚拟摄像机的镜头方向,在虚拟场景画面上展示通过虚拟摄像机拍摄的镜头方向上的辅助画面。
其中,当第一虚拟车辆与目标虚拟车辆之间的相对距离很近时,并且达到虚拟摄 像机的镜头对应的最小有效距离即第二距离时,此时虚拟摄像机的镜头停止跟随目标虚拟车辆的位置进行移动,并且保持静止不动,该虚拟摄像机的镜头焦点仍然是目标虚拟车辆,若目标虚拟车辆与第一虚拟车辆之间的相对距离变大时,大于第二距离时,则虚拟摄像头的镜头方向继续跟随目标虚拟车辆移动。
步骤508,响应于在辅助画面展示过程中,目标虚拟车辆从第一目标虚拟车辆切换为第二目标虚拟车辆,将画面展示计时器进行重置。
在本申请实施例中,当辅助画面已经开始展示在虚拟场景画面中时,在各个虚拟车辆移动的过程中,在第一虚拟车辆后方与该第一虚拟车辆相对距离最小的虚拟车辆发生变化,由第一目标虚拟车辆变为第二目标虚拟车辆时,需要将画面展示计时器进行重置清零。
其中,第一目标虚拟车辆和第二目标虚拟车辆是至少一个第二虚拟车辆中的任意两个。
在一些实施例中,目标虚拟车辆从第一目标虚拟车辆切换到第二目标虚拟车辆后,虚拟摄像机的焦点由第一目标虚拟车辆切换到第二目标虚拟车辆。
其中,当目标虚拟车辆从第一目标虚拟车辆切换到第二目标虚拟车辆后,辅助画面中展示的虚拟场景,由以第一目标虚拟车辆为焦点进行拍摄的画面切换为以第二目标虚拟车辆为焦点进行拍摄的画面。
示例性的,以赛车游戏为例,图9是本申请实施例涉及的一种辅助画面对应的焦点切换示意图。如图9所示,若第一虚拟车辆91的后方具有第一目标虚拟车辆93以及第二目标虚拟车辆94,在当前虚拟场景画面上展示辅助画面92,由于当前时刻第一目标虚拟车辆93与第一虚拟车辆之间的相对距离最小,所以该辅助画面92是以第一目标虚拟车辆93为焦点进行拍摄的画面。若之后某一时刻,第二目标虚拟车辆94超过第一目标虚拟车辆93,成为与第一虚拟车辆之间的相对距离最小的虚拟车辆,所以将虚拟摄像机的焦点切换为第二目标虚拟车辆进行画面拍摄。
在一些实施例中,辅助画面上添加有用于指示冲刺效果的线条图案。
比如,如图9所示,冲刺效果的线条图案95位于辅助画面92的边缘,通过添加冲刺效果的线条图案95,可以使用户的紧张感增强,从而提高了用户的操作体验。
步骤509,响应于画面展示计时器对应的时长达到第一时长,结束展示辅助画面。
在本申请实施例中,当终端获取到画面展示计时器记录的时长达到第一时长时,结束在虚拟场景画面上展示辅助画面。
也就是说,响应于在辅助画面展示过程中,目标虚拟车辆从第一目标虚拟车辆切换为第二目标虚拟车辆,将画面展示计时器进行重置,当虚拟摄像头的焦点持续维持在同一虚拟车辆上的时长达到第一时长时,才结束在虚拟场景画面上展示该辅助画面。
通过上述方案,可以解决在辅助画面展示过程中,实时调整作为焦点的虚拟车辆,并且可以通过流畅的画面连续进行展示。有益于用户在进行操作过程中,通过辅助画面获取后方虚拟车辆的有效位置信息。
步骤510,响应于第一钝角大于第一角度的持续时间达到第二时长,结束展示辅助画面。
在本申请实施例中,当目标镜头方向与车尾参考线之间的第一钝角大于第一角度的持续时间达到第二时长,同样可以结束展示辅助画面。
在一些实施例中,当虚拟摄像头的镜头方向转到最大角度时,此时目标虚拟车辆是部分在辅助画面或者完全不在辅助画面中的状态,所以为了使辅助画面尽可能展示有意义的画面内容,当虚拟摄像头的镜头方向转到最大角度的时间持续达到第二时长 时,结束展示辅助画面。
其中,该第二时长可以小于第一时长,即通过上述方式结束的辅助画面相对于步骤509所示方案,可以提前结束展示辅助画面。
比如,图10是本申请实施例涉及的一种目标镜头方向与车尾参考线之间的第一钝角大于第一角度时辅助画面的示意图。如图10所示,该第一虚拟车辆后方的目标虚拟车辆处于超车状态,并且该目标虚拟车辆对应的第一钝角大于第一角度,所以在展示的辅助画面1001中不存在目标虚拟车辆,仅包括赛道边缘的画面。利用辅助画面1001展示赛道边缘画面对用户的操作没有实际的增益,所以若依旧在展示时长达到第一时长时结束该辅助画面1001,会出现终端资源浪费的情况。
应用本申请实施例,通过实时检测第一虚拟车辆与第二虚拟车辆之间的相对距离,确定其中的目标虚拟车辆,并且通过虚拟摄像机以目标虚拟车辆为焦点对虚拟场景进行拍摄,展示拍摄得到的辅助画面。由于第二虚拟车辆与第一虚拟车辆之间的相对距离可能会出现经常变化的情况,通过上述方案可以灵活的确定各个时刻对应的目标虚拟车辆,并且展示以目标虚拟车辆为焦点的辅助画面,从而使得辅助画面能够尽可能的显示有效的画面,提高了辅助画面传递有益于用户操作的信息的效率,保证用户在正常观察虚拟车辆前方的画面的同时,也能够观察到车辆后方的有效的画面内容,从而提高用户控制虚拟车辆时的交互效率以及人机交互效率。
以虚拟场景为赛车游戏中的虚拟场景为例,请参考图11,其示出了本申请一个示例性的实施例提供的一种虚拟场景中画面展示方法的逻辑流程图,如图11所示,该逻辑流程可以包括以下步骤:
终端检测第一虚拟车辆的触发范围内是否具有其它虚拟车辆,其中,触发范围可以是在第一虚拟车辆的后方,且相对距离小于第一距离的范围,当检测到存在其它虚拟车辆在处罚范围内时,对第一虚拟车辆当前状态进行判断(S1101)。若判断第一虚拟车辆当前处于刚开局冲出起点的状态(S1102),则不触发后视镜功能,其中该后视镜功能为展示辅助画面的功能(S1103)。若判断第一虚拟车辆当前不处于刚开局冲出起点的状态(S1104),则确定触发后视镜功能(S1105)。然后,若通过实时检测确定目标虚拟车辆一直处于第一虚拟车辆的触发范围内(S1106),则通过虚拟摄像头跟踪拍摄该目标虚拟车辆的画面(S1107),若在拍摄中途目标虚拟车辆离开第一虚拟车辆对应的触发范围(S1108),则控制虚拟摄像机脱离跟踪拍摄该目标虚拟车辆(S1109),若该目标虚拟车辆离开触发范围后指定时间内(如3秒)返回第一虚拟车辆的触发范围内(S1110),则继续控制该虚拟摄像机跟踪拍摄目标虚拟车辆的画面(S1111)。若辅助画面的展示时长已经达到指定的最大展示时长,比如可以是3秒,则控制关闭后视镜功能,并且结束展示辅助画面(S1112)。
应用本申请实施例,通过实时检测第一虚拟车辆与第二虚拟车辆之间的相对距离,确定其中的目标虚拟车辆,并且通过虚拟摄像机以目标虚拟车辆为焦点对虚拟场景进行拍摄,展示拍摄得到的辅助画面。由于第二虚拟车辆与第一虚拟车辆之间的相对距离可能会出现经常变化的情况,通过上述方案可以灵活的确定各个时刻对应的目标虚拟车辆,并且展示以目标虚拟车辆为焦点的辅助画面,从而使得辅助画面能够尽可能的显示有效的画面,提高了辅助画面传递有益于用户操作的信息的效率,保证用户在正常观察虚拟车辆前方的画面的同时,也能够观察到车辆后方的有效的画面内容,从而提高用户控制虚拟车辆时的交互效率以及人机交互效率。
图12是本申请实施例示出的一种虚拟场景中画面展示装置的结构方框图。该虚拟场景中画面展示装置可以设置于计算机设备中,以执行图3或图5对应实施例所示 的方法中的全部或者部分步骤。该虚拟场景中画面展示装置可以包括:
主画面展示模块1210,配置为展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
目标确定模块1220,配置为基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述第二虚拟车辆中确定目标虚拟车辆;所述第二虚拟车辆是位于所述第一虚拟车辆后方的虚拟车辆;
辅画面展示模块1230,配置为在所述虚拟场景画面中展示辅助画面;所述辅助画面是以所述目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面。
在一些实施例中,所述目标确定模块1220,包括:
待选获取子模块,配置为获取待选虚拟车辆;所述待选虚拟车辆是与所述第一虚拟车辆之间的所述相对距离小于等于第一距离的所述第二虚拟车辆;
第一目标确定子模块,配置为从所述待选虚拟车辆中确定所述目标虚拟车辆。
在一些实施例中,所述目标确定子模块,包括:
目标确定单元,配置为将与所述第一虚拟车辆之间的所述相对距离最小的所述待选虚拟车辆,确定为所述目标虚拟车辆。
在一些实施例中,所述目标确定模块1220,包括:
第一获取子模块,配置为获取与所述第一虚拟车辆之间的所述相对距离最小的所述第二虚拟车辆;
第二目标确定子模块,配置为响应于所述第二虚拟车辆与所述第一虚拟车辆之间的所述相对距离小于等于第一距离,将所述第二虚拟车辆确定为所述目标虚拟车辆。
在一些实施例中,所述装置还包括:
距离获取模块,配置为基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述第二虚拟车辆中确定目标虚拟车辆之前,获取所述第一虚拟车辆与所述第二虚拟车辆之间的所述相对距离。
在一些实施例中,所述距离获取模块,包括:
距离获取子模块,配置为将所述第一虚拟车辆的车尾与所述第二虚拟车辆的中心点之间的连线长度,确定为所述相对距离。
在一些实施例中,所述装置还包括:
计时模块,配置为响应于所述辅助画面展示在所述虚拟场景画面中,启动画面展示计时器;所述画面展示计时器用于记录所述辅助画面在所述虚拟场景画面中连续展示的时长;
第一画面结束模块,配置为响应于所述画面展示计时器对应的时长达到第一时长,结束展示所述辅助画面。
在一些实施例中,所述装置还包括:
计时重置模块,配置为响应于所述画面展示计时对应的时长达到第一时长,结束展示所述辅助画面之前,响应于在所述辅助画面展示过程中,所述目标虚拟车辆从第一目标虚拟车辆切换为第二目标虚拟车辆,将所述画面展示计时器进行重置;所述第一目标虚拟车辆和所述第二目标虚拟车辆是至少一个所述第二虚拟车辆中的任意两个。
在一些实施例中,所述虚拟摄像机位于所述第一虚拟车辆的斜上方,且所述虚拟摄像机随着所述第一虚拟车辆移动;
所述装置还包括:
钝角获取模块,配置为在所述虚拟场景画面中展示辅助画面之前,获取目标镜头 方向与车尾参考线之间的第一钝角;所述目标镜头方向是从所述虚拟摄像机指向所述目标虚拟车辆的中心点的方向;所述车尾参考线是所述第一虚拟车辆的车尾所在直线,所述车尾参考线与水平面平行,且与所述第一虚拟车辆的车头和车尾的连线垂直;
第一方向确定模块,配置为响应于所述第一钝角小于等于第一角度,基于当前时刻所述目标虚拟车辆的位置,确定所述虚拟摄像机的第一镜头方向;所述第一镜头方向是所述目标镜头方向。
在一些实施例中,所述装置还包括:
第二方向确定模块,配置为响应于所述第一钝角大于所述第一角度,确定所述虚拟摄像机的第二镜头方向;所述第二镜头方向指向所述目标镜头方向与所述车尾指向方向之间,且第二镜头方向与所述车尾参考线之间的第二钝角的角度为所述第一角度;所述车尾指向方向是从所述第一虚拟车辆的车头指向车尾的方向。
在一些实施例中,所述装置还包括:
第二画面结束模块,配置为响应于所述目标镜头方向与车尾参考线之间的第一钝角大于所述第一角度的持续时间达到第二时长,结束展示所述辅助画面。
在一些实施例中,所述辅画面展示模块1230,包括:
方向确定子模块,配置为响应于所述目标虚拟车辆与所述第一虚拟车辆之间的所述相对距离小于等于第二距离,保持所述虚拟摄像机的镜头方向;
画面拍摄子模块,配置为在所述虚拟场景画面上展示通过所述虚拟摄像机拍摄的所述镜头方向上的所述辅助画面。
在一些实施例中,所述辅画面展示模块1230,包括:
辅画面展示子模块,配置为响应于所述虚拟场景画面的展示时间大于第三时长,在所述虚拟场景画面中展示所述辅助画面。
应用本申请实施例,通过实时检测第一虚拟车辆与第二虚拟车辆之间的相对距离,确定其中的目标虚拟车辆,并且通过虚拟摄像机以目标虚拟车辆为焦点对虚拟场景进行拍摄,展示拍摄得到的辅助画面。由于第二虚拟车辆与第一虚拟车辆之间的相对距离可能会出现经常变化的情况,通过上述方案可以灵活的确定各个时刻对应的目标虚拟车辆,并且展示以目标虚拟车辆为焦点的辅助画面,从而使得辅助画面能够尽可能的显示有效的画面,提高了辅助画面传递有益于用户操作的信息的效率,保证用户在正常观察虚拟车辆前方的画面的同时,也能够观察到车辆后方的有效的画面内容,从而提高用户控制虚拟车辆时的交互效率及人机交互效率。
图13是本申请实施例示出的一种虚拟场景中画面展示装置的结构方框图。该虚拟场景中画面展示装置可以用于终端中,以执行图4或图5对应实施例所示的方法中,由终端执行的全部或者部分步骤。该虚拟场景中画面展示装置可以包括:
主画面展示模块1310,配置为展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
第一辅画面展示模块1320,配置为在所述虚拟场景画面中展示第一辅助画面;所述第一辅助画面是以第一目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面;所述第一目标虚拟车辆是与所述第一虚拟车辆之间的相对距离最小,且所述相对距离小于等于第一距离的虚拟车辆;
第二辅画面展示模块1330,配置为响应于与所述第一虚拟车辆之间的所述相对距离最小,且所述相对距离小于等于所述第一距离的所述虚拟车辆变换为第二目标虚拟车辆,在所述虚拟场景画面中展示第二辅助画面,所述第二辅助画面是以所述第二目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的所述虚拟摄像头进行拍摄的画面。
应用本申请实施例,通过实时检测第一虚拟车辆与第二虚拟车辆之间的相对距离,确定其中的目标虚拟车辆,并且通过虚拟摄像机以目标虚拟车辆为焦点对虚拟场景进行拍摄,展示拍摄得到的辅助画面。由于第二虚拟车辆与第一虚拟车辆之间的相对距离可能会出现经常变化的情况,通过上述方案可以灵活的确定各个时刻对应的目标虚拟车辆,并且展示以目标虚拟车辆为焦点的辅助画面,从而使得辅助画面能够尽可能的显示有效的画面,提高了辅助画面传递有益于用户操作的信息的效率,保证用户在正常观察虚拟车辆前方的画面的同时,也能够观察到车辆后方的有效的画面内容,从而提高用户控制虚拟车辆时的交互效率以及人机交互效率。
图14是本申请实施例示出的计算机设备1400的结构框图。该计算机设备1400可以是用户终端,比如智能手机、平板电脑、动态影像专家压缩标准音频层面3(Moving Picture Experts Group Audio Layer III,MP3)、动态影像专家压缩标准音频层面4(Moving Picture Experts Group Audio Layer IV,MP4)播放器、笔记本电脑或台式电脑。计算机设备1400还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,计算机设备1400包括有:处理器1401和存储器1402。
处理器1401可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1401可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器1401也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称中央处理器(Central Processing Unit,CPU);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1401可以在集成有图像处理器(Graphics Processing Unit,GPU),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1401还可以包括人工智能(Artificial Intelligence,AI)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1402可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1402还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1402中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1401所执行以实现本申请实施例提供的方法中的全部或者部分步骤。
在一些实施例中,计算机设备1400还可选包括有:外围设备接口1403和至少一个外围设备。处理器1401、存储器1402和外围设备接口1403之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1403相连。具体地,外围设备包括:射频电路1404、显示屏1405、摄像头组件1406、音频电路1407、定位组件1408和电源1409中的至少一种。
本领域技术人员可以理解,图14中示出的结构并不构成对计算机设备1400的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在一示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括至少一条指令、至少一段程序、代码集或指令集的存储器,上述至少一条指令、至少一段程序、代码集或指令集可由处理器执行以完成上述图3或图4或图5对应实施例所示的方法的全部或者部分步骤。例如,所述非临时性计算机可读存储介质可以是只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、磁带、 软盘和光数据存储设备等。
本申请实施例,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。终端的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该终端执行上述方面的各种可选实现方式中提供的虚拟场景中画面展示方法。
本领域技术人员在考虑说明书及实践这里公开的申请后,将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本申请的真正范围和精神由下面的权利要求指出。
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。

Claims (18)

  1. 一种虚拟场景中画面展示方法,所述方法由计算机设备执行,所述方法包括:
    展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
    基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述至少一个第二虚拟车辆中确定目标虚拟车辆;
    其中,所述第二虚拟车辆是所述虚拟场景画面中位于所述第一虚拟车辆后方的虚拟车辆;
    在所述虚拟场景画面中展示辅助画面;所述辅助画面是以所述目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面。
  2. 根据权利要求1所述的方法,其中,所述基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述至少一个第二虚拟车辆中确定目标虚拟车辆,包括:
    获取待选虚拟车辆;所述待选虚拟车辆是与所述第一虚拟车辆之间的所述相对距离小于等于第一距离的所述第二虚拟车辆;
    从所述待选虚拟车辆中确定所述目标虚拟车辆。
  3. 根据权利要求2所述的方法,其中,所述从所述待选虚拟车辆中确定所述目标虚拟车辆,包括:
    将与所述第一虚拟车辆之间的所述相对距离最小的所述待选虚拟车辆,确定为所述目标虚拟车辆。
  4. 根据权利要求1所述的方法,其中,所述基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述至少一个第二虚拟车辆中确定目标虚拟车辆,包括:
    获取与所述第一虚拟车辆之间的所述相对距离最小的所述第二虚拟车辆;
    响应于所述第二虚拟车辆与所述第一虚拟车辆之间的所述相对距离小于等于第一距离,将所述第二虚拟车辆确定为所述目标虚拟车辆。
  5. 根据权利要求1所述的方法,其中,所述基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述至少一个第二虚拟车辆中确定目标虚拟车辆之前,还包括:
    获取所述第一虚拟车辆与所述第二虚拟车辆之间的所述相对距离。
  6. 根据权利要求5所述的方法,其中,所述获取所述第一虚拟车辆与所述第二虚拟车辆之间的所述相对距离,包括:
    将所述第一虚拟车辆的车尾与所述第二虚拟车辆的中心点之间的连线长度,确定为所述相对距离。
  7. 根据权利要求1所述的方法,其中,所述方法还包括:
    响应于所述辅助画面展示在所述虚拟场景画面中,启动画面展示计时器;所述画面展示计时器用于记录所述辅助画面在所述虚拟场景画面中连续展示的时长;
    响应于所述画面展示计时器对应的时长达到第一时长,结束展示所述辅助画面。
  8. 根据权利要求7所述的方法,其中,所述响应于所述画面展示计时对应的时长达到第一时长,结束展示所述辅助画面之前,还包括:
    响应于在所述辅助画面展示过程中,所述目标虚拟车辆从第一目标虚拟车辆切换为第二目标虚拟车辆,将所述画面展示计时器进行重置;所述第一目标虚拟车辆和所述第二目标虚拟车辆是至少一个所述第二虚拟车辆中的任意两个。
  9. 根据权利要求1所述的方法,其中,所述虚拟摄像机位于所述第一虚拟车辆的斜上方,且所述虚拟摄像机随着所述第一虚拟车辆移动;
    所述在所述虚拟场景画面中展示辅助画面之前,还包括:
    获取目标镜头方向与车尾参考线之间的第一钝角;所述目标镜头方向是从所述虚拟摄像机指向所述目标虚拟车辆的中心点的方向;所述车尾参考线是所述第一虚拟车辆的车尾所在直线,所述车尾参考线与水平面平行,且与所述第一虚拟车辆的车头和车尾的连线垂直;
    响应于所述第一钝角小于等于第一角度,基于当前时刻所述目标虚拟车辆的位置,确定所述虚拟摄像机的第一镜头方向;所述第一镜头方向是所述目标镜头方向。
  10. 根据权利要求9所述的方法,其中,所述方法还包括:
    响应于所述第一钝角大于所述第一角度,确定所述虚拟摄像机的第二镜头方向;所述第二镜头方向指向所述目标镜头方向与所述车尾指向方向之间,且第二镜头方向与所述车尾参考线之间的第二钝角的角度为所述第一角度;所述车尾指向方向是从所述第一虚拟车辆的车头指向车尾的方向。
  11. 根据权利要求10所述的方法,其中,所述方法还包括:
    响应于所述第一钝角大于所述第一角度的持续时间达到第二时长,结束展示所述辅助画面。
  12. 根据权利要求1所述的方法,其中,所述在所述虚拟场景画面中展示辅助画面,包括:
    响应于所述目标虚拟车辆与所述第一虚拟车辆之间的所述相对距离小于等于第二距离,保持所述虚拟摄像机的镜头方向;
    在所述虚拟场景画面上展示通过所述虚拟摄像机拍摄的所述镜头方向上的所述辅助画面。
  13. 根据权利要求1至10任一项所述的方法,其中,所述在所述虚拟场景画面中展示辅助画面,包括:
    响应于所述虚拟场景画面的展示时间大于第三时长,在所述虚拟场景画面中展示所述辅助画面。
  14. 一种虚拟场景中画面展示方法,所述方法由计算机设备执行,所述方法包括:
    展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
    在所述虚拟场景画面中展示第一辅助画面;所述第一辅助画面是以第一目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面;所述第一目标虚拟车辆是与所述第一虚拟车辆之间的相对距离最小,且所述相对距离小于等于第一距离的虚拟车辆;
    响应于与所述第一虚拟车辆之间的所述相对距离最小,且所述相对距离小于等于所述第一距离的所述虚拟车辆变换为第二目标虚拟车辆,在所述虚拟场景画面中展示第二辅助画面,所述第二辅助画面是以所述第二目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的所述虚拟摄像头进行拍摄的画面。
  15. 一种虚拟场景中画面展示装置,所述装置包括:
    主画面展示模块,配置为展示虚拟场景画面,所述虚拟场景画面中包括第一虚拟车辆;
    目标确定模块,配置为基于所述第一虚拟车辆与至少一个第二虚拟车辆之间的相对距离,从所述第二虚拟车辆中确定目标虚拟车辆;所述第二虚拟车辆是位于所述第一虚拟车辆后方的虚拟车辆;
    辅画面展示模块,配置为在所述虚拟场景画面中展示辅助画面;所述辅助画面是以所述目标虚拟车辆为焦点,通过对应所述第一虚拟车辆设置的虚拟摄像头进行拍摄的画面。
  16. 一种计算机设备,所述计算机设备包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至14任一项所述的虚拟场景中画面展示方法。
  17. 一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至14任一项所述的虚拟场景中画面展示方法。
  18. 一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时,实现权利要求1至14任一项所述的虚拟场景中画面展示方法。
PCT/CN2021/141708 2021-01-22 2021-12-27 虚拟场景中画面展示方法、装置、设备、存储介质及程序产品 WO2022156490A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020237016527A KR20230085934A (ko) 2021-01-22 2021-12-27 가상 장면에서의 픽처 디스플레이 방법 및 장치, 디바이스, 저장 매체, 및 프로그램 제품
JP2023527789A JP2023547721A (ja) 2021-01-22 2021-12-27 仮想場面における画面表示方法、装置、機器、及びプログラム
US17/992,599 US20230086441A1 (en) 2021-01-22 2022-11-22 Method and apparatus for displaying picture in virtual scene, device, storage medium, and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110090636.X 2021-01-22
CN202110090636.XA CN112870712B (zh) 2021-01-22 2021-01-22 虚拟场景中画面展示方法、装置、计算机设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/992,599 Continuation US20230086441A1 (en) 2021-01-22 2022-11-22 Method and apparatus for displaying picture in virtual scene, device, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2022156490A1 true WO2022156490A1 (zh) 2022-07-28

Family

ID=76050521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/141708 WO2022156490A1 (zh) 2021-01-22 2021-12-27 虚拟场景中画面展示方法、装置、设备、存储介质及程序产品

Country Status (6)

Country Link
US (1) US20230086441A1 (zh)
JP (1) JP2023547721A (zh)
KR (1) KR20230085934A (zh)
CN (1) CN112870712B (zh)
TW (1) TW202228827A (zh)
WO (1) WO2022156490A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112870712B (zh) * 2021-01-22 2023-03-14 腾讯科技(深圳)有限公司 虚拟场景中画面展示方法、装置、计算机设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108479068A (zh) * 2018-03-29 2018-09-04 网易(杭州)网络有限公司 虚拟对象显示方法和设备
CN109866763A (zh) * 2019-01-10 2019-06-11 苏州工业园区职业技术学院 一种面向智能驾驶辅助系统
CN109887372A (zh) * 2019-04-16 2019-06-14 北京中公高远汽车试验有限公司 驾驶培训模拟方法、电子设备及存储介质
US10453262B1 (en) * 2016-07-21 2019-10-22 Relay Cars LLC Apparatus and method for dynamic reflecting car mirrors in virtual reality applications in head mounted displays
CN112172670A (zh) * 2020-10-19 2021-01-05 广州优创电子有限公司 基于图像识别的后视图像显示方法及装置
CN112870712A (zh) * 2021-01-22 2021-06-01 腾讯科技(深圳)有限公司 虚拟场景中画面展示方法、装置、计算机设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10453262B1 (en) * 2016-07-21 2019-10-22 Relay Cars LLC Apparatus and method for dynamic reflecting car mirrors in virtual reality applications in head mounted displays
CN108479068A (zh) * 2018-03-29 2018-09-04 网易(杭州)网络有限公司 虚拟对象显示方法和设备
CN109866763A (zh) * 2019-01-10 2019-06-11 苏州工业园区职业技术学院 一种面向智能驾驶辅助系统
CN109887372A (zh) * 2019-04-16 2019-06-14 北京中公高远汽车试验有限公司 驾驶培训模拟方法、电子设备及存储介质
CN112172670A (zh) * 2020-10-19 2021-01-05 广州优创电子有限公司 基于图像识别的后视图像显示方法及装置
CN112870712A (zh) * 2021-01-22 2021-06-01 腾讯科技(深圳)有限公司 虚拟场景中画面展示方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
JP2023547721A (ja) 2023-11-13
TW202228827A (zh) 2022-08-01
CN112870712B (zh) 2023-03-14
US20230086441A1 (en) 2023-03-23
KR20230085934A (ko) 2023-06-14
CN112870712A (zh) 2021-06-01

Similar Documents

Publication Publication Date Title
CN112245921B (zh) 虚拟对象控制方法、装置、设备及存储介质
US20230050933A1 (en) Two-dimensional figure display method and apparatus for virtual object, device, and storage medium
TWI831074B (zh) 虛擬場景中的信息處理方法、裝置、設備、媒體及程式產品
JP7394872B2 (ja) 仮想シーンにおける仮想オブジェクト制御方法、装置、機器及びコンピュータプログラム
JP2022540277A (ja) 仮想オブジェクト制御方法、装置、端末及びコンピュータプログラム
WO2022134808A1 (zh) 虚拟场景中的数据处理方法、设备、存储介质及程序产品
WO2021218460A1 (zh) 虚拟对象的控制方法、装置、终端及存储介质
TW202220731A (zh) 虛擬場景中狀態切換方法、裝置、設備、媒體及程式產品
JP2023036743A (ja) 位置に基づくゲームプレイコンパニオンアプリケーションへユーザの注目を向ける方法及びシステム
CN114159787A (zh) 虚拟对象的控制方法、装置、电子设备及可读介质
CN112891943A (zh) 一种镜头处理方法、设备以及可读存储介质
JP7317857B2 (ja) 仮想カメラ配置システム
KR20230166957A (ko) 3차원 가상 환경에서 내비게이션 보조를 제공하기 위한 방법 및 시스템
WO2022156490A1 (zh) 虚拟场景中画面展示方法、装置、设备、存储介质及程序产品
CN111589114B (zh) 虚拟对象的选择方法、装置、终端及存储介质
US20230271087A1 (en) Method and apparatus for controlling virtual character, device, and storage medium
US20230048826A1 (en) Virtual scene display method and apparatus, device, storage medium, and program product
WO2022227934A1 (zh) 虚拟载具的控制方法、装置、设备、介质及程序产品
KR102625326B1 (ko) 게임 제어 방법, 프로그램을 기록한 기록 매체, 서버 및 통신 장치
CN114307145A (zh) 画面显示方法、装置、终端、存储介质及程序产品
CN114210051A (zh) 虚拟场景中的载具控制方法、装置、设备及存储介质
CN113018862A (zh) 虚拟对象的控制方法、装置、电子设备及存储介质
WO2024067168A1 (zh) 基于社交场景的消息显示方法、装置、设备、介质及产品
WO2024078225A1 (zh) 虚拟对象的显示方法、装置、设备及存储介质
WO2024012016A1 (zh) 虚拟场景的信息显示方法、装置、电子设备、存储介质及计算机程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21920862

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023527789

Country of ref document: JP

ENP Entry into the national phase

Ref document number: 20237016527

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24-11-2023)