CN110559662B - Visual angle switching method, device, terminal and medium in virtual environment - Google Patents


Info

Publication number
CN110559662B
CN110559662B (application CN201910866721.3A)
Authority
CN
China
Prior art keywords
virtual
camera
virtual environment
switching
virtual object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910866721.3A
Other languages
Chinese (zh)
Other versions
CN110559662A (en)
Inventor
沈晓斌 (Shen Xiaobin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910866721.3A
Publication of CN110559662A
Application granted
Publication of CN110559662B


Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/837: Shooting of targets

Abstract

The application discloses a method, an apparatus, a terminal, and a medium for switching viewing angles in a virtual environment, applied in the field of virtual environment modeling. The method comprises the following steps: displaying a first virtual environment picture of a master virtual object riding on a virtual vehicle; receiving a trigger operation acting on a trigger control; and switching the display from the first virtual environment picture to a second virtual environment picture according to the trigger operation. The first virtual environment picture is acquired from a first viewing angle determined based on the virtual vehicle, while the second virtual environment picture is acquired from a second viewing angle determined based on the master virtual object. Because the virtual vehicle occludes little or none of the second virtual environment picture, a picture with a better field of view is obtained.

Description

Visual angle switching method, device, terminal and medium in virtual environment
Technical Field
The present disclosure relates to the field of virtual environment modeling, and in particular, to a method, an apparatus, a terminal, and a medium for switching a viewing angle in a virtual environment.
Background
In three-dimensional virtual programs such as shooting games and military simulation programs, virtual characters can move through the virtual environment by riding virtual vehicles. Typical virtual vehicles include automobiles, boats, and helicopters.
Taking a virtual automobile as an example, the terminal displays a virtual environment picture and a probe control, and the virtual character in the picture rides inside the virtual automobile. When the user taps the probe control, the virtual character extends the upper half of its body out of the window of the virtual automobile and aims toward the front of the vehicle.
The aiming range achievable by this scheme is limited, because the virtual automobile blocks a large part of the aiming field of view. On a small-screen terminal such as a smartphone or tablet computer, the usable aiming field of view is even smaller.
Disclosure of Invention
The embodiments of this application provide a method, an apparatus, a terminal, and a medium for switching viewing angles in a virtual environment, which can solve the problem in the related art that a virtual automobile blocks a large part of the aiming field of view, making the usable aiming field of view small. The technical solution is as follows:
according to an aspect of the present application, there is provided a method for switching a view angle in a virtual environment, which is applied to a terminal in which an application program supporting the virtual environment runs, the method including:
displaying a first virtual environment picture of a master virtual object riding on a virtual vehicle, wherein the first virtual environment picture is acquired from a first viewing angle determined based on the virtual vehicle, and a trigger control is superimposed on the first virtual environment picture;
receiving a trigger operation acting on the trigger control;
and switching the display from the first virtual environment picture to a second virtual environment picture according to the trigger operation, wherein the second virtual environment picture is acquired from a second viewing angle determined based on the master virtual object.
According to another aspect of the present application, there is provided a viewing angle switching apparatus in a virtual environment, which is applied in a terminal running an application program supporting the virtual environment, the apparatus including:
a display module, configured to display a first virtual environment picture of a master virtual object riding on a virtual vehicle, wherein the first virtual environment picture is acquired from a first viewing angle determined based on the virtual vehicle, and a trigger control is superimposed on the first virtual environment picture;
a receiving module, configured to receive a trigger operation acting on the trigger control;
and a switching module, configured to switch the display from the first virtual environment picture to a second virtual environment picture according to the trigger operation, wherein the second virtual environment picture is acquired from a second viewing angle determined based on the master virtual object.
According to another aspect of the present application, there is provided a terminal, including a processor and a memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for switching a viewing angle in a virtual environment as described above.
According to another aspect of the present application, there is provided a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for switching a viewing angle in a virtual environment as described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
according to the trigger operation, the display is switched from the first virtual environment picture to a second virtual environment picture. Because the first picture is acquired from a first viewing angle determined based on the virtual vehicle, while the second picture is acquired from a second viewing angle determined based on the master virtual object, the second picture is not, or is almost not, occluded by the virtual vehicle, yielding a virtual environment picture with a better field of view.
Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application, and those skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic interface diagram of a virtual object entering a fighting state while riding a virtual vehicle provided in the related art;
fig. 2 is a schematic interface diagram of a virtual object entering a combat state while riding a virtual vehicle, provided by an exemplary embodiment of the present application;
FIG. 3 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 4 is a diagram illustrating a method for switching a view angle in a virtual environment according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of the operation of a camera model provided by an exemplary embodiment of the present application;
FIG. 6 is a diagram illustrating a method for switching a view angle in a virtual environment according to an exemplary embodiment of the present application;
FIG. 7 is a schematic interface diagram of the perspective switching method in the virtual environment shown in FIG. 6;
FIG. 8 is a schematic diagram of a method for switching a view angle in a virtual environment according to another exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of a method for switching a view angle in a virtual environment according to another exemplary embodiment of the present application;
FIG. 10 is a schematic interface diagram of a perspective switching method in the virtual environment shown in FIG. 9 when implemented;
FIG. 11 illustrates the effective rotation range of a second viewing angle on a virtual helicopter provided by an exemplary embodiment of the present application;
FIG. 12 is a distribution diagram of the seating positions of a virtual helicopter provided in an exemplary embodiment of the present application;
fig. 13 is a flowchart of a virtual object entering a combat state while riding a virtual vehicle according to an exemplary embodiment of the present application;
FIG. 14 is a block diagram of a perspective switching apparatus in a virtual environment provided by an exemplary embodiment of the present application;
fig. 15 is a block diagram of a terminal provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
virtual environment: the environment displayed (or provided) when an application runs on the terminal, implemented by a computer program. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. It may be any of a two-dimensional, 2.5-dimensional, or three-dimensional virtual environment; the following embodiments describe a three-dimensional virtual environment, but this is not a limitation. Optionally, the virtual environment is also used for battles between at least two virtual characters, for virtual-firearm battles between at least two virtual characters, or for virtual-firearm battles within a target area that shrinks over time.
Virtual object: a movable object in a virtual environment. The movable object may be a virtual character, a virtual animal, an animated character, and so on, such as the characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created with skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a portion of its space.
Tactical competitive game: a type of game in which the player wins by controlling a virtual character in a virtual environment using tactics and attacks (for example, attacking with a virtual firearm).
Virtual carrier: refers to a vehicle in a virtual environment for a movable object to ride. Virtual vehicles include, but are not limited to: at least one of a virtual car, a virtual boat, a virtual helicopter, a virtual airplane.
Virtual helicopter: a helicopter is an aircraft that flies by means of its rotor and tail rotor. Its outstanding characteristics are the ability to maneuver at low altitude (a few meters above the ground) and at low speed (down to hovering) without changing the nose direction, and in particular the ability to take off and land vertically in a small area. In this application, "helicopter" refers specifically to a virtual helicopter provided in the virtual environment of a tactical competitive game.
First-person shooter (FPS) game: a shooting game that the user plays from a first-person perspective; it is one type of tactical competitive game. The virtual environment picture in the game is a picture of the virtual environment observed from the perspective of a first virtual object. In the game, at least two virtual objects fight a single round in the virtual environment. A virtual object survives by avoiding attacks launched by other virtual objects and dangers present in the virtual environment (such as the poison circle or marshland); when its health value drops to zero, its life in the virtual environment ends, and the virtual objects that survive to the end are the winners. Optionally, a battle starts when the first client joins it and ends when the last client exits it, and each client may control one or more virtual objects in the virtual environment. Optionally, the competitive mode of the battle may be a solo mode, a duo mode, or a squad mode, which is not limited in the embodiments of this application.
Camera assembly (camera, camera lens, or lens for short): in an application supporting a virtual environment, the virtual environment picture seen by the user is captured by a camera in the three-dimensional virtual engine.
Spring arm: when a camera assembly is attached to a virtual object or virtual vehicle to create a third-person perspective, a spring arm is also included to automatically control how the camera responds when it is obstructed by level geometry or other objects.
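The spring-arm behaviour can be sketched as follows. This is a simplified, engine-agnostic illustration: the function name and the idea of passing in a precomputed `first_hit` distance are assumptions, not the patent's implementation; a real engine would obtain `first_hit` from a collision ray cast from the pivot toward the desired camera position.

```python
def spring_arm_camera(pivot, direction, arm_length, first_hit=None, probe_radius=0.1):
    """Place the camera at the end of a boom of length arm_length.

    direction: unit vector from the pivot toward the desired camera position.
    first_hit: distance to the first obstruction along that ray, if any
               (would come from the engine's collision query).
    """
    length = arm_length
    if first_hit is not None and first_hit < arm_length:
        # geometry blocks the boom: pull the camera in front of the
        # obstruction, padded by the probe radius, so it never clips through
        length = max(first_hit - probe_radius, 0.0)
    return tuple(p + d * length for p, d in zip(pivot, direction))
```

For example, with a 3-unit boom pointing along +z and a wall 2 units away, the camera is pulled in to 1.9 units from the pivot.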
Probe: when a virtual object is riding a vehicle and the user wants to shoot, the process of switching from the riding state to a ready-to-shoot state. Tactical competitive games often provide a probe button to support the player's probe action.
Shoulder shooting: a standard shooting stance in which the stock of the gun is braced against the shoulder to control recoil.
In the related art shown in fig. 1, when a virtual object is seated in a virtual vehicle 11, the terminal displays a user interface 10 including a virtual environment picture, a probe button 12, and a shoot button 13. At this point the virtual environment picture is captured by a first camera behind the virtual vehicle 11. When the user taps the probe button 12 or the shoot button 13, the terminal controls the upper body of the virtual object 14 to extend out of the cabin of the virtual vehicle 11 and to aim or shoot toward the front of the virtual vehicle 11 with the firearm held by the upper body outside the cabin. Throughout this process, the position of the first camera remains unchanged. As can be seen from fig. 1, the central portion of the virtual environment picture is occupied by the three-dimensional model of the virtual vehicle and the upper half of the virtual object 14, so the aiming field of view 15 available to the user is very small. On a small-screen terminal such as a smartphone or tablet computer, the available aiming field of view 15 is even smaller.
The embodiments of this application provide a viewing angle switching scheme in a virtual environment: when the virtual object aims or shoots, a first camera determined based on the virtual vehicle is switched to a second camera determined based on the virtual object, so that the virtual vehicle occludes little or none of the virtual environment picture and a picture with a better field of view is obtained. The second camera may be a shoulder-view camera of the virtual object. Taking a tactical competitive game as an example, as shown in fig. 2, the terminal first displays a first virtual environment picture 10 of a game character riding on a virtual vehicle 11, where the first virtual environment picture is captured by a first camera bound behind the virtual vehicle 11. When the user taps the probe button 12 (or the shooting button 13 or the scope button), the terminal switches the currently used first camera to a second camera bound to the shoulder of the game character 21, and displays a second virtual environment picture 20 captured from the shoulder-shooting angle of the game character 21.
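The hand-off between the two cameras described above can be sketched as a small state switch. The class, attribute, and control names here are illustrative, not the application's actual API:

```python
class CameraRig:
    """Holds the vehicle-bound first camera and the shoulder-bound second camera."""

    def __init__(self, vehicle_camera, shoulder_camera):
        self.vehicle_camera = vehicle_camera    # bound behind the virtual vehicle
        self.shoulder_camera = shoulder_camera  # bound to the character's shoulder
        self.active = vehicle_camera            # riding state: first viewing angle

    def on_trigger(self, control):
        # the probe, shoot, aim, and scope controls all enter the combat
        # state, which switches rendering to the shoulder camera
        if control in ("probe", "shoot", "aim", "scope"):
            self.active = self.shoulder_camera
        return self.active
```

A frame is then rendered from whichever camera `rig.active` currently points at, so the switch takes effect on the next rendered picture.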
Fig. 3 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 runs an installed application that supports a virtual environment. The application may be any of a virtual reality application, a three-dimensional map program, a military simulation program, an FPS game, a MOBA game, or a multiplayer gunfight survival game. The first terminal 120 is used by a first user, who uses it to control a first virtual object in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting with ranged props, attacking with melee props, and throwing virtual explosives. Illustratively, the first virtual object is a first virtual character, such as a simulated character object or an animated character object.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, the memory 142 in turn including a display module 1421, a control module 1422, and a receiving module 1423. The server 140 is used to provide background services for applications that support a three-dimensional virtual environment. Alternatively, the server 140 undertakes primary computational work and the first and second terminals 120, 160 undertake secondary computational work; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
The second terminal 160 runs an installed application that supports a virtual environment. The application may be any of a virtual reality application, a three-dimensional map program, a military simulation program, an FPS game, a MOBA game, or a multiplayer gunfight survival game. The second terminal 160 is used by a second user, who uses it to control a second virtual object in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting with ranged props, attacking with melee props, and throwing virtual explosives. Illustratively, the second virtual object is a second virtual character, such as a simulated character object or an animated character object.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Alternatively, the first avatar and the second avatar may belong to the same team, the same organization, have a friend relationship, or have temporary communication rights.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different operating system platforms. The first terminal 120 and the second terminal 160 may each generally refer to one of a plurality of terminals; this embodiment is illustrated with only these two. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Fig. 4 is a flowchart illustrating a method for switching a view angle in a virtual environment according to an exemplary embodiment of the present application. The method may be used in the terminal shown in fig. 3. The method comprises the following steps:
step 401, displaying a first virtual environment picture of a master control virtual object sitting on a virtual vehicle, wherein the first virtual environment picture is a picture acquired by adopting a first visual angle determined based on the virtual vehicle, and a trigger control is superposed on the first virtual environment picture;
the virtual environment is a two-dimensional or three-dimensional virtual environment simulated using computer technology. The master virtual object is a movable object, such as a character or cartoon object, controlled by the terminal in the virtual environment.
The virtual vehicle includes: at least one of a virtual vehicle, a virtual ship, a virtual helicopter, and a virtual airplane. In one example, the virtual vehicle is a vehicle having a cockpit. The cabin is internally provided with a riding position for riding the virtual object.
The user controls the master virtual object to ride in a seating position of the virtual vehicle. Illustratively, the seating positions include at least one of an inner seating position located inside the cabin and an outer seating position located outside the cabin; the cabin is also provided with a window or door through which part of the virtual object's body model can extend.
The terminal displays a first virtual environment picture of the master virtual object on the virtual vehicle, the first virtual environment picture being acquired from a first viewing angle determined based on the virtual vehicle. Illustratively, the first virtual environment picture is captured by a camera bound behind (or beside, or in front of) the virtual vehicle.
A head-up display (HUD) panel area is superimposed on the first virtual environment picture, and several UI controls, including the trigger control, are displayed in this panel area. The trigger control is used to trigger the master virtual object to enter a combat state (or combat-readiness state) while riding the virtual vehicle. Trigger controls include, but are not limited to, at least one of an aiming control, a shooting control, and a probe control.
The aiming control is used for controlling the main control virtual object to aim by using the virtual weapon;
the shooting control is used for controlling the main control virtual object to shoot by using the virtual weapon;
the probe control is a control for controlling all or part of the body model of the master virtual object to probe out of the cockpit of the virtual vehicle.
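The three trigger controls above can be sketched as a minimal dispatch table; the control keys and action names are hypothetical, chosen only to mirror the descriptions in this section:

```python
# maps each HUD trigger control to the combat action it starts
TRIGGER_ACTIONS = {
    "aim":   "aim_with_virtual_weapon",     # aiming control
    "shoot": "shoot_with_virtual_weapon",   # shooting control
    "probe": "extend_body_out_of_cockpit",  # probe control
}

def handle_trigger(control):
    """Look up the combat action for a trigger control on the HUD."""
    action = TRIGGER_ACTIONS.get(control)
    if action is None:
        raise ValueError(f"unknown trigger control: {control}")
    return action
```

Any of the three controls can thereby start the switch to the combat state, which is why the trigger operation in step 402 is defined over all of them.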
Step 402, receiving a trigger operation acting on a trigger control;
The trigger operation is at least one of a single-tap, double-tap, long-press, or slide operation acting on the aiming control, shooting control, or probe control. For example, the user taps the aiming control.
Step 403, switching the display from the first virtual environment picture to a second virtual environment picture according to the trigger operation, the second virtual environment picture being acquired from a second viewing angle determined based on the master virtual object.
Illustratively, the second perspective is a first-person perspective of the master virtual object, or the second perspective is a shoulder-fire perspective of the master virtual object. The shoulder-fire perspective is a perspective that is aimed or shot using the perspective direction of the shoulder position of the master virtual object.
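Placing the second camera at the character's shoulder can be sketched with a yaw-rotated local offset. The offset values and the axis convention (y up, forward = (sin yaw, 0, cos yaw)) are assumptions for illustration, not values from the application:

```python
import math

def shoulder_camera_position(char_pos, yaw_deg, offset=(0.4, 1.5, 0.3)):
    """Return the world position of a shoulder camera.

    offset is (right, up, back) in the character's local frame:
    slightly to the right of, above, and behind the shoulder.
    """
    yaw = math.radians(yaw_deg)
    right, up, back = offset
    fx, fz = math.sin(yaw), math.cos(yaw)  # character's forward direction
    rx, rz = fz, -fx                       # character's right direction
    return (char_pos[0] + right * rx - back * fx,
            char_pos[1] + up,
            char_pos[2] + right * rz - back * fz)
```

Because the offset is expressed in the character's local frame, the camera follows the shoulder as the character turns, which is what makes the second viewing angle "determined based on the master virtual object".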
In summary, in the method provided in this embodiment, the display is switched from the first virtual environment picture to a second virtual environment picture according to the trigger operation. Because the first picture is acquired from a first viewing angle determined based on the virtual vehicle, while the second picture is acquired from a second viewing angle determined based on the master virtual object, the second picture has little or no occlusion from the virtual vehicle, yielding a virtual environment picture with a better field of view.
The virtual environment picture displayed on the terminal is realized by a camera arranged in the virtual environment. The camera is a three-dimensional model located around a virtual vehicle or around a virtual object in a virtual environment, and is a model for capturing a screen in the virtual environment.
Take the case where the camera is located around the virtual object as an example. From the first-person perspective, the camera is located near or at the head of the virtual object. From the third-person perspective, the camera may be bound behind the virtual object, or located at any position a preset distance away from it; either way, the virtual object can be observed from different angles through the camera. Optionally, besides the first-person and third-person perspectives, other perspectives are available, such as the top-down perspective and the shoulder perspective. From the top-down perspective, the camera may be located above the head of the virtual object and observe the virtual environment from the air; from the shoulder perspective, the camera may be located at the shoulder of the virtual object. Optionally, the camera is not actually displayed in the virtual environment presented by the user interface.
Consider the case where the camera is located at an arbitrary position a preset distance away from the virtual object. Optionally, one virtual object corresponds to one camera, and the camera may rotate with the virtual object as its rotation center, for example around an arbitrary point of the virtual object. During rotation, the camera both turns and translates, but its distance from the rotation center remains constant; that is, the camera moves on the surface of a sphere whose center is the rotation center. The arbitrary point may be the head or torso of the virtual object, or any point around it, which is not limited in the embodiments of this application. Optionally, when the camera observes the virtual object, the center of its viewing angle points from the camera's position on the sphere toward the sphere's center.
Optionally, the camera may also view the virtual object at a preset angle in different directions of the virtual object.
Referring schematically to fig. 5, a point on the virtual object 11 is chosen as the rotation center 12, and the camera rotates around this rotation center 12. Optionally, the camera has an initial position above and behind the virtual object (such as a position behind the head). The camera may be controlled by the user.
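The rotation described above, with the camera moving on a sphere of constant radius around the rotation center while always facing it, can be sketched as follows (the axis convention and function name are assumptions):

```python
import math

def orbit_camera(center, radius, yaw_deg, pitch_deg):
    """Position a camera on a sphere around `center` and aim it at the center."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    # spherical coordinates: yaw sweeps around the vertical axis,
    # pitch lifts the camera above or below the horizontal plane
    pos = (center[0] + radius * math.cos(pitch) * math.sin(yaw),
           center[1] + radius * math.sin(pitch),
           center[2] - radius * math.cos(pitch) * math.cos(yaw))
    # the view direction always points from the camera to the sphere center
    view = tuple((c - p) / radius for c, p in zip(center, pos))
    return pos, view
```

The distance between the camera and the rotation center stays equal to `radius` for every yaw and pitch, matching the constraint that the camera both rotates and translates while its distance from the rotation center is unchanged.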
The above embodiment is described below in terms of the camera. Step 401 can be implemented as steps 4011 and 4012, and step 403 can be implemented as steps 4031, 4032, and 4033.
Fig. 6 shows a flowchart of a method for switching a view angle in a virtual environment according to an exemplary embodiment of the present application. The method may be used in the terminal shown in fig. 3, executed by an application running in the terminal. The method comprises the following steps:
step 4011, when a master control virtual object rides on a virtual vehicle, acquiring a first virtual environment picture acquired by a first camera for a virtual environment according to a first viewing angle, wherein the first camera is a camera bound behind the virtual vehicle;
the user controls the master virtual object to ride on a virtual vehicle in the virtual environment. The terminal determines the current camera as a first camera, and the first camera is a camera bound behind the virtual vehicle. Illustratively, there is a mounting point on the virtual vehicle, and the first camera is mounted behind the mounting point by a preset distance.
The current camera is the camera used to observe the virtual environment to obtain the virtual environment picture displayed by the application. The first camera is used to observe the virtual environment from behind the virtual vehicle to obtain the first virtual environment picture.
Optionally, the first camera can be controlled by a user, for example, the user rotates the first camera using the rotational angle control, so that the first camera can rotate around the periphery of the virtual vehicle, thereby acquiring the first virtual environment pictures in different angle of view directions with the virtual vehicle as a reference point.
In other embodiments, the first camera may also be an in-cabin camera located within the cockpit of the virtual vehicle. An in-cabin camera is a camera used to simulate the view of a passenger seated in an interior seating position.
Step 4012, displaying a user interface according to the first virtual environment picture acquired by the first camera, wherein the user interface includes the first virtual environment picture and a trigger control superposed on the first virtual environment picture;
in this embodiment, the first virtual environment picture broadly refers to the virtual environment pictures captured by the first camera, including multiple frames, arranged in time series, that are captured while the first camera serves as the current camera (i.e., the master camera).
The terminal displays a user interface of the application program, wherein the user interface comprises a first virtual environment picture and a trigger control superposed on the first virtual environment picture.
Step 402, receiving a trigger operation acting on a trigger control;
a Head Up Display (HUD) panel area is superimposed on the first virtual environment screen. A plurality of UI controls are displayed on the HUD panel area, such as: a movement control, a minimap control, a view-angle rotation control, an entry control for the virtual vehicle (such as getting in a car or boarding an aircraft), an exit control for the virtual vehicle (such as getting out of a car or disembarking from an aircraft), a seat-switch control for the virtual vehicle, and so forth.
The UI control on the HUD panel area comprises a trigger control. The trigger control is used for triggering the main control virtual object to enter a combat state (or a combat readiness state) when the main control virtual object takes the virtual vehicle. The trigger controls include, but are not limited to: at least one of an aiming control, a firing control, and a probe control.
The aiming control (also called a mirror opening control) is used for controlling the main control virtual object to aim by using a virtual weapon;
the shooting control is used for controlling the main control virtual object to shoot by using the virtual weapon;
the probe control is a control for controlling all or part of the body model of the master virtual object to probe out of the cockpit of the virtual vehicle.
The trigger operation is at least one of a single-click operation, a double-click operation, a long-press operation and a sliding operation which act on the aiming control/shooting control/probe control. For example, the user has clicked on the aim control/shoot control/probe control.
Step 4031, switching the first camera to a second camera according to the trigger operation, the first camera being a camera bound to the virtual vehicle, the second camera being a camera bound to the master control virtual object;
optionally, the second camera is a camera bound to a head position of the master virtual object; or the second camera is a camera bound at the neck position of the main control virtual object; or the second camera is a camera bound to the shoulder position of the main control virtual object.
A three-dimensional world coordinate system exists in the virtual environment. The plane formed by the x-axis and the y-axis of the world coordinate system is the horizontal plane of the virtual environment; the positive half of the z-axis is perpendicular to the horizontal plane and points upward, and the negative half of the z-axis is perpendicular to the horizontal plane and points downward. That is, the z-axis corresponds to the height of the virtual environment.
Let the real-time coordinate of the neck of the master virtual object in the world coordinate system be (X1, Y1, Z1). If the offset of the second camera relative to the neck of the master virtual object is (ΔX, ΔY, ΔZ), the coordinate of the second camera in the world coordinate system is (X1 + ΔX, Y1 + ΔY, Z1 + ΔZ). Optionally, a spring arm with a length L is further disposed on the second camera.
Taking as an example a second camera bound to the neck position of the master virtual object: ΔX = 0, ΔY = 0, ΔZ = 25 cm, and L = 30 cm.
Assuming the positive X-axis points straight ahead, for a second camera bound to the left shoulder position of the master virtual object: ΔX = 0, ΔY = -10 cm, ΔZ = 25 cm, and L = 30 cm.
Similarly, for a second camera bound to the right shoulder position of the master virtual object: ΔX = 0, ΔY = +10 cm, ΔZ = 25 cm, and L = 30 cm.
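The offset arithmetic above is a simple per-axis addition. As a minimal sketch in Python (the function name and tuple representation are illustrative; the spring arm length L is engine-side collision logic and is not modeled here):

```python
def second_camera_position(bound_point, offset):
    """World-space position of the second camera: the real-time coordinate
    of the bound body point (e.g. the neck) plus a fixed per-axis offset
    (dX, dY, dZ), per the formula (X1+dX, Y1+dY, Z1+dZ) above."""
    return tuple(p + d for p, d in zip(bound_point, offset))

# Neck-bound example from the text: dX = 0, dY = 0, dZ = 25 (cm).
neck_camera = second_camera_position((100.0, 200.0, 50.0), (0.0, 0.0, 25.0))
```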
Referring to fig. 7 in combination, the terminal switches the current camera from the first camera 1 to the second camera 2 according to a trigger operation, and the second camera is a camera bound to a shoulder position of the master virtual object.
Step 4032, acquiring a second virtual environment picture acquired by the second camera for the virtual environment according to a second view angle;
current cameras are cameras that are used to view a virtual environment to obtain a view of the virtual environment for display by an application. When the second camera is bound at the head position, the second camera is used for observing the virtual environment from the head position of the main control virtual object to obtain a second virtual environment picture; when the second camera is bound at the neck position, the second camera is used for observing the virtual environment from the neck position of the main control virtual object to obtain a second virtual environment picture; and when the second camera is bound at the shoulder position, the second camera is used for observing the virtual environment from the shoulder position of the main control virtual object to obtain a second virtual environment picture.
Optionally, the second camera can be controlled by a user, for example, the user rotates the second camera using the rotational angle control, so that the second camera can rotate around the main control virtual object, thereby acquiring a second virtual environment picture in different angle directions with the main control virtual object as a reference point.
In this embodiment, the second virtual environment picture generally refers to a virtual environment picture acquired by the second camera, and the second virtual environment picture includes a virtual environment picture in which multiple frames acquired when the second camera is used as the current camera (i.e., the main camera) are arranged in time series.
Step 4033, the first virtual environment screen is switched and displayed to a second virtual environment screen.
And the terminal displays a user interface of the application program, wherein the user interface comprises a second virtual environment picture and a trigger control superposed on the second virtual environment picture.
Step 404, receiving again a trigger operation acting on a trigger control, wherein the trigger control comprises an aiming control or a probe control;
the user can click the aiming control or the probe control again, and the application program on the terminal receives the triggering operation acted on the triggering control again.
The triggering control in this step is a control for triggering the main control virtual object to exit the combat state (or combat readiness state) and enter the riding state when riding the virtual vehicle.
And step 405, restoring and displaying the second virtual environment picture as the first virtual environment picture according to the triggering operation.
And the terminal switches the current camera from the second camera to the first camera according to the trigger operation, and the virtual environment picture displayed on the application program is restored from the second virtual environment picture and displayed as the first virtual environment picture.
The above process may be triggered by the user to be executed multiple times in one session.
In summary, in the method provided in this embodiment, the first camera is switched to the second camera according to the trigger operation, the first virtual environment picture acquired by the first camera is a picture acquired by using a first viewing angle determined based on the virtual vehicle, and the second virtual environment picture acquired by the second camera is a picture acquired by using a second viewing angle determined based on the main control virtual object, so that there is no or almost no occlusion of the virtual vehicle in the virtual environment picture, and the virtual environment picture with a better view field is obtained.
In the method provided by this embodiment, the trigger operation acting on the trigger control is received again, and the second virtual environment screen is restored to the first virtual environment screen according to that operation, so that the master virtual object exits the combat state and enters the ordinary riding state while riding the virtual vehicle. In the ordinary riding state, the traveling state of the virtual vehicle can be seen in the first virtual environment picture, and more traveling information related to the virtual vehicle can be obtained.
According to the distribution mode of the seats, the virtual vehicles are divided into two types:
first, only interior seating is included, not exterior seating;
the virtual vehicle is a virtual vehicle whose cabin contains interior seating positions, such as a driving position and non-driving positions. For example, a four-person virtual automobile includes one driving position and three passenger positions. In some embodiments of the present application, the interior seating position refers to a non-driving position. A virtual object located in the driver's seat does not support entering a combat state (or combat readiness state).
Second, both interior and exterior seating positions are included.
When the virtual vehicle is an armored car, truck, helicopter or yacht, it provides not only interior seating positions but also exterior seating positions. An exterior seating position is a seating position located outside the cabin.
In some virtual vehicles, the combat state cannot be entered directly while the master virtual object is inside the cabin. For example, due to simulation requirements, the cockpit glass may block gunfire. It is therefore also necessary to adjust the body position of the master virtual object before switching the first camera to the second camera.
For virtual vehicles that include only interior seats, step 4031 described above may be implemented as step 4031a and step 4031b as shown in fig. 8.
Step 4031a, controlling, according to the trigger operation, the partial body model of the master virtual object located at the upper half of the body to extend out of the virtual vehicle;
when the first camera is used to acquire the virtual environment picture, the master virtual object is seated at an interior seating position of the virtual vehicle, that is, a seating position inside the cabin. When the trigger operation is received, the terminal controls the partial body model of the master virtual object located at the upper half of the body to extend out of the cabin of the virtual vehicle through a window or door according to the trigger operation, while the partial body model of the lower half of the body remains inside the cabin.
Step 4031b, switch the first camera to a second camera bound to the partial body model.
The partial body model located in the upper half of the body includes at least one of a head, a neck, and a shoulder that host the virtual object. The second camera is a camera bound to the head or neck or shoulders of the master virtual object.
After the partial body model of the master virtual object located at the upper half of the body extends out of the cockpit of the virtual vehicle, the first camera is switched to the second camera.
In summary, in the method provided in this embodiment, the upper-body model of the master virtual object is controlled to extend out of the virtual vehicle, and then the first camera is switched to the second camera. This more realistically simulates the real-world tactical operation of leaning out of, or shooting from, a cockpit window, giving a lifelike simulation effect.
For a virtual vehicle including both an internal and an external seat, the above step 4031 can be implemented as the following step 4031c and step 4031d, as shown in fig. 9.
Step 4031c, switching the master control virtual object from the internal seating position of the virtual vehicle to an external seating position according to the triggering operation, wherein the external seating position is a seating position located outside a cockpit of the virtual vehicle;
when the first camera is adopted for acquiring the virtual environment picture, the main control virtual object is taken at an internal sitting position of the virtual vehicle, and the internal sitting position is a sitting position in a cabin of the virtual vehicle. When the triggering operation is received, the terminal controls the main control virtual object to switch the main control virtual object from the internal seating position of the virtual vehicle to the external seating position according to the triggering operation.
In some embodiments, there are at least two exterior seating positions, each having a respective number. According to the trigger operation, the terminal determines, among the at least two exterior seating positions, the target exterior seating position with the smallest number that is in an idle state, and switches the master virtual object from the interior seating position of the virtual vehicle to that target exterior seating position.
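The seat selection just described (lowest-numbered idle exterior seat) can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the dict representation of seats and the function name are assumptions.

```python
def pick_target_external_seat(seats):
    """Return the lowest-numbered idle exterior seat, or None if all are taken.

    seats: dict mapping seat number -> occupant, with None meaning idle.
    Mirrors the rule above: among the exterior seating positions, pick the
    one with the smallest number that is in an idle state.
    """
    for number in sorted(seats):
        if seats[number] is None:
            return number
    return None
```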
In some embodiments, the master virtual object is switched from the interior seat to the exterior seat of the virtual vehicle according to a triggering operation when the interior seat of the master virtual object is not a driver seat.
Step 4031d, switch the first camera to a second camera bound to the body model of the master virtual object.
The body model of the master virtual object sitting in the external seating position includes at least one of a head, a neck, and a shoulder of the master virtual object. The second camera is a camera bound to the head or neck or shoulders of the master virtual object.
After the master virtual object is switched from the interior seating position to the exterior seating position, the first camera is switched to the second camera. As shown in fig. 10, the master virtual object is first switched from the interior seating position of the virtual helicopter to the exterior seating position; then, as the first camera 1 is switched to the second camera 2, the corresponding first virtual environment screen 111 is switched to the second virtual environment screen 113. Alternatively, the first virtual environment screen 111 is first switched to a transition screen 112, and the transition screen 112 is then switched to the second virtual environment screen 113.
Optionally, the second camera can be controlled by a user, for example, the user rotates the second camera using the rotational angle control, so that the second camera can rotate around the main control virtual object, thereby acquiring a second virtual environment picture in different angle directions with the main control virtual object as a reference point.
Optionally, the terminal controls the main control virtual object to rotate the view direction of the second view within a limited rotation range that is not blocked by a cockpit of the virtual vehicle according to the position of the external passenger seat in the virtual vehicle. For example, when the external seating position is a seating position located on the left side of the virtual vehicle, the main control virtual object is controlled to rotate the viewing angle direction of the second viewing angle within a limited rotation range of 180 degrees on the left side; for another example, when the external seating position is a seating position located on the right side of the virtual vehicle, the master virtual object is controlled to rotate the viewing angle direction of the second viewing angle within the limited rotation range of 180 degrees on the right side, as shown in fig. 12.
In summary, in the method provided in this embodiment, the master virtual object is switched from the interior seating position of the virtual vehicle to the exterior seating position according to the trigger operation, and then the first camera is switched to the second camera. This more realistically simulates the real environment in which the master virtual object aims or shoots from an exterior seating position outside the cabin, giving a lifelike simulation effect.
In one example, the virtual helicopter includes 8 seating positions, as shown in fig. 12:
Type A: the driver seat, such as seat No. 1 in fig. 12;
Type B: interior seating positions (firing not permitted), such as seats No. 2, No. 3 and No. 4 in fig. 12;
Type C: left exterior seats on the left landing gear (left-side firing permitted), such as seats No. 5 and No. 6 in fig. 12;
Type D: right exterior seats on the right landing gear (right-side firing permitted), such as seats No. 7 and No. 8 in fig. 12.
Taking the above virtual environment as an example of a game client of a tactical competitive game, fig. 13 shows a perspective switching scheme for shooting in an aircraft in the tactical competitive game according to an exemplary embodiment of the present application.
Step 1301, taking a helicopter;
the player can control the game character in the tactical competitive game to ride the virtual helicopter in the virtual battlefield.
Step 1302, in a helicopter;
the game client controls the game character to ride in the virtual helicopter after receiving the operation of the player.
Step 1303, clicking a probe button or a shooting button at the BCD position;
when the game character is in class B (inside seat), class C (left outside seat), class D (right outside seat), the player can click a probe button or a shoot button or a mirror-open button displayed on the game client.
Wherein, the probe button is used for controlling the game role to extend the upper body out of the virtual helicopter. The shooting button is a button for controlling the game character to shoot at the aimed position. The open mirror button is a button for controlling the game character to aim with the scope.
Step 1304, the default position is unchanged or automatically switched to a shooting position;
when the game role is in the B-class sitting position, if the player clicks the probe button or the shooting button or the mirror opening button, the game client automatically switches the game role to the C-class shooting position or the D-class shooting position. For example, the game character is automatically switched to the closest one of the class C or class D shooting positions.
When the game character is in the class C or class D sitting position, if the player clicks the probe button or the shooting button or the mirror opening button, the default sitting position of the game character is kept unchanged.
Step 1305, switching a new camera;
the game client switches from the camera originally bound to the vehicle (the default camera for short) to a camera near the shoulder of the game character (the shoulder camera for short). That is, the virtual environment picture observed by the camera near the shoulder of the game character is used as the virtual environment picture displayed on the game client.
Step 1306, obtaining a shoulder shooting visual angle at a helicopter shooting position;
the player obtains a shoulder shooting angle of the game character at the helicopter shooting position in the virtual environment picture displayed on the game client.
Step 1307, clicking the probe button again, or switching seats back into the helicopter;
the player can click the probe button again, or click the seat-switch button, to control the game character to move back inside the helicopter.
Step 1308, switching back to the default camera;
the game client switches from the shoulder camera back to the default camera. That is, the virtual environment picture observed by the vehicle-bound camera is used as the virtual environment picture displayed on the game client.
Step 1309, the default state is restored.
Several steps in the above figures are explained below.
1. Transposition logic in the virtual helicopter:
when riding the virtual helicopter, the virtual helicopter provides 8 seating positions. The left seats No. 5 and No. 6 are positions where shooting is possible, and the right seats No. 7 and No. 8 are positions where shooting is possible.
When a game character sits in the seats 2, 3 and 4 (the driving position 1 does not allow probe/shooting/open-mirror aiming) in the virtual helicopter, if a player clicks a probe button or a shooting button or an open-mirror aiming button, the game client automatically switches the game character to any one of the seats 5, 6, 7 and 8.
In one example, the game client switches the game character to seats No. 5, No. 6, No. 7 and No. 8 in order. That is, it first checks whether another game character occupies seat No. 5; if not, the game character is switched to seat No. 5. If seat No. 5 is occupied, it checks whether another game character occupies seat No. 6; if not, the game character is switched to seat No. 6. If seat No. 6 is occupied, it checks seat No. 7 in the same way, and if seat No. 7 is also occupied, it checks seat No. 8; whichever seat is found free first receives the game character.
When a game character rides on the virtual helicopter, a transposition control is displayed on the lower right, and a riding position where a player is currently positioned is displayed on the transposition control. When the player clicks the transposition control, the riding position of the game role can be quickly switched.
2. Switching the cameras;
1) Selecting a suitable binding position on the three-dimensional model of the game character. Because the game character may bend forward and shoot downward while on the virtual helicopter, the camera's lens offset can be large; the neck of the game character is therefore a relatively reasonable binding position, and the resulting camera offset looks natural.
2) Selecting the appropriate offset distance of the camera and spring arm length.
A world coordinate system exists in a map in a tactical competitive game. The world coordinate system is a three-dimensional coordinate system. The world coordinate system is based on a point in the map as an origin. The plane of the two coordinate axes xy corresponds to the horizontal plane, and the z axis is vertical to the horizontal plane and corresponds to the height.
Suppose that in the three-dimensional model of the game character, the real-time coordinate of the neck in the world coordinate system is (X1, Y1, Z1), and the offset of the second camera with respect to the neck is (ΔX, ΔY, ΔZ).
The position of the switched second camera is then (X1 + ΔX, Y1 + ΔY, Z1 + ΔZ).
The length of the spring arm of the second camera is set to L.
Example: when the second camera is bound to the neck of the game character, ΔX = 0, ΔY = 0, ΔZ = 25 cm, and L = 30 cm.
This placement gives a reasonable camera for the helicopter shooting positions.
3. Camera angle limitation;
there is a limit on the firing angle when the game character is in a left or right exterior seating position of the virtual helicopter. Assuming the positive x-axis direction is the default facing, the game character may only rotate plus or minus 90 degrees around the z-axis from the default direction. Meanwhile, the rotation angle of the camera stays consistent with that of the game character, keeping the shooting operation reasonable.
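The plus-or-minus-90-degree restriction is a simple clamp around the seat's default facing. As a minimal Python sketch (function and parameter names are illustrative assumptions, not from the patent):

```python
def clamp_fire_yaw(requested_yaw_deg, default_yaw_deg=0.0, limit_deg=90.0):
    """Clamp the character/camera yaw to +/- limit_deg around the seat's
    default facing, so an exterior seat can only rotate within a
    180-degree arc, as described above."""
    lo = default_yaw_deg - limit_deg
    hi = default_yaw_deg + limit_deg
    return max(lo, min(hi, requested_yaw_deg))
```

Because the camera's rotation angle is kept consistent with the character's, the same clamped yaw can drive both.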
For the following embodiments of the apparatus provided in the embodiments of the present application, reference may be made to the above one-to-one corresponding method embodiments for details that are not described in detail in the embodiments of the apparatus.
Fig. 14 is a block diagram illustrating a perspective switching apparatus in a virtual environment according to an exemplary embodiment of the present application. The apparatus having an application program running therein that supports a virtual environment, the apparatus comprising:
a display module 1420, configured to display a first virtual environment picture of a master control virtual object sitting on a virtual vehicle, where the first virtual environment picture is a picture acquired by using a first viewing angle determined based on the virtual vehicle, and a trigger control is superimposed on the first virtual environment picture;
a receiving module 1440, configured to receive a trigger operation acting on the trigger control;
a switching module 1460, configured to switch and display the first virtual environment picture as a second virtual environment picture according to the trigger operation, where the second virtual environment picture is a picture acquired based on a second view angle determined by the master virtual object.
In an optional embodiment, the switching module 1460 is configured to switch a first camera to a second camera according to the triggering operation, where the first camera is a camera bound to the virtual vehicle, and the second camera is a camera bound to the master virtual object; acquiring a second virtual environment picture acquired by the second camera for the virtual environment according to the second visual angle; and switching and displaying the first virtual environment picture as the second virtual environment picture.
In an alternative embodiment, the second camera is a camera bound to a head position of the master virtual object; or the second camera is a camera bound to the neck position of the master control virtual object; or the second camera is a camera bound at the shoulder position of the main control virtual object.
In an alternative embodiment, the master virtual object rides in an interior seating of the virtual vehicle; the switching module 1460 is configured to control, according to the trigger operation, the body model of the main control virtual object located in the upper body to extend out of the virtual vehicle; switching the first camera to the second camera bound to the part body model.
In an alternative embodiment, the master virtual object rides in an interior seating of the virtual vehicle; the switching module 1460 is configured to switch the master virtual object from an internal seat of the virtual vehicle to an external seat according to the triggering operation, where the external seat is a seat located outside a cabin of the virtual vehicle; switching the first camera to the second camera bound to the body model of the master virtual object.
In an alternative embodiment, the number of said external seats is at least two, each said external seat having a respective number; the switching module 1460 is configured to determine, according to the trigger operation, a target external seating position with the smallest number and in an idle state on at least two external seating positions; switching the master virtual object from an interior seating of the virtual vehicle to the target exterior seating.
In an optional embodiment, the switching module 1460 is configured to switch the master virtual object from the interior seat to the exterior seat of the virtual vehicle according to the triggering operation when the interior seat of the master virtual object is not the driving seat.
In an optional embodiment, the apparatus further comprises: a rotation module 1480;
the rotation module 1480 is configured to control the main control virtual object to rotate the view direction of the second view within a limited rotation range that is not covered by the cockpit of the virtual vehicle according to the position of the external passenger seat in the virtual vehicle.
In an optional embodiment, the receiving module 1440 is configured to receive a triggering operation acting on the triggering control again; the switching module 1460 is configured to restore and display the second virtual environment picture as the first virtual environment picture according to the trigger operation.
In an alternative embodiment, the trigger control comprises: at least one of an aiming control, a firing control, and a probe control;
the aiming control is used for controlling the main control virtual object to aim by using a virtual weapon;
the shooting control is used for controlling the main control virtual object to shoot by using a virtual weapon;
the probe control is used for controlling all or part of the body model of the main control virtual object to extend out of the cabin of the virtual carrier.
It should be noted that: in the above embodiment, when the device switches the view angle in the virtual environment, only the division of the functional modules is illustrated as an example. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the view angle switching apparatus provided in the above embodiment belongs to the same concept as the embodiments of the view angle switching method in a virtual environment; its specific implementation process is described in the method embodiments and is not repeated here.
Fig. 15 shows a block diagram of a terminal 1500 according to an embodiment of the present application. The terminal 1500 may be an electronic device such as a mobile phone, a tablet computer, a smart television, a multimedia playing device, a wearable device, a desktop computer, or a server. The terminal 1500 may be configured to implement the method for switching a view angle in a virtual environment provided in the above embodiments.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
The processor 1501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1501 may further include an AI (Artificial Intelligence) processor for handling computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement a method of perspective switching in a virtual environment as provided by method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device may include: at least one of a display 1504, an audio circuit 1505, a communication interface 1506, and a power supply 1507.
Those skilled in the art will appreciate that the structure shown in Fig. 15 does not constitute a limitation on the terminal 1500; the terminal may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
In an exemplary embodiment, there is also provided a terminal comprising a processor and a memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions. The at least one instruction, at least one program, set of codes, or set of instructions is configured to be executed by the processor to implement the method of perspective switching in a virtual environment described above.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions which, when executed by a processor of a terminal, implements the above-described method of perspective switching in a virtual environment. Optionally, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product which, when executed, implements the above-described method of perspective switching in a virtual environment.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association relationship between associated objects and indicates three possible relationships; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A view angle switching method in a virtual environment, applied to a terminal in which an application program supporting the virtual environment runs, the method comprising:
displaying a first virtual environment picture of a main control virtual object seated in a virtual vehicle, wherein the first virtual environment picture is a picture captured from a first view angle determined based on the virtual vehicle, and a trigger control is superimposed on the first virtual environment picture;
receiving a trigger operation acted on the trigger control;
switching a first camera to a second camera according to the trigger operation, wherein a mounting point exists on the virtual vehicle, and the first camera is mounted at a preset distance behind the mounting point, or the first camera is an in-cabin camera located in a cabin of the virtual vehicle, the in-cabin camera simulating the view angle of a passenger sitting in an interior seat; the second camera is a camera bound to a neck position of the main control virtual object and is provided with a spring arm of length L, where L is greater than 0; the spring arm is used for automatically controlling how the second camera responds when the second camera is blocked by a geometric body or another object in a game level; and the position of the second camera is obtained from the length of the spring arm and an offset distance;
acquiring, from the virtual environment, a second virtual environment picture captured by the second camera at a second view angle, wherein the position of the second camera is obtained by combining the neck offset and the spring arm;
and switching the displayed first virtual environment picture to the second virtual environment picture.
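Claim 1's spring-arm mechanism can be pictured as placing the second camera behind the neck anchor at the arm's length plus an offset, and shortening the arm whenever scene geometry would occlude the camera. The following is a minimal Python sketch under that reading, not the patent's implementation; the raycast distance is passed in as a stand-in for a real scene query, and all names are illustrative:

```python
def spring_arm_position(anchor, direction, arm_length, offset, raycast_distance):
    """Place a camera behind `anchor` on a spring arm.

    anchor: (x, y, z) neck position the arm is attached to.
    direction: unit vector pointing from the camera toward the anchor.
    arm_length: nominal spring-arm length L (greater than 0).
    offset: extra offset distance added along the arm.
    raycast_distance: distance to the nearest obstruction along the arm
        (a hypothetical stand-in for a real scene raycast).
    """
    desired = arm_length + offset
    # If geometry blocks the arm, shorten it so the camera is not occluded.
    effective = min(desired, raycast_distance)
    # Back the camera off from the anchor along the (negated) view direction.
    return tuple(a - d * effective for a, d in zip(anchor, direction))
```

With no obstruction the camera sits the full arm length plus offset behind the neck anchor; when a wall is closer than that, the arm collapses to the raycast distance.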
2. The method of claim 1, wherein the master virtual object is seated in an interior seat of the virtual vehicle; and
the switching the first camera to the second camera according to the trigger operation comprises:
controlling an upper-body part of the body model of the master virtual object to extend out of the virtual vehicle according to the trigger operation; and
switching the first camera to the second camera bound to the extended part of the body model.
3. The method of claim 1, wherein the master virtual object is seated in an interior seat of the virtual vehicle; and
the switching the first camera to the second camera according to the trigger operation comprises:
switching the master virtual object from the interior seat to an exterior seat of the virtual vehicle according to the trigger operation, the exterior seat being a seat outside a cockpit of the virtual vehicle; and
switching the first camera to the second camera bound to the body model of the master virtual object.
4. The method of claim 3, wherein there are at least two exterior seats, each exterior seat having a respective number; and
the switching the master virtual object from the interior seat to an exterior seat of the virtual vehicle according to the trigger operation comprises:
determining, from the at least two exterior seats according to the trigger operation, a target exterior seat that has the smallest number and is in an idle state; and
switching the master virtual object from the interior seat of the virtual vehicle to the target exterior seat.
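The seat-selection step in claim 4 reduces to picking the lowest-numbered exterior seat that is currently idle. A minimal sketch (names are illustrative, not from the patent):

```python
def pick_target_external_seat(seats):
    """Return the lowest-numbered exterior seat that is idle, or None.

    seats: dict mapping seat number -> True if the seat is idle (unoccupied).
    """
    idle = [number for number, is_idle in seats.items() if is_idle]
    # The target exterior seat is the idle one with the smallest number.
    return min(idle) if idle else None
```
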
5. The method of claim 3, wherein the switching the master virtual object from the interior seat to an exterior seat of the virtual vehicle according to the triggering operation comprises:
when the interior seat of the master virtual object is not a driver seat, switching the master virtual object from the interior seat to the exterior seat of the virtual vehicle according to the triggering operation.
6. The method of claim 3, further comprising:
and controlling, according to the position of the exterior seat on the virtual vehicle, the master virtual object to rotate the view direction of the second view angle within a limited rotation range that is not blocked by the cabin of the virtual vehicle.
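The limited rotation range in claim 6 amounts to clamping the second view angle to a per-seat interval chosen so that the cabin never blocks the view. A minimal sketch (the degree limits and names are illustrative assumptions, not from the patent):

```python
def clamp_yaw(yaw, seat_limits):
    """Clamp the second view angle's yaw to the seat's unobstructed range.

    seat_limits: (min_yaw, max_yaw) in degrees, determined by where the
    exterior seat sits on the vehicle relative to the cabin.
    """
    lo, hi = seat_limits
    return max(lo, min(hi, yaw))
```
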
7. The method of any of claims 1 to 6, further comprising:
receiving, again, the trigger operation acting on the trigger control; and
restoring the displayed second virtual environment picture to the first virtual environment picture according to the trigger operation.
8. The method of any of claims 1 to 6, wherein the triggering control comprises: at least one of an aiming control, a firing control, and a probe control;
the aiming control is used for controlling the main control virtual object to aim by using a virtual weapon;
the shooting control is used for controlling the main control virtual object to shoot by using the virtual weapon;
the probe control is used for controlling all or part of the body model of the main control virtual object to extend out of the cabin of the virtual vehicle.
9. A view angle switching apparatus in a virtual environment, applied to a terminal in which an application program supporting the virtual environment runs, the apparatus comprising:
a display module, configured to display a first virtual environment picture of a main control virtual object seated in a virtual vehicle, wherein the first virtual environment picture is a picture captured from a first view angle determined based on the virtual vehicle, and a trigger control is superimposed on the first virtual environment picture;
a receiving module, configured to receive a trigger operation acting on the trigger control; and
a switching module, configured to: switch a first camera to a second camera according to the trigger operation, wherein a mounting point exists on the virtual vehicle, and the first camera is mounted at a preset distance behind the mounting point, or the first camera is an in-cabin camera located in a cabin of the virtual vehicle, the in-cabin camera simulating the view angle of a passenger sitting in an interior seat; the second camera is a camera bound to a neck position of the main control virtual object and is provided with a spring arm of length L, where L is greater than 0; the spring arm is used for automatically controlling how the second camera responds when the second camera is blocked by a geometric body or another object in a game level; and the position of the second camera is obtained from the length of the spring arm and an offset distance; acquire a second virtual environment picture captured by the second camera at a second view angle; and switch the displayed first virtual environment picture to the second virtual environment picture.
10. A terminal, characterized in that the terminal comprises: a processor and a memory, the memory storing a set of codes or instructions that are loaded and executed by the processor to implement a method of view switching in a virtual environment as claimed in any one of claims 1 to 8 above.
11. A computer readable storage medium having stored thereon a set of codes or instructions, which are loaded and executed by a processor to implement a method of view switching in a virtual environment as claimed in any one of the preceding claims 1 to 8.
CN201910866721.3A 2019-09-12 2019-09-12 Visual angle switching method, device, terminal and medium in virtual environment Active CN110559662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910866721.3A CN110559662B (en) 2019-09-12 2019-09-12 Visual angle switching method, device, terminal and medium in virtual environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910866721.3A CN110559662B (en) 2019-09-12 2019-09-12 Visual angle switching method, device, terminal and medium in virtual environment

Publications (2)

Publication Number Publication Date
CN110559662A CN110559662A (en) 2019-12-13
CN110559662B true CN110559662B (en) 2021-01-26

Family

ID=68779845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910866721.3A Active CN110559662B (en) 2019-09-12 2019-09-12 Visual angle switching method, device, terminal and medium in virtual environment

Country Status (1)

Country Link
CN (1) CN110559662B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111228804B (en) * 2020-02-04 2021-05-14 腾讯科技(深圳)有限公司 Method, device, terminal and storage medium for driving vehicle in virtual environment
CN111672107B (en) * 2020-05-28 2022-09-02 深圳市腾讯网域计算机网络有限公司 Virtual scene display method and device, computer equipment and storage medium
CN111760275A (en) * 2020-07-08 2020-10-13 网易(杭州)网络有限公司 Game control method and device and electronic equipment
CN112330823B (en) * 2020-11-05 2023-06-16 腾讯科技(深圳)有限公司 Virtual prop display method, device, equipment and readable storage medium
CN112957733B (en) * 2021-03-31 2022-10-11 歌尔股份有限公司 Game picture display method, positioning base station, host equipment and related equipment
CN113398564B (en) * 2021-07-12 2024-02-13 网易(杭州)网络有限公司 Virtual character control method, device, storage medium and computer equipment
CN113730908B (en) * 2021-09-15 2023-08-25 腾讯科技(深圳)有限公司 Picture display method and device, storage medium and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2742464Y (en) * 2004-10-30 2005-11-23 徐红兵 Spring arm for stabilizer of camera
CN108619720A (en) * 2018-04-11 2018-10-09 腾讯科技(深圳)有限公司 Playing method and device, storage medium, the electronic device of animation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015103426B4 (en) * 2015-03-09 2020-07-02 Carl Zeiss Meditec Ag Microscope system and method for automated alignment of a microscope
CN109260702A (en) * 2018-08-30 2019-01-25 腾讯科技(深圳)有限公司 Virtual carrier control method, computer equipment and storage medium in virtual scene

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2742464Y (en) * 2004-10-30 2005-11-23 徐红兵 Spring arm for stabilizer of camera
CN108619720A (en) * 2018-04-11 2018-10-09 腾讯科技(深圳)有限公司 Playing method and device, storage medium, the electronic device of animation

Also Published As

Publication number Publication date
CN110559662A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN110559662B (en) Visual angle switching method, device, terminal and medium in virtual environment
US20200254353A1 (en) Synchronized motion simulation for virtual reality
CN112090069B (en) Information prompting method and device in virtual scene, electronic equipment and storage medium
CN113181650B (en) Control method, device, equipment and storage medium for calling object in virtual scene
US8998715B2 (en) Control of vehicles in three dimensional virtual environments
US9044670B2 (en) Using input from a pointing device to control a simulated object
US10176586B2 (en) Image generation system, image generation method, and information storage medium
CN112076473B (en) Control method and device of virtual prop, electronic equipment and storage medium
EP4119210A1 (en) State switching method and apparatus in virtual scene, device, medium, and program product
CN110732135B (en) Virtual scene display method and device, electronic equipment and storage medium
EP4011470A1 (en) Display method and apparatus for virtual environment picture, and device and storage medium
CN110876849B (en) Virtual vehicle control method, device, equipment and storage medium
CN113797536B (en) Control method, device, equipment and storage medium for objects in virtual scene
CN112402946B (en) Position acquisition method, device, equipment and storage medium in virtual scene
CN110585708B (en) Method, device and readable storage medium for landing from aircraft in virtual environment
CN111569414A (en) Flight display method and device of virtual aircraft, electronic equipment and storage medium
CN114130006B (en) Virtual prop control method, device, equipment, storage medium and program product
CN112138392B (en) Virtual object control method, device, terminal and storage medium
JP2018171309A (en) Simulation system and program
CN112156472A (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN112870694B (en) Picture display method and device of virtual scene, electronic equipment and storage medium
KR102528037B1 (en) Table type flight tactical simulation system supporting battalion-level tactical flight training
CN114042317A (en) Interaction method, device, equipment, medium and program product based on virtual object
CN116920371A (en) Method and device for processing props in game, electronic equipment and storage medium
CN116764196A (en) Processing method, device, equipment, medium and program product in virtual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant