CN110597389B - Virtual object control method in virtual scene, computer device and storage medium

Info

Publication number: CN110597389B
Application number: CN201910862693.8A
Authority: CN (China)
Prior art keywords: virtual, target, type, virtual object, scene
Inventor: 刘柏君
Original and current assignee: Tencent Technology (Shenzhen) Co., Ltd.
Other versions: CN110597389A (Chinese)
Legal status: Active; application granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Events: application CN201910862693.8A filed by Tencent Technology (Shenzhen) Co., Ltd.; publication of CN110597389A; grant and publication of CN110597389B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/003 - Navigation within 3D models or images
    • G06T 19/006 - Mixed reality
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 - Indexing scheme relating to G06F 3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to a method for controlling a virtual object in a virtual scene, a computer device, and a storage medium, and belongs to the field of virtual scene technologies. The method includes the following steps: displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, and the virtual vehicle includes at least one first type position and at least two second type positions; when a specified operation is received and the virtual object controlled by the terminal is located at a first type position, obtaining a target view angle direction; and determining a target position from the at least two second type positions according to the target view angle direction. The virtual object is thereby controlled to switch to the target position, which improves the efficiency of switching the position of the virtual object in the virtual vehicle.

Description

Virtual object control method in virtual scene, computer device and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a method, a computer device, and a storage medium for controlling a virtual object in a virtual scene.
Background
At present, many applications that construct virtual scenes (such as virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooters, and multiplayer online battle arena games) provide virtual vehicles and allow the user to control a virtual object to switch positions in a vehicle and shoot.
In the related art, the display interface of a virtual scene displayed on a touch-screen terminal generally includes a virtual control for controlling the position of a virtual object in a virtual vehicle in the virtual scene. For example, the virtual control may be an icon key labeled with each position in the virtual vehicle, in which the current position of the virtual object is marked. When the terminal detects a touch operation of the user on the virtual key, the terminal moves the virtual object within the virtual vehicle according to a fixed rotation order, and each detected touch operation switches the virtual object by only one position in the virtual vehicle.
Because positions must be traversed in this fixed order, moving the virtual object to a desired target position can require multiple touch operations on the virtual key, which makes position switching of the virtual object in the virtual vehicle inefficient.
Disclosure of Invention
The embodiments of the present application provide a method for controlling a virtual object in a virtual scene, a computer device, and a storage medium, which can improve the efficiency of position switching of a virtual object in a virtual vehicle. The technical solutions are as follows:
in one aspect, a method for controlling a virtual object in a virtual scene is provided, where the method is performed by a terminal, and the method includes:
displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where execution of a specified action is prohibited, and the second type position is a position where execution of the specified action is permitted;
when a specified operation is received and the virtual object controlled by the terminal is located at the first type position, acquiring a target view angle direction, wherein the target view angle direction is a view angle direction for observing the virtual object through a camera model;
determining a target position from the at least two second type positions according to the target view angle direction;
and controlling the virtual object to be switched to the target position.
In one aspect, a method for controlling a virtual object in a virtual scene is provided, where the method is performed by a terminal, and the method includes:
displaying a first display interface of an application program, where the first display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where execution of a specified action is prohibited, and the second type position is a position where execution of the specified action is permitted; the first display interface includes a designated control; and in the first display interface, the virtual object controlled by the terminal is located at the first type position;
receiving a triggering operation on the specified control, wherein the specified control is a control for triggering the virtual object to execute the specified action;
displaying a second display interface of the application program, where the virtual object is located at a target position in the second display interface; the target position is the position, among the at least two second type positions, that corresponds to the target view angle direction at the time the trigger operation is received; and the target view angle direction is the view angle direction in which the virtual object is observed through a camera model.
In one aspect, an apparatus for controlling a virtual object in a virtual scene is provided, the apparatus comprising:
the interface display module is used for displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where execution of a specified action is prohibited, and the second type position is a position where execution of the specified action is permitted;
a view angle obtaining module, configured to obtain a target view angle direction when a specified operation is received and a virtual object controlled by the terminal is located at the first type position, where the target view angle direction is a view angle direction in which the virtual object is observed through a camera model;
a position determining module, configured to determine a target position from the at least two second type positions according to the target view direction;
and the position switching module is used for controlling the virtual object to be switched to the target position.
Optionally, the position determination module includes:
the vehicle orientation obtaining submodule, used for obtaining the orientation of the virtual vehicle;
and the target position determining submodule, used for determining the target position from the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle.
Optionally, the target position determination submodule includes,
the priority acquiring unit is used for acquiring the priorities of the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle;
an idle position determination unit for determining respective idle positions of the at least two second type positions;
a first target location determining unit, configured to determine a location with a highest priority among the idle locations as the target location.
Optionally, the at least two second type positions include a first position located at the front left of the virtual vehicle, a second position located at the rear left of the virtual vehicle, a third position located at the front right of the virtual vehicle, and a fourth position located at the rear right of the virtual vehicle;
the priority acquiring unit is used for acquiring the priority of the received data,
if the horizontal component of the target view direction is shifted to the left relative to the horizontal component of the orientation of the virtual vehicle, determining the priorities of the at least two second-type positions from high to low as the first position, the second position, the third position and the fourth position, respectively;
if the horizontal component of the target view direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, determining the priorities of the at least two second-type positions from high to low as the third position, the fourth position, the first position and the second position, respectively.
Optionally, the at least two second type positions include a first position located at the front left of the virtual vehicle, a second position located at the rear left of the virtual vehicle, a third position located at the front right of the virtual vehicle, and a fourth position located at the rear right of the virtual vehicle;
the priority acquiring unit is used for acquiring the priority of the received data,
if the horizontal component of the target view angle direction is shifted to the left relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions from high to low are the first position, the second position, the third position and the fourth position respectively;
if the horizontal component of the target view angle direction is offset to the left relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is greater than 90 degrees, determining that the priorities of the at least two second type positions from high to low are the second position, the first position, the fourth position and the third position respectively;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions are the third position, the fourth position, the first position and the second position from high to low respectively;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is greater than 90 degrees, it is determined that the priorities of the at least two second type positions from high to low are the fourth position, the third position, the second position and the first position respectively.
Optionally, the position determination module includes:
the direction obtaining submodule, used for obtaining the relative direction of each of the at least two second type positions with respect to the center point of the virtual vehicle;
and the target position determining submodule, used for determining the target position from the at least two second type positions according to the target view angle direction and the relative direction of each of the at least two second type positions with respect to the center point of the virtual vehicle.
Optionally, the target position determination submodule includes,
a first component acquiring unit configured to acquire a first component of the target view direction on a horizontal plane;
a second component acquiring unit, configured to acquire at least two second components, where the at least two second components are the components, on the horizontal plane, of the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle;
the target component determining unit is used for determining a second component with the smallest included angle with the first component as a target component;
a second target position determining unit, configured to determine a second type position corresponding to the target component as the target position.
Optionally, the position switching module includes,
and the target position switching submodule is used for controlling the virtual object to be switched to the target position when the target position is idle.
Optionally, the display interface includes a designated control;
the specified operation is a trigger operation on the specified control, and the specified control is used for triggering the virtual object to execute the specified action.
Optionally, the specified action is an attack action, or the specified action is a pre-action of the attack action.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the virtual object control method in the above virtual scene.
In yet another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the virtual object control method in the above virtual scene.
According to the method and the device, when a specified operation is performed on the virtual object in the virtual scene, a target position is determined from the second type positions according to the obtained target view angle direction, and the virtual object is controlled to switch from a first type position in the virtual vehicle to the target position among the second type positions, which improves the efficiency of position switching of the virtual object in the virtual vehicle.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of a display interface of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a control flow of a virtual object in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of a virtual object control flow in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a display interface of a virtual scene according to the embodiment shown in FIG. 4;
FIG. 6 is a flowchart of a method for controlling a virtual object in a virtual scene according to an exemplary embodiment of the present application;
FIG. 7 is a schematic view of a virtual helicopter according to the embodiment of FIG. 6;
FIG. 8 is a schematic diagram of a display interface of another virtual scene according to the embodiment shown in FIG. 6;
FIG. 9 is a schematic view of a camera model to which the embodiment shown in FIG. 6 relates;
FIG. 10 is a schematic diagram of a coordinate system in a virtual scene according to the embodiment shown in FIG. 6;
FIG. 11 is a schematic view of a direction vector involved in the embodiment of FIG. 6;
FIG. 12 is a schematic illustration of a method of obtaining priority for a second type of location in accordance with the embodiment shown in FIG. 6;
FIG. 13 is a flow chart of a shooting intelligent relocation in a virtual helicopter according to the embodiment shown in FIG. 6;
FIG. 14 is a schematic illustration of another embodiment of FIG. 6 relating to obtaining a priority for a second type of location;
FIG. 15 is a flowchart of a method for controlling virtual objects in a virtual scene, according to an exemplary embodiment of the present application;
FIG. 16 is a schematic illustration of one type of acquisition of a second type of location to which the embodiment shown in FIG. 15 relates;
fig. 17 is a block diagram illustrating a configuration of a virtual object control apparatus in a virtual scene according to an exemplary embodiment of the present application;
fig. 18 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application, as recited in the appended claims.
Virtual scene: a virtual scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene may also be used for a battle between at least two virtual characters. Optionally, the virtual scene may also be used for a firefight between at least two virtual characters using virtual firearms. Optionally, the virtual scene may also be used for a fight between at least two virtual characters using virtual firearms within a target area that keeps shrinking over time in the virtual scene.
Virtual object: a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on a skeletal animation technique. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies a portion of the space in the three-dimensional virtual scene.
A virtual scene is typically generated by an application program in a computer device such as a terminal and rendered based on hardware (e.g., a screen) of the terminal. The terminal may be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.
Referring to fig. 1, a schematic structural diagram of a terminal according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the terminal includes a main board 110, an external input/output device 120, a memory 130, an external interface 140, a touch system 150, and a power supply 160.
The main board 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (e.g., a display screen), a sound playing component (e.g., a speaker), a sound collecting component (e.g., a microphone), various keys, and the like.
The memory 130 has program codes and data stored therein.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The touch system 150 may be integrated into a display component or a key of the external input/output device 120, and the touch system 150 is used to detect a touch operation performed by a user on the display component or the key.
The power supply 160 is used to power the various other components in the terminal.
In this embodiment of the application, the processor in the main board 110 may generate a virtual scene by executing or calling the program code and data stored in the memory, and display the generated virtual scene through the external input/output device 120. While the virtual scene is displayed, the touch system 150 may detect touch operations performed when the user interacts with the virtual scene.
The virtual scene may be a three-dimensional virtual scene, or it may be a two-dimensional virtual scene. Taking a three-dimensional virtual scene as an example, please refer to fig. 2, which shows a schematic diagram of a display interface of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 2, the display interface of the virtual scene includes a scene picture 200, and the scene picture 200 includes a virtual vehicle 210, an environment picture 220 of the three-dimensional virtual scene, and a virtual object 240. The virtual vehicle 210 may be the virtual vehicle in which the virtual object currently controlled by the user of the terminal is located; for example, when the currently controlled virtual object is a virtual character, the virtual vehicle 210 is the virtual vehicle in which the virtual character sits. Alternatively, the virtual vehicle 210 may itself be the virtual object currently controlled by the user of the terminal. The virtual object 240 may be a virtual object controlled by a user of another terminal.
In fig. 2, the virtual vehicle 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture displayed in the scene picture 200 consists of objects observed from the viewing angle of the virtual vehicle 210. For example, as shown in fig. 2, the environment picture 220 of the three-dimensional virtual scene includes the ground 224, the sky 225, the horizon 223, a hill 221, and a factory building 222.
The virtual object 240 may switch positions in the virtual vehicle 210 under the control of the user. For example, the screen of the terminal supports touch operation, and the scene picture 200 of the virtual scene includes a virtual control for triggering an attack action or a pre-action of an attack action. When the user touches the virtual control, the virtual object 240 may switch positions in the virtual vehicle 210.
In the embodiment of the application, the virtual object can perform operations such as view angle adjustment and position switching under the control of the terminal.
For example, please refer to fig. 3, which shows a schematic diagram of a control flow of a virtual object in a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 3, a terminal (such as the terminal shown in fig. 1) running an application corresponding to the virtual scene may control a virtual object in the virtual scene to switch positions in the virtual vehicle by performing the following steps.
Step 31, displaying a display interface of the application program, where the display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where execution of a specified action is prohibited, and the second type position is a position where execution of the specified action is permitted.
Step 32, when the specified operation is received and the virtual object controlled by the terminal is located at the first type position, acquiring a target view angle direction, where the target view angle direction is the view angle direction in which the virtual object is observed through the camera model.
Step 33, determining a target position from the at least two second type positions according to the target view direction.
Step 34, controlling the virtual object to switch to the target position.
According to the solution shown in this embodiment of the application, when the virtual object controlled by the terminal is located at a position where execution of the specified action is prohibited, the terminal obtains the target view angle direction upon receiving the specified operation, determines a target position from the positions where execution of the specified action is permitted according to the target view angle direction, and then controls the virtual object to switch to the target position. This provides the function of switching the virtual object, according to its target view angle direction, from a position where execution of the specified action is prohibited to a position where it is permitted, and the operability of the virtual object is improved.
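The flow of steps 31 to 34 can be expressed compactly in code. The following Python sketch is only an illustrative reconstruction: the patent describes the flow in prose, so the Seat structure, the function names, and the pick_target callback (one of the selection strategies detailed in the embodiments below) are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Seat:
    seat_id: int
    allows_action: bool        # False: first type position, True: second type
    occupant: object = None    # None means the seat is free

def on_specified_operation(seats: List[Seat], player: object, view_dir: tuple,
                           pick_target: Callable) -> Optional[Seat]:
    """Steps 32-34: runs when the specified operation is received."""
    current = next((s for s in seats if s.occupant is player), None)
    if current is None or current.allows_action:
        return current                        # not seated, or already allowed
    free = [s for s in seats if s.allows_action and s.occupant is None]
    target = pick_target(view_dir, free) if free else None
    if target is not None:                    # step 34: switch to the target
        current.occupant, target.occupant = None, player
    return target
```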
For another example, please refer to fig. 4, which shows a schematic diagram of a control flow of a virtual object in a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 4, a terminal (such as the terminal shown in fig. 1) running an application corresponding to the virtual scene may control the position switching of the virtual object in the virtual scene in the virtual vehicle by performing the following steps.
Step 41, displaying a first display interface of an application program, where the first display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where execution of a specified action is prohibited, and the second type position is a position where execution of the specified action is permitted; the first display interface includes a designated control; and in the first display interface, the virtual object controlled by the terminal is located at the first type position.
In this embodiment of the application, the first display interface of the application program includes a scene picture of the virtual scene and a designated control superimposed on the scene picture. When the virtual object is located at a first type position where execution of the designated action is prohibited, the user can control the virtual object in the virtual scene by operating the designated control.
And 42, receiving a triggering operation on the specified control, wherein the specified control is a control for triggering the virtual object to execute the specified action.
The specified action may be an attack action or a pre-action of an attack action; for example, the specified control may be a shooting control or a probe control. The shooting control is a virtual control used to control the virtual object to perform a shooting action. The probe control is a virtual control used to control the virtual object to perform a probe (lean-out) action.
In this embodiment of the application, the designated control is superimposed on the scene picture; when the user triggers the designated control, the terminal receives the trigger operation on the designated control, thereby controlling the virtual object to execute the designated action.
Step 43, displaying a second display interface of the application program, wherein the virtual object is located at a target position in the second display interface; the target position is a position corresponding to the target view angle direction when the trigger operation is received in the at least two second type positions; the target view direction is a view direction in which the virtual object is viewed by the camera model.
The virtual object in the second display interface is located at the target position, and the target position corresponds to the target view angle direction at the time the terminal receives the trigger operation on the designated control. That is, with the solution shown in this embodiment of the application, a single trigger operation on the designated control directly switches the virtual object to the position corresponding to the target view angle direction.
Please refer to fig. 5, which shows a schematic diagram of a display interface of a virtual scene according to an embodiment of the present application. As shown in fig. 5, the display interface 50 of the virtual scene includes a scene picture 51 and at least one designated control 52 (two designated controls are shown in fig. 5), and the scene picture 51 includes a virtual vehicle 51a and a virtual object 51b. The user can control the virtual object 51b to switch positions in the virtual vehicle 51a through the designated control 52.
In this embodiment of the application, when a specified operation is performed on the virtual object in the virtual scene, the target position is determined from the second type positions according to the obtained target view angle direction, and the virtual object is controlled to switch from a first type position in the virtual vehicle to the target position among the second type positions, which improves both the efficiency and the flexibility of position switching of the virtual object in the virtual vehicle.
Referring to fig. 6, a flowchart of a method for controlling a virtual object in a virtual scene according to an exemplary embodiment of the present application is shown. As shown in fig. 6, a terminal (such as the terminal shown in fig. 1) running an application corresponding to the virtual scene may control the position switching of the virtual object in the virtual scene in the virtual vehicle by performing the following steps.
Step 601, displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where a specified action is prohibited from being executed, and the second type position is a position where the specified action is permitted to be executed.
In the solution shown in this embodiment of the application, the virtual vehicle may be any type of virtual vehicle that includes at least one position where execution of the specified action is prohibited and at least two positions where execution of the specified action is permitted. For example, the virtual vehicle may be a virtual armored car, a virtual cruise ship, or a virtual helicopter.
For example, please refer to fig. 7, which shows a schematic diagram of a virtual helicopter according to an embodiment of the present application. In reality, a helicopter is an aircraft that flies by means of rotors and a tail rotor; its prominent characteristics are the ability to fly at low altitude (several meters above the ground) and low speed (down to hovering) while keeping the nose direction unchanged, and in particular the ability to take off and land vertically on a small site. As shown in fig. 7, the virtual helicopter may have 8 positions for riding or piloting. Position No. 1 is the driving position, at which the specified action cannot be performed and position switching cannot be performed. Positions No. 2, No. 3, and No. 4 are first type positions, that is, positions where execution of the specified action is prohibited. Positions No. 5, No. 6, No. 7, and No. 8 are second type positions, that is, positions where execution of the specified action is permitted.
Step 602, when a specified operation is received and the virtual object controlled by the terminal is located at the first type position, acquiring a target view direction, where the target view direction is a view direction in which the virtual object is observed through a camera model.
Optionally, the display interface may include a designated control; the specified operation may be a trigger operation on the specified control, and the specified control is used for triggering the virtual object to execute the specified action.
Optionally, the specific action is an attack action, or the specific action is a pre-action of the attack action.
Please refer to fig. 8, which illustrates a display interface diagram of a virtual scene according to an embodiment of the present application. As shown in fig. 8, when the designated control is the shooting control 82 or the probe control 83, the user can adjust the position of the virtual object in the virtual vehicle 81 by touching the area where the shooting control 82 or the probe control 83 is located.
The virtual vehicle 81 is a virtual helicopter whose internal position layout is as shown in fig. 7. When the virtual object controlled by the terminal is located at position No. 2, No. 3, or No. 4, the target view angle direction is obtained, the target view angle direction being the view angle direction in which the virtual object is observed through the camera model.
Referring to fig. 9, a schematic diagram of a camera model according to an embodiment of the present application is shown. A point in the virtual object 91 is determined as a rotation center 92, and the camera model rotates around this rotation center. Optionally, the camera model has an initial position, which is a position above and behind the virtual object, such as a position behind the head. Illustratively, as shown in fig. 9, the initial position is position 93; when the camera model rotates to position 94 or position 95, the view angle direction of the camera model changes with the rotation.
In this embodiment of the application, the virtual object 91 may be a virtual vehicle in the virtual scene, or any other form of virtual object that can be controlled by the user, such as a virtual animal.
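The patent does not specify how the camera model's view angle direction is represented internally. A common parametrization, shown below purely as an assumption, derives the unit view vector from the yaw and pitch of a camera orbiting the rotation center, in a z-up coordinate system like the one in fig. 10.

```python
import math

def view_direction(yaw_deg: float, pitch_deg: float) -> tuple:
    """Unit view vector of a camera model orbiting a rotation center.
    The yaw/pitch parametrization is an assumption, not from the patent."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),   # x
            math.cos(pitch) * math.sin(yaw),   # y
            math.sin(pitch))                   # z (up)
```

For example, view_direction(0, 0) returns (1.0, 0.0, 0.0), a horizontal view along the x axis.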
Step 603, obtain the orientation of the virtual vehicle.
Please refer to fig. 10, which shows a schematic diagram of a coordinate system in a virtual scene according to an embodiment of the present application. As shown in fig. 10, directions in the virtual scene are divided in the same way as in the real world: a rectangular coordinate system with mutually perpendicular x, y, and z axes may be established in the virtual scene, and any direction in the virtual scene can be represented by a direction vector in this coordinate system.
Please refer to fig. 11, which shows a schematic diagram of direction vectors according to an embodiment of the present application. As shown in fig. 11, since every picture of the virtual scene is captured by the camera, the orientation of the camera defines a unit direction vector, denoted vector b, and this vector is the vector of the target view angle direction. The unit direction vector denoted vector a in fig. 11 is the unit vector of the orientation of the virtual vehicle.
Step 604, determining the target position from the at least two second-type positions according to the target view direction and the orientation of the virtual vehicle.
In the embodiment of the present application, the target position may be determined from at least two positions allowing the designated action to be performed according to the target viewing direction and the orientation of the virtual vehicle.
In a possible implementation manner, the terminal may obtain priorities of at least two positions allowing the designated action to be executed according to the target view direction and the orientation of the virtual vehicle, determine each idle position of the at least two positions allowing the designated action to be executed, and determine a position with a highest priority among the idle positions as the target position.
When the at least two second type positions include a first position at the front left of the virtual vehicle, a second position at the rear left, a third position at the front right, and a fourth position at the rear right, the priorities of the at least two second type positions may be obtained according to the target view angle direction and the orientation of the virtual vehicle as follows: if the horizontal component of the target view angle direction is offset to the left relative to the horizontal component of the orientation of the virtual vehicle, the priorities of the at least two second type positions from high to low are the first position, the second position, the third position, and the fourth position, respectively; if the horizontal component of the target view angle direction is offset to the right relative to the horizontal component of the orientation of the virtual vehicle, the priorities from high to low are the third position, the fourth position, the first position, and the second position, respectively. The highest-priority free position is then taken as the target position, as sketched below.
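The "highest-priority free position" rule is straightforward; a minimal Python sketch follows, with hypothetical names (the patent itself specifies no code):

```python
from typing import Dict, List, Optional

def highest_priority_free(priority: List[int],
                          occupants: Dict[int, object]) -> Optional[int]:
    """Return the highest-priority idle seat id, or None if every second
    type position is occupied. occupants maps a seat id to its occupant
    (None when the seat is idle)."""
    for seat_id in priority:
        if occupants.get(seat_id) is None:
            return seat_id
    return None
```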
For example, please refer to fig. 12, which shows a schematic diagram of obtaining the priorities of the second type positions according to an embodiment of the present application. As shown in fig. 12, position No. 1 is the driving position, positions No. 2, No. 3, and No. 4 are first type positions, and positions No. 5, No. 6, No. 7, and No. 8 are second type positions. When the virtual object is located at a first type position, let the direction vector of the orientation of the virtual vehicle be vector a and the direction vector of the target view angle direction be vector b. Neglecting the vector components in the z-axis direction, vector a has a horizontal direction component a' = (x0, y0) and vector b has a horizontal direction component b' = (x1, y1). By cross-multiplication of the vectors, that is, b' × a' = x1·y0 - x0·y1, it can be calculated that x1·y0 - x0·y1 < 0, which means that the horizontal component a' can be rotated clockwise by an angle of less than 180 degrees to the position of the horizontal component b'; it can thus be obtained that the target view angle direction is on the right side with respect to the orientation of the virtual vehicle. It can then be determined that the priorities of the second type positions are, from high to low, position No. 7, position No. 8, position No. 5, and position No. 6, respectively. And if position No. 7 is a non-idle position and position No. 8 is an idle position, position No. 8 is determined as the target position. If the cross-multiplication b' × a' = x1·y0 - x0·y1 instead yields a result greater than or equal to 0, the horizontal component a' can be rotated counterclockwise by an angle of less than 180 degrees to the position of the horizontal component b', and it can be obtained that the target view angle direction is on the left side with respect to the orientation of the virtual vehicle.
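The left/right test just derived reduces to a few lines. The sketch below follows the sign convention of fig. 12 (x1·y0 - x0·y1 < 0 means the view is offset to the right) and its seat numbering (5 front left, 6 rear left, 7 front right, 8 rear right); both depend on the patent's coordinate system, so the signs should be treated as assumptions in an engine with different axes.

```python
from typing import List, Sequence

def priority_order(vehicle_forward: Sequence[float],
                   view_dir: Sequence[float]) -> List[int]:
    """Seat priorities for the fig. 12 layout, highest priority first."""
    x0, y0 = vehicle_forward[0], vehicle_forward[1]   # z component ignored
    x1, y1 = view_dir[0], view_dir[1]
    if x1 * y0 - x0 * y1 < 0:   # view offset to the right of the heading
        return [7, 8, 5, 6]
    return [5, 6, 7, 8]         # view offset to the left (or parallel)
```

Passing the returned list to highest_priority_free above yields the target position.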
Please refer to fig. 13, which shows a flowchart of intelligent seat switching for shooting in a virtual helicopter according to an embodiment of the present application. As shown in fig. 13, when the user controls the virtual object to board the virtual helicopter in the virtual scene, the virtual object enters the virtual helicopter in the application program. When the virtual object is in a class B position of the virtual helicopter (the class B positions being the first type positions) and the user clicks the probe or shooting virtual key, that is, performs the specified operation, the application client calculates the angular relationship between the aiming direction (the target view angle direction) and the helicopter head direction (the orientation of the virtual vehicle). In conjunction with the content shown in fig. 12, the application client may determine that the aiming direction is inclined to the right relative to the helicopter head direction, determine that the sequence of intelligent seat switching is position No. 5, No. 6, No. 7, and No. 8, and, in the display interface, switch the virtual object to position No. 5, No. 6, No. 7, or No. 8 in that order.
In another possible implementation, when the at least two second type positions include a first position at the front left of the virtual vehicle, a second position at the rear left, a third position at the front right, and a fourth position at the rear right, the method for obtaining the priorities of the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle distinguishes four cases:
if the horizontal component of the target view angle direction is deviated to the left relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions from high to low are a first position, a second position, a third position and a fourth position respectively;
if the horizontal component of the target visual angle direction is deviated to the left relative to the horizontal component of the orientation of the virtual carrier, and the included angle between the horizontal component of the target visual angle direction and the horizontal component of the orientation of the virtual carrier is greater than 90 degrees, determining that the priorities of at least two second type positions from high to low are respectively a second position, a first position, a fourth position and a third position;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions are respectively a third position, a fourth position, a first position and a second position from high to low;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is greater than 90 degrees, determining that the priorities of the at least two second type positions from high to low are a fourth position, a third position, a second position and a first position respectively.
For example, please refer to fig. 14, which shows another schematic diagram of obtaining the priorities of the second type positions according to an embodiment of the present application. As shown in fig. 14, position No. 1 is the driving position, positions No. 2, No. 3, and No. 4 are first type positions, and positions No. 5, No. 6, No. 7, and No. 8 are second type positions. When the virtual object is located at a first type position, let the direction vector of the orientation of the virtual vehicle be vector a and the direction vector of the target view angle direction be vector b. Omitting the vector components in the z-axis direction and cross-multiplying the vectors in the horizontal direction, it can be calculated that the target view angle direction is offset to the left with respect to the orientation of the virtual vehicle, and that the included angle between the horizontal direction component of vector a and the horizontal direction component of vector b is greater than 90 degrees. It can then be determined that the priorities of the second type positions are, from high to low, position No. 6, position No. 5, position No. 8, and position No. 7, respectively. And if position No. 6 is a non-idle position and position No. 5 is an idle position, position No. 5 is determined as the target position.
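The four-case rule adds one more test to the earlier sketch: the dot product of the two horizontal components decides whether the included angle exceeds 90 degrees (a negative dot product means an angle greater than 90 degrees). A sketch under the same coordinate-system and seat-numbering assumptions as before:

```python
from typing import List, Sequence

def priority_order_four_cases(vehicle_forward: Sequence[float],
                              view_dir: Sequence[float]) -> List[int]:
    """Fig. 14 variant; seat numbering assumes 5 = first (front left),
    6 = second (rear left), 7 = third (front right), 8 = fourth (rear right)."""
    x0, y0 = vehicle_forward[0], vehicle_forward[1]
    x1, y1 = view_dir[0], view_dir[1]
    left = x1 * y0 - x0 * y1 >= 0        # same left/right test as fig. 12
    within_90 = x0 * x1 + y0 * y1 >= 0   # dot >= 0: included angle <= 90
    if left and within_90:
        return [5, 6, 7, 8]   # first, second, third, fourth
    if left:
        return [6, 5, 8, 7]   # second, first, fourth, third
    if within_90:
        return [7, 8, 5, 6]   # third, fourth, first, second
    return [8, 7, 6, 5]       # fourth, third, second, first
```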
Step 605, controlling the virtual object to switch to the target position.
A target position is determined among the second type positions according to the priorities, and the virtual object is controlled to switch from the first type position to the target position and to execute the specified action.
When the virtual object is at a first type position, the application program displays the first display interface, which includes a scene picture of the virtual scene; the virtual scene includes the virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where execution of the specified action is prohibited, and the second type position is a position where execution of the specified action is permitted; the first display interface includes the designated control; and in the first display interface, the virtual object controlled by the terminal is located at a first type position. After the trigger operation on the designated control is received, the virtual object is switched from the first type position to a second type position, and the application program displays the second display interface, in which the virtual object is located at the target position; the target position is the position, among the at least two second type positions, that corresponds to the target view angle direction at the time the trigger operation is received, the target view angle direction being the view angle direction in which the virtual object is observed through the camera model.
In summary, in this embodiment of the application, when the specified operation is performed on the virtual object in the virtual scene, the target position is determined from the second type positions according to the obtained target view angle direction, and the virtual object is controlled to switch from a first type position in the virtual vehicle to the target position among the second type positions, which improves the efficiency and the flexibility of position switching of the virtual object in the virtual vehicle.
Referring to fig. 15, a flowchart of a method for controlling a virtual object in a virtual scene according to an exemplary embodiment of the present application is shown. As shown in fig. 15, a terminal (such as the terminal shown in fig. 1) running an application corresponding to the virtual scene may control the position switching of the virtual object in the virtual scene in the virtual carrier by performing the following steps.
Step 1501, displaying a display interface of an application, where the display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where a specified action is prohibited from being executed, and the second type position is a position where the specified action is permitted to be executed.
Step 1502, when a specified operation is received and the virtual object controlled by the terminal is located at the first type position, acquiring a target view direction, where the target view direction is a view direction in which the virtual object is observed through the camera model.
For the implementation of step 1501 and step 1502, refer to step 601 and step 602; details are not repeated here.
In step 1503, the relative directions of the at least two second-type locations with respect to the center point of the virtual vehicle are obtained.
For example, please refer to fig. 16, which shows a schematic diagram of obtaining a second type position according to an embodiment of the present application. As shown in fig. 16, position No. 1 is the driving position, positions No. 2, No. 3, and No. 4 are first type positions, and positions No. 5, No. 6, No. 7, and No. 8 are second type positions. Horizontal direction vectors a5, a6, a7, and a8 are drawn from the center point of the virtual vehicle to the second type positions, that is, positions No. 5, No. 6, No. 7, and No. 8, respectively.
Step 1504, determining the target position from the at least two second-type positions according to the target viewing direction and the relative directions of the at least two second-type positions with respect to the center point of the virtual vehicle.
In one possible implementation, step 1504 may include the steps of:
1. a first component of the target view direction on a horizontal plane is acquired.
2. Acquire at least two second components, where the at least two second components are the components, on the horizontal plane, of the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle.
3. And determining a second component with the smallest included angle with the first component as the target component.
4. And determining the second type position corresponding to the target component as the target position.
For example, as shown in fig. 16, the first component of the target view angle direction on the horizontal plane is direction vector b. The at least two second components may be the horizontal direction vectors a5, a6, a7, and a8 drawn from the center point of the virtual vehicle to the second type positions, that is, positions No. 5, No. 6, No. 7, and No. 8. Since the second component having the smallest included angle with the first component b is the horizontal direction vector a6, the horizontal direction vector a6 is determined as the target component, and position No. 6 corresponding to the target component is determined as the target position.
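Steps 1 to 4 amount to a minimum-angle search over the horizontal seat vectors. A Python sketch, under the assumption that the a5..a8 vectors of fig. 16 are available as a mapping from seat id to horizontal vector:

```python
import math
from typing import Dict, Optional, Sequence, Tuple

def pick_by_smallest_angle(view_dir: Sequence[float],
                           seat_vectors: Dict[int, Tuple[float, float]]
                           ) -> Optional[int]:
    """Return the seat whose direction from the vehicle center makes the
    smallest included angle with the view direction on the horizontal plane."""
    bx, by = view_dir[0], view_dir[1]   # first component (vector b)
    def included_angle(item):
        ax, ay = item[1]
        # angle in [0, pi] between the seat vector and the view vector
        return math.atan2(abs(ax * by - ay * bx), ax * bx + ay * by)
    if not seat_vectors:
        return None
    seat_id, _ = min(seat_vectors.items(), key=included_angle)
    return seat_id
```

In the fig. 16 example, a6 yields the smallest angle to b, so position No. 6 is returned as the target position.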
In step 1505, the virtual object is controlled to switch to the target position.
For the implementation of this step, refer to step 605; details are not repeated here.
In summary, in this embodiment of the application, when the specified operation is performed on the virtual object in the virtual scene, the target position is determined from the second type positions according to the obtained target view angle direction, and the virtual object is controlled to switch from a first type position in the virtual vehicle to the target position among the second type positions, which improves the efficiency and the flexibility of position switching of the virtual object in the virtual vehicle.
Fig. 17 is a block diagram illustrating the structure of a virtual object control apparatus in a virtual scene according to an exemplary embodiment. The virtual object control apparatus in the virtual scene can be used in a terminal to execute all or part of the steps executed by the terminal in the methods shown in the embodiments corresponding to fig. 3, 6, or 15. The virtual object control apparatus in the virtual scene may include:
an interface display module 1701, configured to display a display interface of an application, where the display interface includes a scene picture of a virtual scene, the virtual scene includes a virtual vehicle, the virtual vehicle includes at least one first type position and at least two second type positions, the first type position is a position where a specified action is prohibited from being executed, and the second type position is a position where the specified action is permitted to be executed;
a view angle obtaining module 1702, configured to, when a specified operation is received and the virtual object controlled by the terminal is located at the first type position, obtain a target view angle direction, where the target view angle direction is a view angle direction in which the virtual object is observed through a camera model;
a position determining module 1703, configured to determine a target position from the at least two second type positions according to the target view angle direction;
a position switching module 1704, configured to control the virtual object to be switched to the target position.
Optionally, the position determining module 1703 includes:
a vehicle orientation acquiring submodule, configured to acquire the orientation of the virtual vehicle;
a target position determining submodule, configured to determine the target position from the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle.
Optionally, the target position determining submodule includes:
a priority acquiring unit, configured to acquire the priorities of the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle;
an idle position determining unit, configured to determine the idle positions among the at least two second type positions;
a first target position determining unit, configured to determine the position with the highest priority among the idle positions as the target position.
Optionally, the at least two second type positions include a first position located at the front left of the virtual vehicle, a second position located at the rear left of the virtual vehicle, a third position located at the front right of the virtual vehicle, and a fourth position located at the rear right of the virtual vehicle;
a priority acquiring unit, specifically configured to:
if the horizontal component of the target view angle direction is offset to the left relative to the horizontal component of the orientation of the virtual vehicle, determining the priorities of the at least two second type positions from high to low as the first position, the second position, the third position, and the fourth position, respectively;
if the horizontal component of the target view angle direction is offset to the right relative to the horizontal component of the orientation of the virtual vehicle, determining the priorities of the at least two second type positions from high to low as the third position, the fourth position, the first position, and the second position, respectively.
Optionally, the at least two second type positions include a first position located at the front left of the virtual vehicle, a second position located at the rear left of the virtual vehicle, a third position located at the front right of the virtual vehicle, and a fourth position located at the rear right of the virtual vehicle;
a priority acquiring unit, specifically configured to:
if the horizontal component of the target view angle direction is shifted to the left relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions from high to low are the first position, the second position, the third position and the fourth position respectively;
if the horizontal component of the target view angle direction is offset to the left relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is greater than 90 degrees, determining that the priorities of the at least two second type positions from high to low are the second position, the first position, the fourth position and the third position respectively;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions are the third position, the fourth position, the first position and the second position from high to low respectively;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is greater than 90 degrees, it is determined that the priorities of the at least two second type positions from high to low are the fourth position, the third position, the second position and the first position respectively.
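By way of illustration only (not part of the disclosed embodiments), this four-case priority rule may be sketched in Python as follows; reading the left/right offset from the sign of a 2D cross product, and the within/beyond-90-degree case from the sign of a dot product, are assumed conventions, and the names seat_priority and pick_idle_seat and the seat numbering 1 to 4 are hypothetical:

```python
import math

def horizontal(v):
    """Horizontal (x, z) component of a 3D direction, y being the vertical axis."""
    x, _, z = v
    length = math.hypot(x, z)
    return (x / length, z / length) if length else (0.0, 0.0)

def seat_priority(view_dir, facing):
    """Order the four second type seats (1 = front left, 2 = rear left,
    3 = front right, 4 = rear right) by the four-case rule above."""
    v, f = horizontal(view_dir), horizontal(facing)
    # 2D cross product sign: > 0 means the view is offset to the left of the
    # vehicle's facing direction (assumed right-handed x-z convention).
    left = (f[0] * v[1] - f[1] * v[0]) > 0
    # Dot product sign tells whether the included angle is <= 90 degrees.
    front = (f[0] * v[0] + f[1] * v[1]) >= 0
    if left and front:
        return [1, 2, 3, 4]
    if left:
        return [2, 1, 4, 3]
    if front:
        return [3, 4, 1, 2]
    return [4, 3, 2, 1]

def pick_idle_seat(view_dir, facing, idle_seats):
    """Highest-priority seat among the currently idle second type seats."""
    for seat in seat_priority(view_dir, facing):
        if seat in idle_seats:
            return seat
    return None  # no idle second type seat

# Example: view offset to the left, included angle <= 90 degrees, seat 1 occupied.
print(pick_idle_seat((-0.3, 0.0, 1.0), (0.0, 0.0, 1.0), {2, 3, 4}))  # -> 2
```

The simpler two-case rule described in the preceding optional implementation corresponds to this sketch with the front/rear test fixed to the front branch.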
Optionally, the position determining module 1703 includes:
a direction acquiring submodule, configured to acquire the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle;
a target position determining submodule, configured to determine the target position from the at least two second type positions according to the target view angle direction and the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle.
Optionally, the target position determining submodule includes:
a first component acquiring unit, configured to acquire a first component of the target view angle direction on a horizontal plane;
a second component acquiring unit, configured to acquire at least two second components, where the at least two second components are components, on the horizontal plane, of the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle;
a target component determining unit, configured to determine the second component having the smallest included angle with the first component as a target component;
a second target position determining unit, configured to determine the second type position corresponding to the target component as the target position.
Optionally, the position switching module 1704 includes:
a target position switching submodule, configured to control the virtual object to switch to the target position when the target position is idle.
Optionally, the display interface includes a specified control;
the specified operation is a trigger operation on the specified control, and the specified control is used for triggering the virtual object to execute the specified action.
Optionally, the specified action is an attack action, or the specified action is a preparatory action of the attack action.
In summary, in the embodiments of the present application, when the specified operation is performed on the virtual object in the virtual scene, the target position is determined from the second type positions according to the acquired target view angle direction, and the virtual object is controlled to switch from the first type position in the virtual vehicle to the target position among the second type positions. This improves both the efficiency and the flexibility of switching the position of the virtual object in the virtual vehicle.
Fig. 18 is a block diagram illustrating the structure of a computer device 1800 according to an exemplary embodiment. The computer device 1800 may be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The computer device 1800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, computer device 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1802 may include one or more computer-readable storage media, which may be non-transitory. The memory 1802 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1802 is used to store at least one instruction, which is executed by the processor 1801 to implement the virtual object control method in a virtual scene provided by the method embodiments herein.
In some embodiments, computer device 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, touch screen display 1805, camera 1806, audio circuitry 1807, positioning components 1808, and power supply 1809.
The peripheral interface 1803 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1801 and the memory 1802. In some embodiments, the processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1801, the memory 1802, and the peripheral device interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1804 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1804 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, the display screen 1805 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1801 as a control signal for processing. At this point, the display screen 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1805, disposed on the front panel of the computer device 1800; in other embodiments, there may be at least two display screens 1805, respectively disposed on different surfaces of the computer device 1800 or in a foldable design; in still other embodiments, the display screen 1805 may be a flexible display screen disposed on a curved surface or a folded surface of the computer device 1800. The display screen 1805 may even be arranged in a non-rectangular irregular shape, that is, an irregularly-shaped screen. The display screen 1805 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing or inputting the electric signals to the radio frequency circuit 1804 to achieve voice communication. The microphones may be multiple and placed at different locations on the computer device 1800 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1801 or the radio frequency circuitry 1804 to sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1807 may also include a headphone jack.
The positioning component 1808 is used to locate the current geographic location of the computer device 1800 for navigation or LBS (Location Based Service). The positioning component 1808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS (Global Navigation Satellite System) of Russia, or the Galileo system of Europe.
The power supply 1809 is used to supply power to the various components in the computer device 1800. The power supply 1809 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, computer device 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
The acceleration sensor 1811 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the computer apparatus 1800. For example, the acceleration sensor 1811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1801 may control the touch display 1805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1811. The acceleration sensor 1811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1812 may detect a body direction and a rotation angle of the computer device 1800, and the gyro sensor 1812 may cooperate with the acceleration sensor 1811 to collect a 3D motion of the user on the computer device 1800. The processor 1801 may implement the following functions according to the data collected by the gyro sensor 1812: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1813 may be disposed on the side bezel of computer device 1800 and/or on the lower layer of touch display 1805. When the pressure sensor 1813 is disposed on a side frame of the computer device 1800, a user's holding signal to the computer device 1800 can be detected, and the processor 1801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1813. When the pressure sensor 1813 is disposed at the lower layer of the touch display screen 1805, the processor 1801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1814 is used to collect the fingerprint of the user, and the processor 1801 identifies the user according to the fingerprint collected by the fingerprint sensor 1814, or the fingerprint sensor 1814 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1801 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1814 may be disposed on the front, back, or side of the computer device 1800. When a physical key or vendor Logo is provided on the computer device 1800, the fingerprint sensor 1814 may be integrated with the physical key or vendor Logo.
The optical sensor 1815 is used to collect the ambient light intensity. In one embodiment, the processor 1801 may control the display brightness of the touch display 1805 based on the ambient light intensity collected by the optical sensor 1815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1805 is increased; when the ambient light intensity is low, the display brightness of the touch display 1805 is turned down. In another embodiment, the processor 1801 may also dynamically adjust the shooting parameters of the camera assembly 1806 according to the intensity of the ambient light collected by the optical sensor 1815.
The proximity sensor 1816, also known as a distance sensor, is typically disposed on the front panel of the computer device 1800. The proximity sensor 1816 is used to collect the distance between the user and the front of the computer device 1800. In one embodiment, when the proximity sensor 1816 detects that the distance between the user and the front of the computer device 1800 gradually decreases, the processor 1801 controls the touch display screen 1805 to switch from the screen-on state to the screen-off state; when the proximity sensor 1816 detects that the distance between the user and the front of the computer device 1800 gradually increases, the processor 1801 controls the touch display screen 1805 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration illustrated in FIG. 18 is not intended to be limiting with respect to the computer device 1800 and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example, a memory including at least one instruction, at least one program, a code set, or an instruction set, executable by a processor to perform all or part of the steps of the methods illustrated in the embodiments corresponding to fig. 3, 4, 6, or 15. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (14)

1. A method for controlling a virtual object in a virtual scene, the method being performed by a terminal, the method comprising:
displaying a display interface of an application program, wherein the display interface comprises a scene picture of a virtual scene, the virtual scene comprises a virtual vehicle, the virtual vehicle comprises at least one first type position and at least two second type positions, the first type position is a position where a specified action is prohibited from being executed, and the second type position is a position where the specified action is permitted to be executed;
when a specified operation is received and the virtual object controlled by the terminal is located at the first type position, acquiring a target view angle direction, wherein the target view angle direction is the view angle direction in which the virtual object is observed through a camera model; the display interface comprises a specified control; the specified operation is a triggering operation on the specified control, and the specified control is used for triggering the virtual object to execute the specified action;
determining a target position from the at least two second type positions according to the target view angle direction, wherein the target position is the position, among the at least two second type positions, corresponding to the target view angle direction when the specified operation is received;
and controlling the virtual object to directly switch to the target position and execute the specified action.
2. The method of claim 1, wherein the determining a target position from the at least two second type positions according to the target view angle direction comprises:
acquiring the orientation of the virtual vehicle;
determining the target position from the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle.
3. The method of claim 2, wherein the determining the target position from the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle comprises:
acquiring the priorities of the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle;
determining the idle positions among the at least two second type positions;
and determining the position with the highest priority among the idle positions as the target position.
4. The method of claim 3, wherein the at least two second type positions comprise a first position located at the front left of the virtual vehicle, a second position located at the rear left of the virtual vehicle, a third position located at the front right of the virtual vehicle, and a fourth position located at the rear right of the virtual vehicle;
the acquiring the priorities of the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle comprises:
if the horizontal component of the target view angle direction is offset to the left relative to the horizontal component of the orientation of the virtual vehicle, determining the priorities of the at least two second type positions from high to low as the first position, the second position, the third position, and the fourth position, respectively;
if the horizontal component of the target view angle direction is offset to the right relative to the horizontal component of the orientation of the virtual vehicle, determining the priorities of the at least two second type positions from high to low as the third position, the fourth position, the first position, and the second position, respectively.
5. The method of claim 3, wherein the at least two second type positions comprise a first position located at the front left of the virtual vehicle, a second position located at the rear left of the virtual vehicle, a third position located at the front right of the virtual vehicle, and a fourth position located at the rear right of the virtual vehicle;
the acquiring the priorities of the at least two second type positions according to the target view angle direction and the orientation of the virtual vehicle comprises:
if the horizontal component of the target view angle direction is shifted to the left relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions from high to low are the first position, the second position, the third position and the fourth position respectively;
if the horizontal component of the target view angle direction is offset to the left relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is greater than 90 degrees, determining that the priorities of the at least two second type positions from high to low are the second position, the first position, the fourth position and the third position respectively;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is smaller than or equal to 90 degrees, determining that the priorities of the at least two second type positions are the third position, the fourth position, the first position and the second position from high to low respectively;
if the horizontal component of the target view angle direction is shifted to the right relative to the horizontal component of the orientation of the virtual vehicle, and an included angle between the horizontal component of the target view angle direction and the horizontal component of the orientation of the virtual vehicle is greater than 90 degrees, it is determined that the priorities of the at least two second type positions from high to low are the fourth position, the third position, the second position and the first position respectively.
6. The method of claim 1, wherein the determining a target position from the at least two second type positions according to the target view angle direction comprises:
acquiring the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle;
determining the target position from the at least two second type positions according to the target view angle direction and the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle.
7. The method of claim 6, wherein the determining the target position from the at least two second type positions according to the target view angle direction and the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle comprises:
acquiring a first component of the target view angle direction on a horizontal plane;
acquiring at least two second components, wherein the at least two second components are components, on the horizontal plane, of the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle;
determining the second component having the smallest included angle with the first component as a target component;
and determining the second type position corresponding to the target component as the target position.
8. The method of claim 7, wherein the controlling the virtual object to switch to the target position comprises:
when the target position is idle, controlling the virtual object to switch to the target position.
9. The method according to any one of claims 1 to 8,
the specified action is an attack action, or the specified action is a preparatory action of the attack action.
10. A method for controlling a virtual object in a virtual scene, the method being performed by a terminal, the method comprising:
displaying a first display interface of an application program, wherein the first display interface comprises a scene picture of a virtual scene, the virtual scene comprises a virtual vehicle, the virtual vehicle comprises at least one first type position and at least two second type positions, the first type position is a position where a specified action is prohibited from being executed, and the second type position is a position where the specified action is permitted to be executed; the first display interface comprises a specified control; and in the first display interface, the virtual object controlled by the terminal is located at the first type position;
receiving a triggering operation on the specified control, wherein the specified control is a control for triggering the virtual object to execute the specified action;
displaying a second display interface of the application program, wherein the second display interface is displayed after the virtual object is directly switched from the first type position to a target position and the specified action is executed; the target position is the position, among the at least two second type positions, corresponding to the target view angle direction when the triggering operation is received; and the target view angle direction is the view angle direction in which the virtual object is observed through a camera model.
11. The method of claim 10, wherein the target position is the second type position corresponding to a target component;
the target component is the second component, of at least two second components, whose included angle with the first component is smaller than an included angle threshold; the at least two second components are components, on a horizontal plane, of the relative directions of the at least two second type positions with respect to the center point of the virtual vehicle; and the first component is a component of the target view angle direction on the horizontal plane.
12. An apparatus for controlling a virtual object in a virtual scene, the apparatus being used in a terminal, the apparatus comprising:
an interface display module, configured to display a display interface of an application program, wherein the display interface comprises a scene picture of a virtual scene, the virtual scene comprises a virtual vehicle, the virtual vehicle comprises at least one first type position and at least two second type positions, the first type position is a position where a specified action is prohibited from being executed, and the second type position is a position where the specified action is permitted to be executed;
a view angle obtaining module, configured to obtain a target view angle direction when a specified operation is received and a virtual object controlled by the terminal is located at the first type position, where the target view angle direction is a view angle direction in which the virtual object is observed through a camera model; the display interface comprises a designated control; the specified operation is a triggering operation of the specified control, and the specified control is used for triggering the virtual object to execute the specified action;
a position determining module, configured to determine a target position from the at least two second type positions according to the target view angle direction, wherein the target position is the position, among the at least two second type positions, corresponding to the target view angle direction when the specified operation is received;
and a position switching module, configured to control the virtual object to directly switch to the target position and execute the specified action.
13. A computer device comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, a code set, or an instruction set, said at least one instruction, said at least one program, said code set or instruction set being loaded and executed by said processor to implement the virtual object control method in a virtual scene as claimed in any one of claims 1 to 9.
14. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual object control method in a virtual scene as claimed in any one of claims 1 to 9.
CN201910862693.8A 2019-09-12 2019-09-12 Virtual object control method in virtual scene, computer device and storage medium Active CN110597389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910862693.8A CN110597389B (en) 2019-09-12 2019-09-12 Virtual object control method in virtual scene, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN110597389A CN110597389A (en) 2019-12-20
CN110597389B (en) 2021-04-09

Family

ID=68859052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910862693.8A Active CN110597389B (en) 2019-09-12 2019-09-12 Virtual object control method in virtual scene, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN110597389B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815759B (en) * 2020-06-18 2021-04-02 广州建通测绘地理信息技术股份有限公司 Measurable live-action picture generation method and device, and computer equipment
CN112076470B (en) * 2020-08-26 2021-05-28 北京完美赤金科技有限公司 Virtual object display method, device and equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3927821B2 (en) * 2002-01-25 2007-06-13 株式会社バンダイナムコゲームス PROGRAM, INFORMATION STORAGE MEDIUM, AND GAME DEVICE
US9421457B2 (en) * 2013-01-25 2016-08-23 John Faratzis Sports entertainment display surface
CN106178504B (en) * 2016-06-27 2019-07-05 网易(杭州)网络有限公司 Virtual objects motion control method and device
CN109999497B (en) * 2019-04-30 2021-01-15 腾讯科技(深圳)有限公司 Control method and device of virtual object, storage medium and electronic device

Also Published As

Publication number Publication date
CN110597389A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
US11703993B2 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
US11221726B2 (en) Marker point location display method, electronic device, and computer-readable storage medium
CN108619721B (en) Distance information display method and device in virtual scene and computer equipment
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN108710525B (en) Map display method, device, equipment and storage medium in virtual scene
WO2020043016A1 (en) Virtual carrier control method in virtual scene, computer device and storage medium
CN110917616B (en) Orientation prompting method, device, equipment and storage medium in virtual scene
US11954200B2 (en) Control information processing method and apparatus, electronic device, and storage medium
CN109821237B (en) Method, device and equipment for rotating visual angle and storage medium
CN110738738B (en) Virtual object marking method, equipment and storage medium in three-dimensional virtual scene
CN110743168B (en) Virtual object control method in virtual scene, computer equipment and storage medium
CN111589125A (en) Virtual object control method and device, computer equipment and storage medium
CN110585704B (en) Object prompting method, device, equipment and storage medium in virtual scene
US20210142516A1 (en) Method and electronic device for virtual interaction
US20220291791A1 (en) Method and apparatus for determining selected target, device, and storage medium
CN110533756B (en) Method, device, equipment and storage medium for setting attaching type ornament
CN110597389B (en) Virtual object control method in virtual scene, computer device and storage medium
CN112130945A (en) Gift presenting method, device, equipment and storage medium
WO2022237076A1 (en) Method and apparatus for controlling avatar, and device and computer-readable storage medium
CN111013137A (en) Movement control method, device, equipment and storage medium in virtual scene
CN113032590B (en) Special effect display method, device, computer equipment and computer readable storage medium
CN112755517A (en) Virtual object control method, device, terminal and storage medium
CN111369684B (en) Target tracking method, device, equipment and storage medium
CN110152309B (en) Voice communication method, device, electronic equipment and storage medium
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant