CN112717409B - Virtual vehicle control method, device, computer equipment and storage medium


Info

Publication number
CN112717409B
CN112717409B
Authority
CN
China
Prior art keywords
selection
virtual
virtual vehicle
control
microphone sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110090249.6A
Other languages
Chinese (zh)
Other versions
CN112717409A (en)
Inventor
朱倩
汪涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110090249.6A priority Critical patent/CN112717409B/en
Publication of CN112717409A publication Critical patent/CN112717409A/en
Application granted granted Critical
Publication of CN112717409B publication Critical patent/CN112717409B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/215 Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/803 Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a virtual vehicle control method, a virtual vehicle control device, computer equipment, and a storage medium, in the technical field of virtual scenes. The method includes: displaying a virtual scene picture, where the virtual scene picture includes a first virtual vehicle, a first selection control is superimposed on the virtual scene picture, and the first selection control contains at least two selection areas; in response to receiving a specified operation on a microphone sensor, determining a selection area within the first selection control corresponding to the specified operation; and controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation. With this method, the user can control the first virtual vehicle through the microphone sensor, which expands the control modes of the virtual vehicle and thereby improves the control effect on the virtual vehicle.

Description

Virtual vehicle control method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of virtual scene technologies, and in particular, to a virtual vehicle control method, apparatus, computer device, and storage medium.
Background
Currently, in game applications that involve controlling a virtual vehicle, such as racing games, a user may control the virtual vehicle through virtual controls in the virtual scene interface.
In the related art, a direction operation control for controlling the driving direction of the virtual vehicle is superimposed on the virtual scene picture of such a game application, and the user controls the virtual vehicle through specified operations on the direction operation control. Acceleration, brake, and prop-use controls for the virtual vehicle may also be superimposed on the virtual scene picture, allowing the user to select operation functions of the virtual vehicle.
However, in the related art, the user can generally control the virtual vehicle only through touch operations on the controls superimposed on the virtual scene picture, so the control modes of the virtual vehicle are limited, which restricts the control effect on the virtual vehicle.
Disclosure of Invention
The embodiments of the present application provide a virtual vehicle control method, apparatus, computer device, and storage medium, which can improve the control effect on a virtual vehicle. The technical solutions are as follows:
in one aspect, a virtual vehicle control method is provided, the method including:
displaying a virtual scene picture, where the virtual scene picture includes a first virtual vehicle, a first selection control is superimposed on the virtual scene picture, and the first selection control contains at least two selection areas;
in response to receiving a specified operation on a microphone sensor, determining a selection area within the first selection control corresponding to the specified operation; and
controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
In yet another aspect, there is provided a virtual vehicle control apparatus including:
a virtual scene picture display module, configured to display a virtual scene picture, where the virtual scene picture includes a first virtual vehicle, a first selection control is superimposed on the virtual scene picture, and the first selection control contains at least two selection areas;
a selection area determining module, configured to determine, in response to receiving a specified operation on a microphone sensor, a selection area within the first selection control corresponding to the specified operation; and
a virtual vehicle control module, configured to control the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
In one possible implementation, the specified operation is a blowing operation performed on the microphone sensor by the user of the user terminal.
In one possible implementation manner, the selection area determining module includes:
a first signal generation sub-module, configured to generate a first signal in response to receiving the blowing operation on the microphone sensor; and
a selection area determination sub-module, configured to determine, based on the first signal, a selection area within the first selection control corresponding to the blowing operation.
In one possible implementation, the first signal is an electrical signal generated by the microphone sensor upon receiving the blowing operation; the signal intensity of the first signal is positively correlated with the sound pressure signal, corresponding to the blowing operation, received by the microphone sensor;
the selection area determination submodule is further configured to:
determine, within the first selection control, a selection area corresponding to the blowing operation in response to the signal intensity of the first signal being greater than an intensity threshold.
In one possible implementation, the selection area determining submodule includes:
a first time information determining unit, configured to obtain first time information corresponding to the blowing operation in response to the signal intensity of the first signal being greater than the intensity threshold, where the first time information indicates the moment at which the blowing operation on the microphone sensor is received; and
a selection area determining unit, configured to determine the selection area corresponding to the blowing operation based on the first time information.
In one possible implementation, the first selection control includes a region selection pointer, and the region pointed to by the region selection pointer changes over time.
In one possible implementation manner, the selection area determining module further includes:
a pointing region determining sub-module, configured to determine, based on the first time information, the region pointed to by the region selection pointer; and
a pointer region determining sub-module, configured to determine the region pointed to by the region selection pointer as the selection area corresponding to the specified operation.
In one possible implementation, the first operation is an acceleration operation of the first virtual vehicle;
the virtual vehicle control module includes:
an acceleration duration determining submodule, configured to determine an acceleration duration of the first virtual vehicle based on a selection region corresponding to the specified operation;
and the acceleration operation execution sub-module is used for controlling the first virtual vehicle to execute the acceleration operation based on the acceleration duration of the first virtual vehicle.
In one possible implementation, the selection area determining module is configured to determine, within the first selection control, a selection area corresponding to the specified operation in response to receiving the specified operation on the microphone sensor while the first selection control is in an available state.
In one possible implementation, the apparatus further includes:
a time parameter acquisition module, configured to acquire a time parameter corresponding to the virtual scene picture, where the time parameter indicates the running time of the first virtual vehicle in the virtual scene picture; and
a first state determining module, configured to determine that the first selection control is in an available state in response to the time parameter of the virtual scene picture meeting a first time condition.
In one possible implementation, the apparatus further includes:
a virtual position acquisition module, configured to acquire virtual position information of the first virtual vehicle in the virtual scene picture; and
a second state determining module, configured to determine that the first selection control is in an available state in response to the virtual position information meeting a first position condition.
In one possible implementation, the apparatus further includes:
a third state determining module, configured to determine that the first selection control is in an unavailable state in response to the first virtual vehicle executing the first operation.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the virtual vehicle control method described above.
In another aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual vehicle control method described above.
In yet another aspect, a computer program product or a computer program is provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the virtual vehicle control method described above.
The beneficial effects of the technical scheme provided by the embodiment of the application at least comprise:
the user performs a selection operation on the first selection control through a specified operation on the microphone sensor, and the terminal controls the first virtual vehicle to execute the first operation according to the selection area within the first selection control corresponding to the specified operation. With this solution, the user can control the first virtual vehicle through the microphone sensor, which expands the control modes of the virtual vehicle and thereby improves the control effect on the virtual vehicle.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a display interface schematic of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 illustrates a flow chart of a virtual vehicle control method provided by an exemplary embodiment of the present application;
FIG. 4 illustrates a method flow diagram of a virtual vehicle control method illustrated in an exemplary embodiment of the present application;
FIG. 5 shows a schematic diagram of a microphone sensor on a mobile terminal according to an embodiment of the present application;
FIG. 6 shows a schematic diagram of an electret microphone according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a user performing a blowing operation according to an embodiment of the present application;
FIG. 8 illustrates a microphone sensor position presentation schematic diagram according to an embodiment of the present application;
FIG. 9 illustrates a first selection control diagram according to an embodiment of the present application;
FIG. 10 shows a schematic diagram of a game corresponding to a virtual scene picture according to an embodiment of the present application;
FIG. 11 shows a schematic diagram of a game corresponding to a virtual scene picture according to an embodiment of the present application;
FIG. 12 illustrates a schematic diagram of an energy bar control according to an embodiment of the present application;
FIG. 13 illustrates a schematic diagram of acceleration duration selection according to an embodiment of the present application;
FIG. 14 shows a schematic diagram of a terminal rotation operation according to an embodiment of the present application;
FIG. 15 illustrates a directional operation schematic diagram according to an embodiment of the present application;
FIG. 16 illustrates a directional operation diagram according to an embodiment of the present application;
FIG. 17 is a flow chart illustrating a first virtual vehicle acceleration method according to an exemplary embodiment;
FIG. 18 is a block diagram of a virtual vehicle control apparatus according to an exemplary embodiment of the present application;
FIG. 19 is a block diagram of a computer device according to an exemplary embodiment;
FIG. 20 is a block diagram of a computer device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be understood that references herein to "a number" mean one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
For ease of understanding, several terms referred to in this application are explained below.
1) Virtual scene
A virtual scene is a scene that an application program displays (or provides) while running on a terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene. The following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto.
Virtual scenes are typically generated by an application in a computer device such as a terminal and presented based on hardware (such as a screen) in the terminal. The terminal can be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal can be a notebook computer or a stationary personal computer.
2) Virtual object
A virtual object is a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies part of the space in the three-dimensional virtual scene.
3) Virtual vehicle
A virtual vehicle is a vehicle in the virtual environment that performs driving operations according to the user's control of operation controls. The functions of the virtual vehicle may include acceleration, deceleration, braking, reversing, steering, drifting, using props, and the like. These functions can be performed automatically; for example, the virtual vehicle may accelerate or steer automatically. The functions may also be triggered by the user's control of the operation controls; for example, when the user triggers the brake control, the virtual vehicle performs a braking action.
4) Racing car game
A racing game is mainly played in a virtual racing scene, in which a plurality of virtual vehicles race to achieve specified racing goals. In the virtual racing scene, the user can control the virtual vehicle corresponding to the terminal and race against virtual vehicles controlled by other users, or race against AI-controlled virtual vehicles generated by the client program of the racing game.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130. The first terminal 110 and the second terminal 130 are terminal devices with microphone sensors.
The first terminal 110 is installed with and runs an application 111 supporting a virtual environment, and the application 111 may be a multiplayer online competitive program or an offline application. When the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be an RCG (Racing Game), a sandbox game that includes a racing function, or another type of game that includes a racing function. In this embodiment, the application 111 is illustrated as an RCG. The first terminal 110 is a terminal used by the first user 112, who uses the first terminal 110 to control a first virtual vehicle located in the virtual environment; the first virtual vehicle may be referred to as the master virtual object of the first user 112. The activities of the first virtual vehicle include, but are not limited to: acceleration, deceleration, braking, reversing, steering, drifting, and using props. Illustratively, the first virtual vehicle may be a virtual car, or a virtual model with virtual vehicle functionality built from other vehicles (e.g., ships or airplanes); the first virtual vehicle may also be modeled on a real vehicle model that exists in reality.
The second terminal 130 is installed with and runs an application 131 supporting a virtual environment, and the application 131 may be a multiplayer online competitive program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The client may be any of an RCG game program, a sandbox game, or another game program that includes a racing function; in this embodiment, the application 131 is illustrated as an RCG game.
Optionally, the second terminal 130 is a terminal used by the second user 132, who uses the second terminal 130 to control a second virtual vehicle located in the virtual environment to perform driving operations; the second virtual vehicle may be referred to as the master virtual vehicle of the second user 132.
Optionally, a third virtual vehicle may also exist in the virtual environment, where the third virtual vehicle is controlled by the AI corresponding to the application 131, and the third virtual vehicle may be referred to as an AI-controlled virtual vehicle.
Optionally, the first virtual vehicle, the second virtual vehicle, and the third virtual vehicle are in the same virtual world. Optionally, the first virtual vehicle and the second virtual vehicle may belong to the same camp, the same team, or the same organization, have a friend relationship, or have temporary communication rights. Alternatively, the first virtual vehicle and the second virtual vehicle may belong to different camps, different teams, or different organizations, or have a hostile relationship.
Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may refer broadly to one of a plurality of terminals, and the second terminal 130 may refer broadly to another of the plurality of terminals; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and the device types include: at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in FIG. 1, but in different embodiments there are multiple other terminals that can access the server 120. Optionally, there are one or more terminals corresponding to developers, on which a development and editing platform for the application supporting the virtual environment is installed; a developer can edit and update the application on such a terminal and transmit the updated application installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the application installation package from the server 120 to update the application.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster formed by a plurality of servers, a cloud computing platform and a virtualization center. The server 120 is used to provide background services for applications supporting a three-dimensional virtual environment. Optionally, the server 120 takes on primary computing work and the terminal takes on secondary computing work; alternatively, the server 120 takes on secondary computing work and the terminal takes on primary computing work; alternatively, a distributed computing architecture is used for collaborative computing between the server 120 and the terminals.
In one illustrative example, server 120 includes memory 121, processor 122, user account database 123, athletic service module 124, and user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 120, and process data in the user account database 123 and the athletic service module 124; the user account database 123 is configured to store data of user accounts used by the first terminal 110, the second terminal 130, and other terminals, such as an avatar of the user account, a nickname of the user account, and a combat index of the user account, where the user account is located; the athletic service module 124 is used for providing a plurality of athletic rooms for users to perform athletic activities, such as 1V1 athletic activity, 3V3 athletic activity, 5V5 athletic activity, etc.; the user-oriented I/O interface 125 is used to establish communication exchanges of data with the first terminal 110 and/or the second terminal 130 via a wireless network or a wired network.
The virtual scene may be a three-dimensional virtual scene or a two-dimensional virtual scene. Taking a three-dimensional virtual scene as an example, please refer to FIG. 2, which illustrates a schematic diagram of a display interface of a virtual scene provided in an exemplary embodiment of the present application. As shown in FIG. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes the currently controlled virtual vehicle 210, an environment picture 220 of the three-dimensional virtual scene, and a virtual vehicle 240. The virtual vehicle 240 may be a virtual object controlled by a user of another terminal, or a virtual object controlled by the application program.
In FIG. 2, the currently controlled virtual vehicle 210 and the virtual vehicle 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture of the three-dimensional virtual scene displayed in the scene screen 200 is observed from the third-person perspective corresponding to the currently controlled virtual vehicle 210. The third-person perspective corresponding to the virtual vehicle 210 is the view observed from a virtual camera disposed above and behind the virtual vehicle. As illustrated in FIG. 2, the environment picture 220 of the three-dimensional virtual scene observed from the third-person perspective of the currently controlled virtual vehicle 210 includes a road 224, a sky 225, a hill 221, and a factory building 222.
The currently controlled virtual vehicle 210 may perform operations such as steering, accelerating, and drifting under the control of the user, and virtual vehicles in the virtual scene may display different three-dimensional models under the control of the user. For example, if the screen of the terminal supports touch operations and the scene picture 200 of the virtual scene includes a virtual control, then when the user touches the virtual control, the currently controlled virtual vehicle 210 may perform a specified operation (for example, a deformation operation) in the virtual scene and display the corresponding three-dimensional model.
FIG. 3 shows a flowchart of a virtual vehicle control method according to an exemplary embodiment of the present application. The virtual vehicle control method is performed by a user terminal equipped with a microphone sensor. As shown in FIG. 3, the virtual vehicle control method includes:
step 301, displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; the virtual scene picture is overlapped with a first selection control; at least two selection areas are contained within the first selection control.
In one possible implementation, the virtual scene picture is a picture observed from the third-person perspective of the first virtual vehicle. The third-person perspective of the first virtual vehicle is the perspective of a virtual camera disposed above and behind the first virtual vehicle, and the virtual scene picture observed from this perspective is the picture captured by that virtual camera.
In another possible implementation, the virtual scene picture is a picture observed from the first-person perspective of the first virtual vehicle. The first-person perspective of the first virtual vehicle is the perspective of a virtual camera disposed at the driver position of the first virtual vehicle, and the virtual scene picture observed from this perspective is the picture captured by that virtual camera.
In one possible implementation, a perspective switch control is superimposed on the virtual scene picture, and the virtual scene picture can be switched between the first-person perspective and the third-person perspective of the first virtual vehicle in response to the user's specified operation on the perspective switch control.
For example, when the virtual scene picture displayed by the terminal corresponds to the first-person perspective of the first virtual vehicle, the terminal switches it to the picture corresponding to the third-person perspective in response to the user's specified operation on the perspective switch control; when the displayed picture corresponds to the third-person perspective, the terminal switches it to the picture corresponding to the first-person perspective in response to the specified operation.
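As a concrete illustration of the perspective switching just described, the following Kotlin sketch toggles between the two perspectives and places the virtual camera accordingly; the type names and offset values are illustrative assumptions, not the application's actual API.

```kotlin
// Minimal sketch of the perspective switch control's behavior. The
// names (Perspective, cameraOffset) and the offsets are assumptions.
enum class Perspective { FIRST_PERSON, THIRD_PERSON }

data class Vec3(val x: Double, val y: Double, val z: Double)

// Each specified operation on the switch control flips the perspective.
fun toggle(p: Perspective): Perspective =
    if (p == Perspective.FIRST_PERSON) Perspective.THIRD_PERSON
    else Perspective.FIRST_PERSON

// Camera placement relative to the vehicle: at the driver position for
// first person, above and behind the vehicle for third person.
fun cameraOffset(p: Perspective): Vec3 = when (p) {
    Perspective.FIRST_PERSON -> Vec3(0.0, 1.2, 0.3)   // driver seat
    Perspective.THIRD_PERSON -> Vec3(0.0, 3.0, -6.0)  // rear upper side
}
```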
Step 302, in response to receiving a specified operation on the microphone sensor, determining a selection area corresponding to the specified operation within the first selection control.
In one possible implementation, the microphone sensor may be any one of a moving-coil microphone, a ribbon microphone, a condenser microphone, and an electret microphone.
Step 303, controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
In summary, in the solution shown in this embodiment of the present application, the user performs a selection operation on the first selection control through a specified operation on the microphone sensor, and the terminal controls the first virtual vehicle to execute the first operation according to the selection area within the first selection control corresponding to the specified operation. With this solution, the user can control the first virtual vehicle through the microphone sensor, which expands the control modes of the virtual vehicle and thereby improves the control effect on the virtual vehicle.
FIG. 4 illustrates a flowchart of a virtual vehicle control method according to an exemplary embodiment of the present application. The virtual vehicle control method may be performed by a user terminal having a microphone sensor. As shown in FIG. 4, the virtual vehicle control method includes:
Step 401, displaying a virtual scene picture.
In one possible implementation, the virtual scene is a virtual scene when the first virtual vehicle is racing against other virtual vehicles.
In one possible implementation, the first virtual vehicle is the virtual vehicle corresponding to the user account logged in on the terminal.
When the user account logs in on the terminal, the terminal uploads account information corresponding to the user account to the server corresponding to the virtual scene picture. After receiving the account information, the server delivers virtual vehicle information corresponding to the account information to the terminal, and the terminal displays the virtual vehicle corresponding to the user account according to the virtual vehicle information.
In one possible implementation, the user account may correspond to a plurality of different types of virtual vehicles. In response to receiving the virtual vehicle information delivered by the server, the terminal displays each type of virtual vehicle corresponding to the virtual vehicle information on a vehicle selection interface.
In response to receiving the user's selection operation on the vehicle selection interface, the terminal determines the target virtual vehicle corresponding to the selection operation and determines the target virtual vehicle as the first virtual vehicle corresponding to the user account.
Step 402, in response to receiving a specified operation on the microphone sensor, determining a selection area corresponding to the specified operation within the first selection control.
In one possible implementation, the microphone sensor is a multi-microphone array composed of a plurality of microphone sensors.
When a mobile terminal using a multi-microphone array as its microphone sensor receives an acoustic signal, it can extract speech that is as clean as possible from a noisy speech signal contaminated by noise. Compared with a single-microphone system, which can only acquire the time-frequency characteristics of a signal, a multi-microphone speech enhancement system can also exploit the spatial-domain information of the signal to suppress background noise. Using multi-microphone noise reduction improves the accuracy of the blowing-based control function of the mobile phone and thus achieves a better control effect.
Referring to FIG. 5, which shows a schematic diagram of a microphone sensor on a mobile terminal according to an embodiment of the present application. As shown in FIG. 5, at least one microphone sensor is present on the terminal bottom 501 of the terminal 500. The terminal 500 contains two multi-microphone arrays, namely a multi-microphone array 502 at the left end of the bottom of the terminal and a multi-microphone array 503 at the right end of the bottom of the terminal.
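To illustrate the spatial-domain idea behind the multi-microphone enhancement described above, here is a minimal Kotlin sketch of zero-delay delay-and-sum mixing; the function name is an assumption, and a real system would also estimate inter-microphone delays and apply spectral post-filtering.

```kotlin
// Minimal delay-and-sum sketch with zero assumed delays: averaging N
// channels keeps coherent speech while attenuating uncorrelated
// background noise by roughly sqrt(N). Channel alignment is assumed.
fun delayAndSum(channels: List<DoubleArray>): DoubleArray {
    require(channels.isNotEmpty()) { "need at least one channel" }
    val length = channels.minOf { it.size }
    val mixed = DoubleArray(length)
    for (channel in channels) {
        for (i in 0 until length) mixed[i] += channel[i]
    }
    for (i in 0 until length) mixed[i] /= channels.size
    return mixed
}
```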
In one possible implementation, the specified operation is a blowing operation performed on the microphone sensor by the user of the user terminal.
In one possible implementation, the microphone sensor is an electret microphone.
Referring to FIG. 6, which shows a schematic diagram of an electret microphone according to an embodiment of the present application. As shown in FIG. 6, the electret microphone includes a voltage source 600 that provides a bias voltage; because the electret microphone uses a field-effect transistor for pre-amplification, it requires a certain bias voltage during normal operation. The electret microphone further includes a resistor 601, which prevents an excessive current signal generated after receiving sound pressure from damaging the circuit. The electret microphone further includes an electret film portion 602. When the electret film portion 602 receives a sound pressure signal, the electret film vibrates with the sound pressure signal, so that the capacitance formed by the electret and the intermediate medium changes with the vibration, which in turn causes the voltage across the electret film to change, thereby generating a corresponding electrical signal.
Using the sound-capturing function of the mobile phone's microphone: when a person blows at the microphone, the sound waves vibrate the electret film inside it, so that the capacitance changes and a correspondingly varying tiny voltage is generated. This voltage is then converted to a 0-5V signal, which is received by a data collector and passed through Analog-to-Digital (A/D) conversion to generate a digital audio signal.
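The 0-5V conversion and A/D step can be sketched as a uniform quantizer; the 12-bit resolution and the function name below are assumptions for illustration, not details from the patent.

```kotlin
import kotlin.math.roundToInt

// Uniform quantization of a 0-5 V microphone voltage into a digital
// sample, as a data collector's A/D stage might do (bit depth assumed).
fun quantize(voltage: Double, fullScaleVolts: Double = 5.0, bits: Int = 12): Int {
    val clamped = voltage.coerceIn(0.0, fullScaleVolts)
    val maxLevel = (1 shl bits) - 1                   // 4095 for 12 bits
    return (clamped / fullScaleVolts * maxLevel).roundToInt()
}
```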
In one possible implementation, in response to receiving the blowing operation on the microphone sensor, a first signal is generated; based on the first signal, a selection area corresponding to the blowing operation is determined within the first selection control.
When the microphone sensor receives a sound pressure signal, the electret film in the cylinder vibrates, generating a corresponding electrical signal. When the system detects a change in the electrical signal, it determines a selection area corresponding to the specified operation within the first selection control according to the change of the electrical signal (i.e., the first signal).
In one possible implementation, the first signal is the electrical signal generated by the microphone sensor upon receiving the blowing operation; the signal intensity of the first signal is positively correlated with the sound pressure of the blowing operation received by the microphone sensor; and in response to the signal intensity of the first signal being greater than an intensity threshold, a selection area corresponding to the blowing operation is determined within the first selection control.
Because the microphone sensor generates a corresponding electrical signal whenever it receives sound pressure, a first signal is also generated when the microphone sensor receives external noise or sound pressure from unspecified operations such as the user speaking. The first signal therefore needs to be screened to ensure that only the first signal generated when the user performs the specified operation (i.e., the blowing operation) triggers the first selection control to determine the selection area corresponding to the specified operation.
When the specified operation is a blowing operation aimed at the microphone sensor, the signal intensity of the first signal is positively correlated with the sound pressure of the blowing operation received by the microphone sensor. The sound pressure the microphone sensor receives from a deliberate blowing operation is relatively large, while the sound pressure reaching the microphone sensor from noise or the user's speech is relatively small. An intensity threshold can therefore be set: when the first signal obtained after the microphone sensor receives sound pressure is greater than the intensity threshold, the sound pressure is considered to result from the user's blowing operation, that is, the user has performed the specified operation, and the terminal determines the selection area corresponding to the specified operation within the first selection control.
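A minimal sketch of this screening step, assuming the signal intensity of the first signal is computed as the RMS amplitude of a short frame of digital samples; the frame representation and threshold handling are illustrative.

```kotlin
import kotlin.math.sqrt

// Signal intensity of the first signal, taken here as the RMS of one
// frame of samples; only frames above the intensity threshold are
// treated as a blowing operation rather than noise or speech.
fun signalIntensity(frame: DoubleArray): Double {
    require(frame.isNotEmpty()) { "need at least one sample" }
    var sum = 0.0
    for (sample in frame) sum += sample * sample
    return sqrt(sum / frame.size)
}

fun isBlowingOperation(frame: DoubleArray, intensityThreshold: Double): Boolean =
    signalIntensity(frame) > intensityThreshold
```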
In one possible implementation, before the virtual scene picture is displayed, the application program corresponding to the virtual scene picture acquires device information corresponding to the user terminal, determines the position of the microphone sensor of the user terminal according to the device information, and displays the position of the microphone sensor on the display interface of the user terminal.
The location of the microphone sensor may differ between models of user terminals; for example, the microphone sensor may be located on the left or on the right of the lower part of the terminal. When the position of the microphone sensor deviates from the position at which the user performs the specified operation, the first signal generated when the user performs the blowing operation may not exceed the threshold. Referring to FIG. 7, which shows a schematic diagram of a user performing a blowing operation according to an embodiment of the present application. As shown in FIG. 7, when the user blows at the right area 703 at the bottom of the terminal 700, the sound pressure received by the microphone sensor 701 may be small, that is, the first signal generated by the microphone sensor may be smaller than the threshold, so the application corresponding to the virtual scene picture cannot respond to the user's blowing operation, that is, the selection operation on the first selection control cannot be performed.
Referring to FIG. 8, which shows a schematic diagram of a microphone sensor position display according to an embodiment of the present application. As shown in FIG. 8, before the virtual scene picture is displayed, the terminal 800 may first display a position display interface 801 corresponding to the microphone sensor, on which a model schematic diagram 802 corresponding to the terminal 800 is displayed, with the microphone sensor position 803 of the terminal 800 marked on it. In one possible implementation, in response to the user's specified operation on the position display interface, the position display interface is closed and the virtual scene picture is displayed.
In one possible implementation, in response to the signal intensity of the first signal being greater than the intensity threshold, first time information corresponding to the blowing operation is obtained, where the first time information indicates the moment at which the blowing operation on the microphone sensor is received; and the selection area corresponding to the specified operation is determined based on the first time information.
The first selection control determines a selection area within itself according to the first time information (i.e., the generation time) corresponding to the first signal.
In one possible implementation, the first selection control comprises a region selection pointer; the region pointed to by the region selection pointer varies with time.
Referring to FIG. 9, which shows a schematic diagram of a first selection control according to an embodiment of the present application. As shown in FIG. 9, a first selection control 900 includes a region selection pointer 901 and at least two selection regions 902. The at least two selection regions 902 in FIG. 9 include selection region 1, selection region 2, and selection region 3. The region selection pointer 901 rotates over time around a pointer base 903 serving as the rotation axis, and a pattern of the first operation corresponding to the first selection control may be displayed in the pointer base 903. For example, when the first operation corresponding to the first selection control is an acceleration operation, N2O (nitrous oxide), commonly used as an accelerating combustion improver for racing vehicles, may be displayed in the pointer base 903 to indicate to the user that, according to the first selection control, the first virtual vehicle performs the acceleration operation through N2O.
In one possible implementation, the region pointed to by the region selection pointer is determined based on the first time information, and the region pointed to by the region selection pointer is determined as the selection area corresponding to the specified operation.
After determining the first time information, the terminal determines, according to the first time information, the region pointed to by the region selection pointer at the moment corresponding to the first time information, and determines this region as the selection area selected by the user through the blowing operation.
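Because the pointer rotates at a known rate, the region under it at the first time information can be recovered from elapsed time alone. The following sketch assumes a constant rotation period and equally sized regions (the regions in the figures differ in size, so a real mapping would use per-region angular spans).

```kotlin
// Map the moment of the blowing operation (first time information) to
// the region the rotating pointer was over. Assumes one full rotation
// every periodMs and equally sized regions; returns a 0-based index.
fun selectedRegion(
    blowMomentMs: Long,   // first time information
    spinStartMs: Long,    // when the pointer started rotating
    periodMs: Long,
    regionCount: Int
): Int {
    val phase = ((blowMomentMs - spinStartMs) % periodMs + periodMs) % periodMs
    return ((phase * regionCount) / periodMs).toInt()
}
```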
In one possible implementation, in response to receiving a specified operation on the microphone sensor while the first selection control is in an available state, a selection area corresponding to the specified operation is determined within the first selection control.
When the first selection control is in the available state, it responds to the specified operation on the microphone sensor and selects the area corresponding to the specified operation; when the first selection control is in the unavailable state and a specified operation on the microphone sensor is received, the first selection control does not perform the selection operation corresponding to the specified operation.
In one possible implementation, the microphone sensor is in a disabled state when the first selection control is in an unavailable state.
The microphone sensor is enabled to receive the specified operation performed by the user only when the first selection control is in the available state; when the first selection control is in the unavailable state, the microphone sensor is not enabled, which reduces the resource consumption of the terminal.
In one possible implementation, a time parameter corresponding to the virtual scene picture is acquired, where the time parameter indicates the running time of the first virtual vehicle in the virtual scene picture; and in response to the time parameter of the virtual scene picture meeting a first time condition, the first selection control is determined to be in the available state.
That is, when the time parameter corresponding to the virtual scene picture meets the first time condition, the first selection control is determined to be in the available state. In other words, while the first virtual vehicle is racing and the racing time meets the first time condition, the first selection control is enabled, so that the user can select the selection area corresponding to the specified operation through the first selection control and control the first virtual vehicle to execute the first operation according to the selected selection area.
FIG. 10 is a schematic diagram of a game corresponding to a virtual scene picture according to an embodiment of the present application. As shown in FIG. 10, the first virtual vehicle 1001 travels on a virtual road 1002 in the virtual scene picture, on which a ranking display control 1003 is superimposed; the enlarged "4DDD" represents the rank "4" of the first virtual vehicle "DDD". A minimap display control 1004 is also superimposed on the virtual scene picture and displays a road schematic centered on the location of the first virtual vehicle. A time parameter control 1005 is further superimposed on the virtual scene picture and is used to determine the running time (i.e., the time parameter) of the first virtual vehicle in the virtual scene picture. When the running time meets a specified condition, for example, the running time is greater than 30 s, the first selection control 1006 is determined to be in the available state; at this time, the first selection control may be displayed on the virtual scene picture, and the region selection pointer in the first selection control 1006 rotates over time.
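The first time condition from this example reduces to a simple predicate over the time parameter; the 30-second threshold below mirrors the example above, and the function name is illustrative.

```kotlin
// First time condition from the example above: the first selection
// control becomes available once the vehicle has been running for
// more than 30 seconds.
fun isFirstSelectionControlAvailable(runningTimeSeconds: Double): Boolean =
    runningTimeSeconds > 30.0
```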
In one possible implementation, virtual position information of the first virtual vehicle in the virtual scene picture is acquired, and in response to the virtual position information meeting a first position condition, the first selection control is determined to be in the available state.
When the virtual position information of the first virtual vehicle in the virtual scene picture meets the first position condition, the first selection control is enabled, so that the user can select the selection area corresponding to the specified operation through the first selection control and control the first virtual vehicle to execute the first operation according to the selected selection area.
Please refer to FIG. 11, which shows a schematic diagram of a game corresponding to a virtual scene picture according to an embodiment of the present application. As shown in FIG. 11, the first virtual vehicle 1101 travels on a virtual road 1102 in the virtual scene picture, on which a ranking display control 1103 is superimposed; the enlarged "4DDD" represents the rank "4" of the first virtual vehicle "DDD". A minimap display control 1104 is also superimposed on the virtual scene picture and displays a road schematic centered on the location of the first virtual vehicle. A time parameter control 1105 is further superimposed on the virtual scene picture and is used to determine the running time of the first virtual vehicle. An area determining control 1106 is further superimposed on the virtual scene picture and is located on the virtual road 1102. In response to the first virtual vehicle passing through the area corresponding to the area determining control, that is, the virtual position information of the first virtual vehicle meeting the first position condition, the first selection control is determined to be in the available state; at this time, the first selection control may be displayed on the virtual scene picture, and the region selection pointer in the first selection control 1107 rotates over time.
In one possible implementation, the first selection control is determined to be in an unavailable state in response to the first virtual vehicle performing the first operation.
That is, when the first virtual vehicle performs the first operation, the first selection control is determined to be in the unavailable state, so the first virtual vehicle cannot repeatedly trigger the first selection control.
In one possible implementation, the first selection control is determined to be in an unavailable state for a specified time in response to the first virtual vehicle performing the first operation.
That is, when the first virtual vehicle performs the first operation, the first selection control is determined to be in the unavailable state for the specified time; during this time, the first selection control cannot be triggered again, that is, the user cannot control the first virtual vehicle to repeatedly execute the first operation within the specified time.
In one possible implementation, the first selection control is determined to be in an available state in response to the time elapsed since the first virtual vehicle performed the first operation being greater than the specified time.
That is, when the first virtual vehicle performs the first operation and the first selection control is determined to be in the unavailable state for the specified time, once the travel time of the first virtual vehicle after performing the first operation exceeds the specified time, the first selection control is determined to be in the available state again; at this time, the user can control the first virtual vehicle to perform the first operation again through the specified operation on the microphone sensor.
In one possible implementation, the first selection control is determined to be in an unavailable state within a designated area in response to the first virtual vehicle performing the first operation.
That is, when the first virtual vehicle performs the first operation, the first selection control is determined to be in the unavailable state within the designated area; there, the first selection control cannot be triggered again, that is, the user cannot control the first virtual vehicle to repeatedly execute the first operation within the designated area.
In one possible implementation, the first selection control is determined to be in an available state in response to the first virtual vehicle, after performing the first operation, driving through the designated area while the first selection control is in the unavailable state.
That is, when the first virtual vehicle performs the first operation and the first selection control is determined to be in the unavailable state in the designated area, once the travel distance of the first virtual vehicle after performing the first operation is greater than a threshold (i.e., after the first virtual vehicle has driven through the designated area), the first selection control is determined to be in the available state; at this time, the user can control the first virtual vehicle to perform the first operation again through the specified operation on the microphone sensor.
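The unavailable-state handling for both variants (cooldown by time and cooldown by travel distance) can be sketched as a small state holder; the class shape, field names, and the OR combination of the two variants are assumptions made for brevity.

```kotlin
// Tracks availability of the first selection control after the first
// operation: unavailable until either the specified time has elapsed
// or the vehicle has driven past the designated area (distance).
class SelectionControlState(
    private val cooldownSeconds: Double,
    private val cooldownDistance: Double
) {
    // Large initial values mean the control starts out available.
    private var sinceFirstOpSeconds = Double.MAX_VALUE
    private var sinceFirstOpDistance = Double.MAX_VALUE

    fun onFirstOperationPerformed() {
        sinceFirstOpSeconds = 0.0
        sinceFirstOpDistance = 0.0
    }

    // Called every frame with elapsed time and distance driven.
    fun update(dtSeconds: Double, distanceDriven: Double) {
        sinceFirstOpSeconds += dtSeconds
        sinceFirstOpDistance += distanceDriven
    }

    val available: Boolean
        get() = sinceFirstOpSeconds > cooldownSeconds ||
                sinceFirstOpDistance > cooldownDistance
}
```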
In one possible implementation, an energy bar display control is further overlaid on the virtual scene picture, and the first selection control is determined to be in an available state in response to an energy bar displayed on the energy bar display control meeting a specified condition.
Referring to fig. 12, a schematic diagram of an energy bar control according to an embodiment of the present application is shown. An energy bar control 1202 is superimposed on the virtual scene corresponding to fig. 12. The energy bar control 1202 fills up according to energy storage operations corresponding to the first virtual vehicle, and when the energy bar reaches the threshold specified by the energy bar control, the first selection control 1201 is determined to be in an available state; at this time, the user can control the first virtual vehicle to execute the first operation through the specified operation on the microphone sensor. The energy storage operation may be the driving operation of the first virtual vehicle, that is, energy storage is continuously triggered while the first virtual vehicle drives normally, so that the energy bar increases; or the energy storage operation may be the first virtual vehicle acquiring a virtual object in the virtual scene picture, that is, virtual objects exist on the virtual road in the virtual scene picture, and the energy storage operation is triggered when the first virtual vehicle touches such a virtual object.
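A minimal Kotlin sketch of this gating logic is given below; the fill rate, pickup gain, and threshold are illustrative assumptions.

    // Energy bar that gates the first selection control; numbers are assumed.
    class EnergyBar(private val threshold: Float = 100f) {
        var energy = 0f
            private set

        // Continuous energy storage while driving normally (assumed 2 units/s).
        fun onDriving(deltaSeconds: Float) {
            energy = minOf(threshold, energy + 2f * deltaSeconds)
        }

        // One-shot energy gain when the vehicle touches a virtual object.
        fun onVirtualObjectTouched(gain: Float = 25f) {
            energy = minOf(threshold, energy + gain)
        }

        // The first selection control becomes available once the bar is full.
        val selectionControlAvailable: Boolean
            get() = energy >= threshold
    }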
Step 403, determining an acceleration duration of the first virtual vehicle based on the selection area corresponding to the specified operation.
Wherein the first operation is an acceleration operation of the first virtual vehicle. When the first operation is an acceleration operation of the first virtual vehicle, the selection area corresponding to the designated operation is used for selecting an acceleration duration of the first virtual vehicle.
Referring to fig. 13, a schematic diagram of acceleration duration selection according to an embodiment of the present application is shown. As shown in fig. 13, there are at least two selection areas in the first selection control 1300; in fig. 13, the at least two selection areas include one region 1, two regions 2, two regions 3, and two regions 4, where region 1 may be indicated by a green area, region 2 by a yellow area, region 3 by an orange area, and region 4 by a gray area (the foregoing colors are schematic and are not shown in the drawing). When the user performs the air blowing operation on the microphone sensor and the region selection pointer 1301 selects region 1, the acceleration duration of the acceleration operation of the first virtual vehicle is the longest; when the user performs the air blowing operation and the region selection pointer 1301 selects region 2, the acceleration duration is slightly shorter than that corresponding to region 1; when the user performs the air blowing operation and the region selection pointer 1301 selects region 3, the acceleration duration is slightly shorter than that corresponding to region 2; and when the user performs the air blowing operation and the region selection pointer 1301 selects region 4, the acceleration duration is 0 or the shortest.
In one possible implementation, the acceleration magnitude of the first virtual vehicle is determined based on a selection region corresponding to the specified operation.
That is, the selection area corresponding to the specified operation may determine at least one of an acceleration duration and an acceleration amplitude of the first virtual vehicle, so that the first virtual vehicle realizes the acceleration operation according to the acceleration duration and the acceleration amplitude.
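As a sketch of how the selected region could be translated into these two parameters, consider the following Kotlin mapping; the concrete durations and amplitudes are assumptions, with region 1 through region 4 following the ordering of fig. 13.

    // Maps a selected region (1 = green ... 4 = gray) to assumed boost values.
    data class Boost(val durationMs: Long, val amplitude: Float)

    fun boostFor(region: Int): Boost = when (region) {
        1 -> Boost(durationMs = 3000, amplitude = 1.5f)  // longest, strongest boost
        2 -> Boost(durationMs = 2000, amplitude = 1.3f)
        3 -> Boost(durationMs = 1000, amplitude = 1.15f)
        else -> Boost(durationMs = 0, amplitude = 1f)    // region 4: no boost
    }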
Step 404, controlling the first virtual vehicle to execute the acceleration operation based on the acceleration duration of the first virtual vehicle.
In one possible implementation, the first virtual vehicle is controlled to perform the acceleration operation based on an acceleration duration of the first virtual vehicle and an acceleration magnitude of the first virtual vehicle.
When the first selection control corresponding to the first virtual vehicle is in an available state, the terminal can, in response to receiving the specified operation on the microphone sensor, determine the selected selection area of the first selection control, determine the acceleration duration and the acceleration amplitude of the first virtual vehicle according to the selected selection area, and control the first virtual vehicle to execute the acceleration operation according to that acceleration duration and acceleration amplitude.
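One way to apply such a boost over time is a small per-frame runner, sketched below in Kotlin and reusing the Boost type from the sketch above. Vehicle and its fields are hypothetical stand-ins for the game's own vehicle model, and the per-frame update pattern is an assumed integration point.

    // Applies an amplified acceleration for the boost duration, then reverts.
    class Vehicle(var speed: Float, val baseAcceleration: Float)

    class BoostRunner(private val vehicle: Vehicle) {
        private var remainingMs = 0L
        private var amplitude = 1f

        fun start(boost: Boost) {
            remainingMs = boost.durationMs
            amplitude = boost.amplitude
        }

        // Called once per frame with the elapsed frame time in milliseconds.
        fun update(deltaMs: Long) {
            val factor = if (remainingMs > 0) amplitude else 1f
            vehicle.speed += vehicle.baseAcceleration * factor * (deltaMs / 1000f)
            remainingMs = maxOf(0L, remainingMs - deltaMs)
        }
    }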
In one possible implementation, the user terminal further includes a gyroscope sensor, and the first virtual vehicle is controlled to perform a steering operation in response to the gyroscope sensor detecting a rotation operation performed on the user terminal by the user.
In a racing game, the key functions of the virtual vehicle during racing are acceleration and steering. When acceleration is implemented through the microphone sensor and steering through the gyroscope, fewer operation controls need to be superimposed on the virtual scene picture, and the user is less likely to mis-touch when tapping other operation controls on the virtual scene interface, which improves the accuracy of the interaction between the user and the terminal.
Referring to fig. 14, a schematic diagram of a terminal rotation operation according to an embodiment of the present application is shown. Part 1401 of fig. 14 shows the normal posture in which the user holds the terminal. When the user rotates the terminal to the left, the leftward turning posture is shown as part 1402 of fig. 14; the terminal then detects its rotation information through the gyroscope sensor and generates a corresponding rotation signal to control the first virtual vehicle to turn left. When the user rotates the terminal to the right, the rightward turning posture is shown as part 1403 of fig. 14; the terminal likewise detects the rotation information through the gyroscope sensor and generates a corresponding rotation signal to control the first virtual vehicle to turn right.
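On Android, such gyroscope-driven steering could be wired up roughly as follows. This is a sketch: steerVehicle() is a hypothetical hook into the game rather than an API of the embodiment, and the sign convention mapping angular velocity to left/right is also an assumption.

    import android.content.Context
    import android.hardware.Sensor
    import android.hardware.SensorEvent
    import android.hardware.SensorEventListener
    import android.hardware.SensorManager

    class GyroSteering(
        context: Context,
        private val steerVehicle: (Float) -> Unit // hypothetical game hook
    ) : SensorEventListener {

        private val sensorManager =
            context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
        private val gyro = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE)

        fun start() {
            sensorManager.registerListener(this, gyro, SensorManager.SENSOR_DELAY_GAME)
        }

        fun stop() = sensorManager.unregisterListener(this)

        override fun onSensorChanged(event: SensorEvent) {
            // values[2]: angular velocity (rad/s) around the z axis; assumed
            // here to map to left/right steering of the first virtual vehicle.
            steerVehicle(event.values[2])
        }

        override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
    }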
In one possible implementation manner, the virtual scene image is further overlaid with a direction operation control, and in response to receiving a trigger operation of the direction operation control by a user, the first virtual vehicle is controlled to perform steering operation according to the trigger operation.
Referring to fig. 15, a schematic diagram of a directional operation according to an embodiment of the present application is shown. A direction operation control may also exist in the virtual scene picture shown in fig. 15, where the direction operation control includes a left direction control 1501 and a right direction control 1502 superimposed on the two sides of the virtual scene picture. In this way, when the user holds the user terminal with both hands, the user can steer the first virtual vehicle through the two direction controls, one in each hand, while performing the acceleration operation on the first virtual vehicle through the blowing operation on the microphone sensor.
Referring to fig. 16, a schematic diagram of a directional operation according to an embodiment of the present application is shown. A direction operation control may also exist in the virtual scene shown in fig. 16, where the direction operation control is a left-right direction control 1601 located on one side of the virtual scene; the left-right direction control may be superimposed on either the left side or the right side of the virtual scene. In this way, when the user holds the user terminal with one hand, the user can steer the first virtual vehicle with that hand while performing the acceleration operation on the first virtual vehicle through the blowing operation on the microphone sensor.
In one possible implementation, since the user needs to control the first virtual vehicle to perform the first operation by performing the blowing operation on the microphone sensor, the virtual scene picture should be displayed in portrait orientation on the terminal, so that the microphone sensor of the user terminal faces the user's mouth and the user can perform the blowing operation on the microphone sensor at any time.
With this virtual vehicle control mode, the player propels the racing car by blowing at the right moments, playing the game through a light, entertaining blowing interaction that frees the player's fingers and exercises the lungs; it is therefore a game interaction mode that is friendlier to physical health and more inclusive of different player groups. In particular, people with limited finger dexterity or finger impairments, and players in scenarios where finger operation is inconvenient, can still control the virtual vehicle through the scheme shown in the embodiments of the present application.
In summary, according to the scheme shown in the embodiment of the present application, the user performs the selection operation on the first selection control through the designated operation on the microphone sensor, and the terminal controls the first virtual vehicle to execute the first operation according to the selection area in the first selection control corresponding to the designated operation. Through the scheme, the user can control the first virtual vehicle through the microphone sensor, and the control mode of the virtual vehicle is expanded, so that the control effect of the virtual vehicle is improved.
Please refer to fig. 17, which is a flowchart illustrating a first virtual vehicle acceleration method according to an exemplary embodiment. The first virtual vehicle acceleration method is performed by a user terminal having a microphone sensor, the method comprising:
S1701, the terminal determines whether a sound pressure signal is detected. That is, the microphone sensor determines, based on its sound pressure detection component (i.e., an electret film module), whether a sound pressure signal is detected within a specified time. When no sound pressure signal is detected within the specified time, the first virtual vehicle does not perform an acceleration operation.
S1702, the microphone sensor receives the signal and converts it into an electrical signal. When a microphone sensor of the terminal detects a sound pressure signal, the sound pressure signal is converted into an electric signal and transmitted to the CPU.
S1703, the CPU receives the electric signal and issues a program instruction. After receiving the electric signal, the CPU first judges the magnitude of the electric signal to determine whether the magnitude of the sound pressure signal received by the microphone sensor matches the characteristics of the sound pressure signal corresponding to a user's blowing operation on the microphone sensor. When the sound pressure signal indicated by the electric signal matches the characteristics of the sound pressure signal corresponding to the blowing operation, it is determined that the user has performed the blowing operation on the microphone sensor, and a program instruction corresponding to the blowing operation is issued.
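The embodiment does not spell out the matching criterion; a common, simple stand-in is an RMS energy threshold over a chunk of microphone samples, sketched below in Kotlin. The threshold value is an assumption.

    import kotlin.math.sqrt

    // Heuristic sketch of S1703: treat a sample chunk as a blowing operation
    // when its RMS energy exceeds an assumed threshold (16-bit PCM samples).
    fun looksLikeBlowing(samples: ShortArray, rmsThreshold: Double = 8000.0): Boolean {
        if (samples.isEmpty()) return false
        var sumSquares = 0.0
        for (s in samples) sumSquares += s.toDouble() * s.toDouble()
        return sqrt(sumSquares / samples.size) > rmsThreshold
    }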
S1704, the terminal judges the position where the pointer falls according to the time of the electric signal generation. When the position where the pointer falls is in an invalid area, the first virtual vehicle does not execute acceleration operation; when the position corresponding to the pointer is in a yellow area, the first virtual vehicle executes small acceleration; when the position corresponding to the pointer is in an orange area, the first virtual vehicle executes medium acceleration; when the position corresponding to the pointer is in the green area, the first virtual vehicle performs strong acceleration.
S1705, the terminal judges whether the first virtual vehicle reaches a destination, and when the first virtual vehicle reaches the destination, the game is ended; when the first virtual vehicle does not reach the end point, the operation steps shown in S1701 to S1704 are re-executed.
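Tying steps S1701 to S1705 together, a simplified main loop could look like the following. readMicChunk() and reachedFinish() are hypothetical stand-ins (e.g., for an AudioRecord read and the game's finish-line check), and the helpers reuse the assumed sketches above; none of these names are APIs defined by the embodiment.

    // Simplified S1701 to S1705 loop built from the earlier sketches.
    fun raceLoop(
        vehicle: Vehicle,
        readMicChunk: () -> ShortArray,
        reachedFinish: () -> Boolean
    ) {
        val runner = BoostRunner(vehicle)
        val start = System.currentTimeMillis()
        var last = start
        while (!reachedFinish()) {                           // S1705
            val now = System.currentTimeMillis()
            val chunk = readMicChunk()                       // S1701/S1702
            if (looksLikeBlowing(chunk)) {                   // S1703
                val regionIndex = when (regionAt(now - start)) {  // S1704
                    SelectionRegion.STRONG -> 1
                    SelectionRegion.MEDIUM -> 2
                    SelectionRegion.WEAK -> 3
                    SelectionRegion.INVALID -> 4
                }
                runner.start(boostFor(regionIndex))
            }
            runner.update(now - last)
            last = now
        }
    }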
Fig. 18 is a block diagram of a virtual vehicle control apparatus according to an exemplary embodiment of the present application. The virtual vehicle control apparatus may be applied to a computer device, which may be a user terminal having a microphone sensor, wherein the user terminal may be the terminal shown in fig. 1. As shown in fig. 18, the virtual vehicle control apparatus includes:
a virtual scene picture display module 1801 for displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is overlapped on the virtual scene picture; at least two selection areas are contained in the first selection control;
A selection region determining module 1802 configured to determine, in response to receiving a designation operation of the microphone sensor, a selection region corresponding to the designation operation within the first selection control;
the virtual vehicle control module 1803 is configured to control the first virtual vehicle to execute a first operation based on a selection area corresponding to the specified operation.
In one possible implementation, the specified operation is a blowing operation performed on the microphone sensor by a user corresponding to the user terminal.
In one possible implementation, the selection area determining module 1802 includes:
a first signal generation sub-module for generating a first signal in response to receiving the blowing operation to the microphone sensor;
and the selection area determination submodule is used for determining a selection area corresponding to the blowing operation in the first selection control based on the first signal.
In one possible implementation, the first signal is an electrical signal generated by the microphone sensor receiving the blowing operation; the signal intensity of the first signal is positively correlated with a sound pressure signal corresponding to the blowing operation received by the microphone sensor;
The selection area determination submodule is further configured to:
in response to the signal strength of the first signal being greater than a strength threshold, a selection region corresponding to the blowing operation is determined within the first selection control.
In one possible implementation, the selection area determining submodule includes:
the first time information determining unit is used for responding to the fact that the signal intensity of the first signal is larger than the intensity threshold value and obtaining first time information corresponding to the blowing operation; the first time information is used for indicating the time when the blowing operation of the microphone sensor is received;
and the selection area determining unit is used for determining a selection area corresponding to the blowing operation based on the first time information.
In one possible implementation, the first selection control includes a region selection pointer; the region pointed by the region selection pointer changes with time.
In one possible implementation, the selection area determining module 1802 further includes:
the pointing region determining submodule is used for determining a region pointed by the region selection pointer based on the first moment information;
and the pointer region determining submodule is used for determining the region pointed by the region selection pointer as the selection region corresponding to the specified operation.
In one possible implementation, the first operation is an acceleration operation of the first virtual vehicle;
the virtual vehicle control module 1803 includes:
an acceleration duration determining submodule, configured to determine an acceleration duration of the first virtual vehicle based on a selection region corresponding to the specified operation;
and the acceleration operation execution sub-module is used for controlling the first virtual vehicle to execute the acceleration operation based on the acceleration duration of the first virtual vehicle.
In one possible implementation, the selection area determining module 1802 is configured to: in response to receiving a specified operation on the microphone sensor while the first selection control is in an available state, determine a selection region corresponding to the specified operation within the first selection control.
In one possible implementation, the apparatus further includes:
the time parameter acquisition module is used for acquiring the time parameter corresponding to the virtual scene picture; the time parameter is used for indicating the running time of the first virtual vehicle in the virtual scene picture;
and the first state determining module is used for determining the first selection control as an available state in response to the time parameter of the virtual scene picture meeting a first time condition.
In one possible implementation, the apparatus further includes:
the virtual position acquisition module is used for acquiring virtual position information of the first virtual vehicle in the virtual scene picture;
and a second state determining module configured to determine the first selection control as an available state in response to the virtual location information meeting a first location condition.
In one possible implementation, the apparatus further includes:
and a third state determining module configured to determine the first selection control as an unavailable state in response to the first virtual vehicle performing the first operation.
In summary, according to the scheme shown in the embodiment of the present application, the user performs the selection operation on the first selection control through the designated operation on the microphone sensor, and the terminal controls the first virtual vehicle to execute the first operation according to the selection area in the first selection control corresponding to the designated operation. Through the scheme, the user can control the first virtual vehicle through the microphone sensor, and the control mode of the virtual vehicle is expanded, so that the control effect of the virtual vehicle is improved.
Fig. 19 is a block diagram illustrating a structure of a computer device 1900 according to an example embodiment. The computer device may be implemented as a server in the above-described aspects of the present application.
The computer apparatus 1900 includes a central processing unit (Central Processing Unit, CPU) 1901, a system Memory 1904 including a random access Memory (Random Access Memory, RAM) 1902 and a Read-Only Memory (ROM) 1903, and a system bus 1905 connecting the system Memory 1904 and the central processing unit 1901. The computer device 1900 also includes a basic Input/Output system (I/O system) 1906 that facilitates the transfer of information between various devices within the computer, and a mass storage device 1907 for storing an operating system 1913, application programs 1914, and other program modules 1915.
The basic input/output system 1906 includes a display 1908 for displaying information and an input device 1909, such as a mouse, keyboard, etc., for inputting information by a user. Wherein the display 1908 and the input device 1909 are both connected to the central processing unit 1901 through an input output controller 1910 connected to a system bus 1905. The basic input/output system 1906 may also include an input/output controller 1910 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 1910 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1907 is connected to the central processing unit 1901 through a mass storage controller (not shown) connected to the system bus 1905. The mass storage device 1907 and its associated computer-readable media provide non-volatile storage for the computer device 1900. That is, the mass storage device 1907 may include a computer readable medium (not shown) such as a hard disk or a compact disk-Only (CD-ROM) drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid state memory technology, CD-ROM, digital versatile discs (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the ones described above. The system memory 1904 and mass storage device 1907 described above may be collectively referred to as memory.
According to various embodiments of the disclosure, the computer device 1900 may also operate by being connected to a remote computer on a network, such as the Internet. I.e., the computer device 1900 may be connected to the network 1912 through a network interface unit 1911 coupled to the system bus 1905, or other types of networks or remote computer systems (not shown) may also be connected to the network using the network interface unit 1911.
The memory further includes at least one instruction, at least one program, a code set, or an instruction set stored in the memory, and the central processing unit 1901 implements all or part of the steps in the flowcharts of the virtual vehicle control method shown in the above-described respective embodiments by executing the at least one instruction, at least one program, a code set, or an instruction set.
Fig. 20 is a block diagram illustrating a structure of a computer device 2000, according to an example embodiment. The computer device 2000 may be a terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The computer device 2000 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
Generally, the computer device 2000 includes: a processor 2001 and a memory 2002.
Processor 2001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 2001 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2001 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 2002 may include one or more computer-readable storage media, which may be non-transitory. Memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement the virtual vehicle control method provided by the method embodiments herein.
In some embodiments, the computer device 2000 may also optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002, and peripheral interface 2003 may be connected by a bus or signal line. The respective peripheral devices may be connected to the peripheral device interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2004, a display 2005, a camera assembly 2006, audio circuitry 2007, and a power supply 2009.
Peripheral interface 2003 may be used to connect I/O (Input/Output) related at least one peripheral device to processor 2001 and memory 2002. In some embodiments, processor 2001, memory 2002, and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 2001, memory 2002, and peripheral interface 2003 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 2004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 2004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2004 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display 2005 is used to display UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 2005 is a touch display, the display 2005 also has the ability to capture touch signals at or above the surface of the display 2005. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display 2005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 2005 may be one, providing a front panel of the computer device 2000; in other embodiments, the display 2005 may be at least two, respectively disposed on different surfaces of the computer device 2000 or in a folded design; in still other embodiments, the display 2005 may be a flexible display disposed on a curved surface or a folded surface of the computer device 2000. Even more, the display 2005 may be arranged in an irregular pattern that is not rectangular, i.e., a shaped screen. The display 2005 can be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 2006 is used to capture images or video. Optionally, the camera assembly 2006 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and Virtual Reality (VR) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 2006 may also include a flash. The flash may be a single color temperature flash or a dual color temperature flash. A dual color temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
Audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing, or inputting the electric signals to the radio frequency circuit 2004 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple, each disposed at a different location of the computer device 2000. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 2007 may also include a headphone jack.
The power supply 2009 is used to power the various components in the computer device 2000. The power source 2009 may be alternating current, direct current, disposable or rechargeable. When the power source 2009 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 2000 also includes one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: acceleration sensor 2011, gyro sensor 2012, pressure sensor 2013, optical sensor 2015, and proximity sensor 2016.
The acceleration sensor 2011 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the computer device 2000. For example, the acceleration sensor 2011 may be used to detect components of gravitational acceleration on three coordinate axes. The processor 2001 may control the display screen 2005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 2012 may detect a body direction and a rotation angle of the computer device 2000, and the gyro sensor 2012 may cooperate with the acceleration sensor 2011 to collect 3D actions of the user on the computer device 2000. The processor 2001 may implement the following functions based on the data collected by the gyro sensor 2012: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 2013 may be disposed on a side frame of the computer device 2000 and/or below the display 2005. When the pressure sensor 2013 is disposed on a side frame of the computer device 2000, a grip signal of the computer device 2000 by a user may be detected, and the processor 2001 performs a left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at the lower layer of the display 2005, the processor 2001 controls the operability control on the UI interface according to the pressure operation of the user on the display 2005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 2015 is used to collect ambient light intensity. In one embodiment, processor 2001 may control the display brightness of display 2005 based on the intensity of ambient light collected by optical sensor 2015. Specifically, when the intensity of the ambient light is high, the display luminance of the display screen 2005 is turned high; when the ambient light intensity is low, the display brightness of the display screen 2005 is turned down. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 based on the ambient light intensity collected by the optical sensor 2015.
The proximity sensor 2016, also known as a distance sensor, is typically disposed on the front panel of the computer device 2000. The proximity sensor 2016 is used to capture the distance between the user and the front of the computer device 2000. In one embodiment, when the proximity sensor 2016 detects a gradual decrease in the distance between the user and the front of the computer device 2000, the processor 2001 controls the display 2005 to switch from the bright screen state to the off screen state; when the proximity sensor 2016 detects that the distance between the user and the front of the computer device 2000 gradually increases, the processor 2001 controls the display 2005 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 20 is not limiting as to the computer device 2000, and may include more or fewer components than shown, or may combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, including instructions, for example, a memory including at least one instruction, at least one program, a set of codes, or a set of instructions, executable by a processor, to perform all or part of the steps of the methods shown in the corresponding embodiments of fig. 3 or 4. For example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium and executes the computer instructions to cause the computer device to perform all or part of the steps of the method described above with respect to the corresponding embodiments of fig. 3 or fig. 4.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. A virtual vehicle control method for a user terminal having a microphone sensor, the method comprising:
displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is overlapped on the virtual scene picture; at least two selection areas are contained in the first selection control;
in response to receiving a specified operation on the microphone sensor, determining a selection area corresponding to the specified operation in the first selection control, wherein the specified operation is an air blowing operation of blowing air against the microphone sensor by a user corresponding to the user terminal;
controlling the first virtual vehicle to execute a first operation based on a selection area corresponding to the specified operation;
wherein the determining, in response to receiving a specified operation on the microphone sensor, a selection region within the first selection control that corresponds to the specified operation includes:
generating a first signal in response to receiving the blowing operation on the microphone sensor; the first signal is an electrical signal generated by the blowing operation received by the microphone sensor; the signal intensity of the first signal is positively correlated with a sound pressure signal corresponding to the blowing operation received by the microphone sensor;
Acquiring first time information corresponding to the blowing operation in response to the signal intensity of the first signal being greater than an intensity threshold; the first time information is used for indicating the time when the blowing operation of the microphone sensor is received;
and determining a selection area corresponding to the blowing operation based on the first time information.
2. The method of claim 1, wherein the first selection control comprises a region selection pointer; the region pointed by the region selection pointer changes with time.
3. The method of claim 2, wherein the determining a selection area corresponding to the blowing operation based on the first time information comprises:
determining the area pointed by the area selection pointer based on the first moment information;
and determining the area pointed by the area selection pointer as a selection area corresponding to the blowing operation.
4. A method according to any one of claims 1 to 3, wherein the first operation is an acceleration operation of the first virtual vehicle;
the controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the blowing operation includes:
Determining the acceleration duration of the first virtual vehicle based on the selection area corresponding to the blowing operation;
and controlling the first virtual vehicle to execute the acceleration operation based on the acceleration duration of the first virtual vehicle.
5. The method of claim 1, wherein the determining, in response to receiving a specified operation of the microphone sensor, a selection area within the first selection control that corresponds to the specified operation comprises:
in response to receiving a specified operation on the microphone sensor, and the first selection control is in an available state, a selection region corresponding to the specified operation is determined within the first selection control.
6. The method of claim 5, wherein before the determining, in response to receiving a specified operation on the microphone sensor, a selection region corresponding to the specified operation within the first selection control, the method further comprises:
obtaining a time parameter corresponding to the virtual scene picture; the time parameter is used for indicating the running time of the first virtual vehicle in the virtual scene picture;
and determining the first selection control as an available state in response to the time parameter of the virtual scene picture meeting a first time condition.
7. The method of claim 5, wherein before the determining, in response to receiving a specified operation on the microphone sensor, a selection region corresponding to the specified operation within the first selection control, the method further comprises:
acquiring virtual position information of the first virtual vehicle in the virtual scene picture;
in response to the virtual location information meeting a first location condition, the first selection control is determined to be in an available state.
8. The method according to any one of claims 5 to 7, further comprising:
the first selection control is determined to be in an unavailable state in response to the first virtual vehicle performing the first operation.
9. A virtual vehicle control apparatus for a user terminal having a microphone sensor, the apparatus comprising:
a virtual scene picture display module, configured to display a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is overlapped on the virtual scene picture; at least two selection areas are contained in the first selection control;
a selection area determining module, configured to determine, in response to receiving a specified operation on the microphone sensor, a selection area corresponding to the specified operation in the first selection control, where the specified operation is an air blowing operation that a user corresponding to the user terminal performs air blowing on the microphone sensor;
The virtual vehicle control module is used for controlling the first virtual vehicle to execute a first operation based on a selection area corresponding to the specified operation;
wherein the selection area determining module is used for,
generating a first signal in response to receiving the blowing operation on the microphone sensor; the first signal is an electrical signal generated by the blowing operation received by the microphone sensor; the signal intensity of the first signal is positively correlated with a sound pressure signal corresponding to the blowing operation received by the microphone sensor;
acquiring first time information corresponding to the blowing operation in response to the signal intensity of the first signal being greater than an intensity threshold; the first time information is used for indicating the time when the blowing operation of the microphone sensor is received;
and determining a selection area corresponding to the blowing operation based on the first time information.
10. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one program that is loaded and executed by the processor to implement the virtual vehicle control method of any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that at least one section of program is stored in the storage medium, the at least one section of program being loaded and executed by a processor to implement the virtual vehicle control method according to any one of claims 1 to 8.
CN202110090249.6A 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium Active CN112717409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110090249.6A CN112717409B (en) 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110090249.6A CN112717409B (en) 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112717409A CN112717409A (en) 2021-04-30
CN112717409B (en) 2023-06-20

Family

ID=75595213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110090249.6A Active CN112717409B (en) 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112717409B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363337A (en) * 2023-04-04 2023-06-30 如你所视(北京)科技有限公司 Model tour method and device and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6397698B2 (en) * 2014-08-28 2018-09-26 任天堂株式会社 Information processing terminal, information processing program, information processing terminal system, and information processing method
CN107329725A (en) * 2016-04-28 2017-11-07 上海连尚网络科技有限公司 Method and apparatus for controlling many people's interactive applications
CN108499106B (en) * 2018-04-10 2022-02-25 网易(杭州)网络有限公司 Processing method and device for racing game prompt information
CN109847348B (en) * 2018-12-27 2022-09-27 努比亚技术有限公司 Operation interface control method, mobile terminal and storage medium

Also Published As

Publication number Publication date
CN112717409A (en) 2021-04-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40042617

Country of ref document: HK

GR01 Patent grant