CN112717409A - Virtual vehicle control method and device, computer equipment and storage medium


Info

Publication number
CN112717409A
CN112717409A
Authority
CN
China
Prior art keywords
selection
virtual
virtual vehicle
control
microphone sensor
Prior art date
Legal status
Granted
Application number
CN202110090249.6A
Other languages
Chinese (zh)
Other versions
CN112717409B (en)
Inventor
朱倩
汪涛
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110090249.6A priority Critical patent/CN112717409B/en
Publication of CN112717409A publication Critical patent/CN112717409A/en
Application granted granted Critical
Publication of CN112717409B publication Critical patent/CN112717409B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 - Controlling game characters or game objects based on the game progress
    • A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types, comprising means for detecting acoustic signals, e.g. using a microphone
    • A63F 13/42 - Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F 13/803 - Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application relates to a virtual vehicle control method and device, computer equipment, and a storage medium, and belongs to the technical field of virtual scenes. The method comprises the following steps: displaying a virtual scene picture, where the virtual scene picture comprises a first virtual vehicle, a first selection control is superimposed on the virtual scene picture, and the first selection control comprises at least two selection areas; in response to receiving a specified operation on a microphone sensor, determining a selection area corresponding to the specified operation within the first selection control; and controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation. With this method, the user can control the first virtual vehicle through the microphone sensor, which expands the control modes of the virtual vehicle and thereby improves the control effect on the virtual vehicle.

Description

Virtual vehicle control method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a virtual vehicle control method and apparatus, a computer device, and a storage medium.
Background
Currently, in game applications that involve controlling a virtual vehicle, such as racing games, a user can control the virtual vehicle by operating virtual controls in the virtual scene interface.
In the related art, a direction control for controlling the travel direction of the virtual vehicle is superimposed on the virtual scene picture of such a game application, and the user controls the virtual vehicle by operating the direction control; acceleration, brake, and prop-use controls for the virtual vehicle may also be superimposed on the virtual scene picture, so that the user can select an operation function of the virtual vehicle.
However, in the related art, the user can only control the virtual vehicle through touch operations on the controls superimposed on the virtual scene picture, which results in a single control mode and limits the control effect on the virtual vehicle.
Disclosure of Invention
The embodiments of the present application provide a virtual vehicle control method and apparatus, a computer device, and a storage medium, which can improve the control effect on a virtual vehicle. The technical solutions are as follows:
in one aspect, a virtual vehicle control method is provided, the method comprising:
displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is superimposed on the virtual scene picture; at least two selection areas are contained in the first selection control;
in response to receiving a specified operation on the microphone sensor, determining a selection region within the first selection control that corresponds to the specified operation;
and controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
In yet another aspect, there is provided a virtual vehicle control apparatus, the apparatus including:
the virtual scene picture display module is used for displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is superimposed on the virtual scene picture; at least two selection areas are contained in the first selection control;
a selection area determination module, configured to determine, in response to receiving a specified operation on the microphone sensor, a selection area corresponding to the specified operation within the first selection control;
and the virtual vehicle control module is used for controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
In a possible implementation manner, the specified operation is a blowing operation performed on the microphone sensor by the user of the user terminal.
In one possible implementation manner, the selection area determining module includes:
a first signal generation submodule for generating a first signal in response to receiving the blowing operation on the microphone sensor;
a selection region determination sub-module to determine a selection region within the first selection control corresponding to the blowing operation based on the first signal.
In one possible implementation, the first signal is an electrical signal generated by the microphone sensor receiving the blowing operation; the signal intensity of the first signal is in positive correlation with a sound pressure signal corresponding to the blowing operation received by the microphone sensor;
the selection area determination submodule is further configured to:
in response to the signal strength of the first signal being greater than an intensity threshold, determining a selection region within the first selection control that corresponds to the blow operation.
In one possible implementation manner, the selection area determining sub-module includes:
the first time information determining unit is used for acquiring first time information corresponding to the blowing operation in response to the signal intensity of the first signal being greater than the intensity threshold; the first time information is used for indicating the time when the blowing operation on the microphone sensor is received;
and the selection area determining unit is used for determining a selection area corresponding to the blowing operation based on the first time information.
In one possible implementation, the first selection control contains a region selection pointer; the area pointed to by the area selection pointer varies over time.
In a possible implementation manner, the selection area determining module further includes:
a pointing region determination submodule configured to determine a region pointed to by the region selection pointer based on the first time information;
and the pointer area determining submodule is used for determining the area pointed by the area selection pointer as the selection area corresponding to the specified operation.
In one possible implementation, the first operation is an acceleration operation of the first virtual vehicle;
the virtual vehicle control module comprising:
an acceleration duration determination submodule, configured to determine an acceleration duration of the first virtual vehicle based on a selection region corresponding to the specified operation;
and the acceleration operation execution submodule is used for controlling the first virtual vehicle to execute the acceleration operation based on the acceleration duration of the first virtual vehicle.
In one possible implementation manner, the selection area determination module is configured to,
in response to receiving a specified operation on the microphone sensor and the first selection control being in an available state, a selection region corresponding to the specified operation is determined within the first selection control.
In one possible implementation, the apparatus further includes:
the time parameter acquisition module is used for acquiring the time parameter corresponding to the virtual scene picture; the time parameter is used for indicating the running time of the first virtual vehicle in a virtual scene picture;
a first state determination module to determine the first selection control to be in an available state in response to a time parameter of the virtual scene screen satisfying a first time condition.
In one possible implementation, the apparatus further includes:
the virtual position acquisition module is used for acquiring virtual position information of the first virtual vehicle in the virtual scene picture;
a second state determination module to determine the first selection control to be in an available state in response to the virtual location information satisfying a first location condition.
In one possible implementation, the apparatus further includes:
a third state determination module to determine the first selection control to be in an unavailable state in response to the first virtual vehicle performing the first operation.
In another aspect, a computer device is provided, comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by said processor to implement the above virtual vehicle control method.
In another aspect, a computer readable storage medium is provided having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by a processor to implement the above virtual vehicle control method.
In yet another aspect, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, causing the computer device to execute the virtual vehicle control method.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
The user performs the selection operation on the first selection control through the specified operation on the microphone sensor, and the terminal controls the first virtual vehicle to execute the first operation according to the selection area in the first selection control corresponding to the specified operation. Through this scheme, the user can control the first virtual vehicle through the microphone sensor, which expands the control modes of the virtual vehicle and thereby improves the control effect on the virtual vehicle.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a display interface diagram of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 illustrates a flow chart of a virtual vehicle control method provided by an exemplary embodiment of the present application;
FIG. 4 illustrates a method flow diagram of a virtual vehicle control method, shown in an exemplary embodiment of the present application;
FIG. 5 illustrates a schematic diagram of a microphone sensor on a mobile terminal according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating an electret microphone according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating a user performing a blowing operation according to an embodiment of the present application;
FIG. 8 illustrates a microphone sensor location presentation diagram according to an embodiment of the present application;
FIG. 9 is a diagram illustrating a first selection control according to an embodiment of the present application;
fig. 10 is a schematic diagram illustrating a game corresponding to a virtual scene picture according to an embodiment of the present application;
fig. 11 is a schematic diagram illustrating a game corresponding to a virtual scene picture according to an embodiment of the present application;
FIG. 12 illustrates a schematic diagram of an energy bar control according to an embodiment of the present application;
fig. 13 is a diagram illustrating an acceleration duration selection according to an embodiment of the present application;
fig. 14 is a schematic view illustrating a terminal rotation operation according to an embodiment of the present application;
FIG. 15 illustrates a directional operational diagram according to an embodiment of the present application;
FIG. 16 is a schematic view of a directional operation in accordance with an embodiment of the present application;
FIG. 17 is a schematic flow diagram illustrating a first virtual vehicle acceleration method according to an exemplary embodiment;
FIG. 18 is a block diagram illustrating a virtual vehicle control apparatus according to an exemplary embodiment of the present application;
FIG. 19 is a block diagram illustrating the structure of a computer device in accordance with an exemplary embodiment;
FIG. 20 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
For ease of understanding, several terms referred to in this application are explained below.
1) Virtual scene
A virtual scene is a scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene. The following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto.
A virtual scene is typically generated by an application program in a computer device such as a terminal and rendered based on hardware (e.g., a screen) in the terminal. The terminal may be a mobile terminal such as a smartphone, a tablet computer, or an e-book reader; alternatively, the terminal may be a personal computer device such as a notebook computer or a desktop computer.
2) Virtual object
A virtual object refers to a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on a skeletal animation technique. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies a portion of the space in the three-dimensional virtual scene.
3) Virtual vehicle
A virtual vehicle is a vehicle in the virtual environment that carries out travel operations for a virtual object according to the user's control of operation controls. The functions that the virtual vehicle can realize may include acceleration, deceleration, braking, reversing, steering, drifting, prop use, and the like. These functions may be realized automatically, for example, the virtual vehicle may automatically accelerate or automatically steer; they may also be triggered by the user's control of operation controls, for example, when the user triggers the brake control, the virtual vehicle performs a braking action.
4) Racing car game
A racing game is mainly carried out in a virtual competition scene, where a plurality of virtual vehicles race against each other to achieve a specified competition goal. In the virtual competition scene, a user can control the virtual vehicle corresponding to the terminal to race against virtual vehicles controlled by other users; the user can also control the virtual vehicle corresponding to the terminal to race against virtual vehicles controlled by the AI generated by the client program corresponding to the racing game.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130. The first terminal 110 and the second terminal 130 are terminal devices having microphone sensors.
The first terminal 110 is installed and operated with an application 111 supporting a virtual environment, and the application 111 may be a multiplayer online competition program, or the application 111 may also be an offline application. When the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be an RCG (racing game), a Sandbox-like game containing racing functionality, or other types of games containing racing functionality. In the present embodiment, the application 111 is exemplified as an RCG. The first terminal 110 is a terminal used by the first user 112, and the first user 112 uses the first terminal 110 to control a first virtual vehicle located in the virtual environment for activity, and the first virtual vehicle may be referred to as a master virtual object of the first user 112. The activities of the first virtual vehicle include, but are not limited to: at least one of acceleration, deceleration, braking, backing, steering, drifting, and using props, etc. Illustratively, the first virtual vehicle may be a virtual vehicle, or a virtual model with virtual vehicle functions modeled from other vehicles (e.g., ships, airplanes), etc.; the first virtual vehicle may also be a virtual vehicle modeled from a real vehicle model that is present in reality.
The second terminal 130 is installed and operated with an application 131 supporting a virtual environment, and the application 131 may be a multiplayer online competition program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The client may be any one of an RCG game program, a Sandbox game, and other game programs including a racing function, and in the present embodiment, the application 131 is an RCG game as an example.
Alternatively, the second terminal 130 is a terminal used by the second user 132, and the second user 132 uses the second terminal 130 to control a second virtual vehicle located in the virtual environment to implement the running operation, and the second virtual vehicle may be referred to as a master virtual vehicle of the second user 132.
Optionally, a third virtual vehicle may also exist in the virtual environment, the third virtual vehicle being controlled by the AI corresponding to the application 131, and the third virtual vehicle may be referred to as an AI control virtual vehicle.
Optionally, the first virtual vehicle, the second virtual vehicle, and the third virtual vehicle are in the same virtual world. Optionally, the first virtual vehicle and the second virtual vehicle may belong to the same camp, the same team, the same organization, have a friend relationship, or have a temporary communication right. Alternatively, the first virtual vehicle and the second virtual vehicle may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality of terminals; this embodiment is only illustrated by the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but there are a plurality of other terminals that may access the server 120 in different embodiments. Optionally, one or more terminals are terminals corresponding to the developer, a development and editing platform for supporting the application program in the virtual environment is installed on the terminal, the developer can edit and update the application program on the terminal and transmit the updated application program installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the application program installation package from the server 120 to update the application program.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, server 120 includes memory 121, processor 122, user account database 123, athletic service module 124, and user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load an instruction stored in the server 120, and process data in the user account database 123 and the athletic service module 124; the user account database 123 is configured to store data of a user account used by the first terminal 110, the second terminal 130, and other terminals, such as a head portrait of the user account, a nickname of the user account, a fighting capacity index of the user account, and a service area where the user account is located; the athletic service module 124 is configured to provide a plurality of athletic rooms for users to perform athletic activities, such as 1V1 athletic sports, 3V3 athletic sports, 5V5 athletic sports, and the like; the user-facing I/O interface 125 is used to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network to exchange data.
The virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. Taking the example that the virtual scene is a three-dimensional virtual scene, please refer to fig. 2, which shows a schematic view of a display interface of the virtual scene according to an exemplary embodiment of the present application. As shown in fig. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes a currently controlled virtual vehicle 210, an environment screen 220 of the three-dimensional virtual scene, and a virtual vehicle 240. The virtual vehicle 240 may be a virtual object controlled by a user or a virtual object controlled by an application corresponding to other terminals.
In fig. 2, the currently controlled virtual vehicle 210 and the virtual vehicle 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture of the three-dimensional virtual scene displayed in the scene picture 200 is observed from the third-person perspective corresponding to the currently controlled virtual vehicle 210. The third-person perspective corresponding to the virtual vehicle 210 is the view observed by a virtual camera arranged above and behind the virtual vehicle. Illustratively, as shown in fig. 2, the environment picture 220 of the three-dimensional virtual scene observed from the third-person perspective corresponding to the currently controlled virtual vehicle 210 includes a road 224, a sky 225, a hill 221, and a factory building 222.
The currently controlled virtual vehicle 210 may perform operations such as steering, acceleration, drifting and the like under the control of the user, and the virtual vehicle in the virtual scene may exhibit different three-dimensional models under the control of the user, for example, a screen of the terminal supports a touch operation, and the scene screen 200 of the virtual scene includes a virtual control, so that when the user touches the virtual control, the currently controlled virtual vehicle 210 may perform a specified operation (for example, a deformation operation) in the virtual scene and exhibit a currently corresponding three-dimensional model.
FIG. 3 illustrates a flow chart of a virtual vehicle control method provided by an exemplary embodiment of the present application. The virtual vehicle control method is performed by a user terminal having a microphone sensor. As shown in fig. 3, the virtual vehicle control method includes:
step 301, displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is superposed on the virtual scene picture; the first selection control includes at least two selection regions therein.
In one possible implementation, the virtual scene picture is a virtual scene picture observed from a third-person perspective of the first virtual vehicle. The third-person perspective of the first virtual vehicle is the perspective corresponding to a virtual camera arranged above and behind the first virtual vehicle, and the virtual scene picture observed from the third-person perspective of the first virtual vehicle is the virtual scene picture observed by that virtual camera.
In another possible implementation, the virtual scene is a virtual scene viewed from a first-person perspective of the first virtual vehicle. The first person perspective of the first virtual vehicle is a perspective corresponding to the virtual camera arranged at the driver position of the first virtual vehicle, and the virtual scene picture observed at the first person perspective of the first virtual vehicle is a virtual scene picture observed at the virtual camera arranged at the driver position of the first virtual vehicle.
In one possible implementation manner, a perspective switching control is superimposed on the virtual scene picture, and in response to a user's specified operation on the perspective switching control, the virtual scene picture can be switched between a first person perspective of the first virtual vehicle and a third person perspective of the first virtual vehicle.
For example, when the virtual scene picture displayed by the terminal corresponds to the first-person perspective of the first virtual vehicle, in response to the user's specified operation on the perspective switching control, the terminal switches the virtual scene picture to the one corresponding to the third-person perspective; when the virtual scene picture displayed by the terminal corresponds to the third-person perspective of the first virtual vehicle, in response to the user's specified operation on the perspective switching control, the terminal switches the virtual scene picture to the one corresponding to the first-person perspective.
In response to receiving a specified operation on the microphone sensor, a selection area corresponding to the specified operation is determined within the first selection control, step 302.
In one possible implementation, the microphone sensor may be any one of a moving-coil (dynamic) microphone sensor, a ribbon microphone sensor, a condenser microphone sensor, and an electret microphone.
And step 303, controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
To sum up, in the solution shown in the embodiment of the present application, a user performs a selection operation on a first selection control by performing a designated operation on a microphone sensor, and a terminal controls a first virtual vehicle to execute a first operation according to a selection area in the first selection control corresponding to the designated operation. Through the scheme, the user can control the first virtual vehicle through the microphone sensor, the control mode of the virtual vehicle is expanded, and therefore the control effect of the virtual vehicle is improved.
FIG. 4 illustrates a method flow diagram of a virtual vehicle control method, shown in an exemplary embodiment of the present application. The virtual vehicle control method may be performed by a user terminal having a microphone sensor. As shown in fig. 4, the virtual vehicle control method includes:
step 401, displaying a virtual scene picture.
In one possible implementation manner, the virtual scene picture is the virtual scene picture displayed while the first virtual vehicle races against other virtual vehicles.
In one possible implementation manner, the first virtual vehicle is a virtual vehicle corresponding to a user account corresponding to the terminal.
When the user account logs in the terminal, the terminal uploads account information corresponding to the user account to a server corresponding to the virtual scene picture, the server receives the account information and then issues virtual vehicle information corresponding to the account information to the terminal, and the terminal displays a virtual vehicle corresponding to the user account according to the virtual vehicle information.
In a possible implementation manner, the virtual vehicle corresponding to the user account may be a plurality of virtual vehicles of different types, and the terminal displays, on the vehicle selection interface, each virtual vehicle of different types corresponding to the virtual vehicle information in response to receiving the virtual vehicle information issued by the server.
And in response to receiving a selection operation of the user on the vehicle selection interface, determining a target virtual vehicle corresponding to the selection operation, and determining the target virtual vehicle as a first virtual vehicle corresponding to the user account.
In response to receiving a specified operation on the microphone sensor, a selection area corresponding to the specified operation is determined within the first selection control, step 402.
In one possible implementation, the microphone sensor is a multi-microphone array of multiple microphone sensors.
When a mobile terminal uses a multi-microphone array as the microphone sensor to receive an acoustic signal, the original speech can be extracted as cleanly as possible from a noisy speech signal contaminated with noise. Compared with a single-microphone system, which can only obtain the time-frequency characteristics of the signal, a multi-microphone speech enhancement system can also exploit the spatial information of the signal to eliminate background noise. Using the noise-reduction capability of the multi-microphone system improves the accuracy of controlling the mobile phone by blowing and thus achieves a better control effect.
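As an illustration of how the two channels of such an array might be combined, the following Python sketch shows delay-and-sum beamforming, one common multi-microphone enhancement technique; this application does not specify the algorithm, and the sample rate, microphone spacing, and wrap-around delay below are illustrative assumptions only.

```python
import numpy as np

SAMPLE_RATE = 16_000      # Hz, assumed
MIC_SPACING_M = 0.02      # distance between the two bottom microphones, assumed
SPEED_OF_SOUND = 343.0    # m/s

def delay_and_sum(left: np.ndarray, right: np.ndarray, steer_angle_rad: float = 0.0) -> np.ndarray:
    """Align the two channels for a source at steer_angle_rad and average them.

    Coherent sound from the steering direction (e.g. the user blowing at the
    bottom of the phone) adds up, while diffuse background noise stays
    uncorrelated, which is the spatial-domain advantage described above.
    """
    # Time difference of arrival for the steering direction, in whole samples.
    tdoa_s = MIC_SPACING_M * np.sin(steer_angle_rad) / SPEED_OF_SOUND
    shift = int(round(tdoa_s * SAMPLE_RATE))
    # np.roll is a wrap-around delay, used here only for brevity.
    right_aligned = np.roll(right, -shift) if shift >= 0 else right
    left_aligned = np.roll(left, shift) if shift < 0 else left
    return 0.5 * (left_aligned + right_aligned)
```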
Please refer to fig. 5, which illustrates a schematic diagram of a microphone sensor on a mobile terminal according to an embodiment of the present application. As shown in fig. 5, on the terminal bottom 501 of the terminal 500, there is at least one microphone sensor. There are two multi-microphone arrays in the terminal 500, a multi-microphone array 502 at the left end of the bottom of the terminal and a multi-microphone array 503 at the right end of the bottom of the terminal.
In a possible implementation manner, the specified operation is a blowing operation of the microphone sensor by a user corresponding to the user terminal.
In one possible implementation, the microphone sensor is an electret microphone.
Please refer to fig. 6, which illustrates a schematic diagram of an electret microphone according to an embodiment of the present application. As shown in fig. 6, the electret microphone includes a voltage source 600, which provides a bias voltage for the electret microphone; since the electret microphone contains a FET (field-effect transistor) for pre-amplification, it needs a certain bias voltage during normal operation. The electret microphone also includes a resistor 601, which prevents an excessive current signal generated by the electret microphone in response to sound pressure from damaging the circuit. The electret microphone further includes an electret film portion 602. When the electret film portion 602 receives a sound pressure signal, the electret film vibrates with the sound pressure signal, so that the capacitance formed by the electret and the intermediate medium changes with the vibration, which in turn changes the voltage across the electret film and produces a corresponding electrical signal.
Using the sound-capturing function of the mobile phone's microphone, when a person blows toward the microphone, the sound waves cause the electret film in the microphone to vibrate, so that the capacitance changes and a correspondingly varying small voltage is generated. The voltage is then converted into a voltage of 0-5 V, received by a data acquisition unit, and converted through A/D (analog-to-digital) conversion into a digital audio signal.
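The digital audio signal can then be reduced to a single strength value per frame. The sketch below is one assumed, minimal way of doing this (root-mean-square of 16-bit PCM samples); the application does not prescribe a specific measure.

```python
import numpy as np

def frame_strength(pcm_frame: np.ndarray) -> float:
    """Root-mean-square amplitude of one int16 audio frame, normalised to 0..1."""
    samples = pcm_frame.astype(np.float64) / 32768.0   # int16 full scale
    return float(np.sqrt(np.mean(samples ** 2)))

# Example: a loud noisy burst as a crude stand-in for blowing into the microphone.
frame = (np.random.randn(1024) * 12000).clip(-32768, 32767).astype(np.int16)
print(frame_strength(frame))   # roughly 0.3-0.4 for this synthetic frame
```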
In one possible implementation, in response to receiving the blowing operation to the microphone sensor, generating a first signal; based on the first signal, a selection region corresponding to the blowing operation is determined within the first selection control.
When the microphone sensor receives a sound pressure signal, the electret film inside the microphone vibrates, thereby generating a corresponding electrical signal. When the system detects a change in the electrical signal, it determines, according to the change of the electrical signal (i.e., the first signal), the selection area corresponding to the specified operation within the first selection control.
In one possible implementation, the first signal is an electrical signal generated by the microphone sensor receiving the blowing operation; the signal intensity of the first signal is positively correlated with a sound pressure signal corresponding to the blowing operation of the microphone sensor; in response to the signal strength of the first signal being greater than an intensity threshold, a selection region corresponding to the blow operation is determined within the first selection control.
Because the microphone sensor generates a corresponding electrical signal whenever it receives a sound pressure signal, it also generates a first signal when it receives external noise or non-specified sound pressure such as the user speaking. The first signal therefore needs to be screened to ensure that the first selection control is triggered to determine the selection area corresponding to the specified operation only when the user performs the specified operation (i.e., the blowing operation).
When the specified operation is a blowing operation aimed at the microphone sensor, the signal intensity of the first signal is positively correlated with the sound pressure signal received by the microphone sensor; the sound pressure produced by a blowing operation is relatively large, while that produced by noise or the user's speech is relatively small, so an intensity threshold can be set to distinguish them.
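A minimal sketch of this intensity-threshold screening follows; the threshold value is an assumption that would in practice be tuned per device, and the strength value is the normalised amplitude from the previous sketch.

```python
import time

BLOW_STRENGTH_THRESHOLD = 0.35   # normalised RMS, illustrative assumption

def detect_blow(strength: float):
    """Return the first time information (a timestamp) if the first signal is
    strong enough to count as a blowing operation, otherwise None."""
    if strength > BLOW_STRENGTH_THRESHOLD:
        return time.monotonic()   # when the blowing operation on the sensor was received
    return None
```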
In a possible implementation manner, before the virtual scene picture is displayed, the application program corresponding to the virtual scene picture acquires device information corresponding to the user terminal; and determining the position of a microphone sensor of the user terminal according to the equipment information, and displaying the position of the microphone sensor on a display interface corresponding to the user terminal.
The position of the microphone sensor may differ between different models of user terminal; for example, the microphone sensor may be on the left side or on the right side of the lower part of the terminal. When the position of the microphone sensor deviates from the position where the user performs the specified operation, the first signal generated when the user performs the blowing operation may fail to exceed the threshold. Refer to fig. 7, which illustrates a schematic diagram of a user performing a blowing operation according to an embodiment of the present application. As shown in fig. 7, the microphone sensor 701 of the terminal 700 is located in the left area 702 at the bottom of the terminal; when the user blows toward the right area 703 at the bottom of the terminal 700, the sound pressure received by the microphone sensor 701 may be small, that is, the first signal generated by the microphone sensor may be smaller than the threshold, so that the application program corresponding to the virtual scene picture cannot respond to the blowing operation performed by the user, i.e., cannot perform a selection operation on the first selection control.
Please refer to fig. 8, which shows a schematic diagram illustrating a microphone sensor position according to an embodiment of the present application. As shown in fig. 8, before the terminal 800 displays the virtual scene picture, a position display interface 801 corresponding to the microphone sensor may be displayed first, a model diagram 802 corresponding to the terminal 800 is displayed on the position display interface, and a microphone sensor position 803 corresponding to the terminal 800 is marked on the model diagram 802. In one possible implementation manner, in response to a user's designated operation on the position display interface, the position display interface is closed, and the virtual scene picture is displayed.
In one possible implementation manner, in response to the signal intensity of the first signal being greater than the intensity threshold, first time information corresponding to the blowing operation is acquired; the first time information is used for indicating the time when the blowing operation on the microphone sensor is received; and the selection area corresponding to the specified operation is determined based on the first time information.
The first selection control determines a selection area in the first selection control according to first time information (i.e., generation time) corresponding to the first signal.
In one possible implementation, the first selection control includes a region selection pointer; the region pointed to by the region selection pointer varies over time.
Referring to fig. 9, a schematic diagram of a first selection control according to an embodiment of the present application is shown. As shown in fig. 9, the first selection control 900 includes a region selection pointer 901 and at least two selection regions 902. In fig. 9, the at least two selection regions 902 include selection region 1, selection region 2, and selection region 3, and the region selection pointer 901 rotates over time around the pointer base 903 as its rotation axis, where a pattern representing the first operation corresponding to the first selection control may be displayed in the pointer base 903. For example, when the first operation corresponding to the first selection control is an acceleration operation, N2O (nitrous oxide, commonly used as an oxidizer for acceleration in racing) may be displayed in the pointer base 903 to prompt the user that the first virtual vehicle can be made to perform an acceleration operation through N2O according to the first selection control.
In one possible implementation manner, based on the first time information, determining a region pointed by the region selection pointer; and determining the region pointed by the region selection pointer as the selection region corresponding to the specified operation.
And after the terminal determines the first time information, determining the area pointed by the area selection pointer at the time corresponding to the first time information according to the first time information, and determining the area as the selection area selected by the user through the blowing operation.
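The mapping from the first time information to the selected area can be sketched as follows; the rotation period and the number of equal angular sectors are illustrative assumptions, not values given by this application.

```python
ROTATION_PERIOD_S = 2.0   # one full revolution of the region selection pointer, assumed
SECTORS = ["selection area 1", "selection area 2", "selection area 3"]   # equal sectors assumed

def sector_at(blow_time_s: float, pointer_start_time_s: float) -> str:
    """Return the selection area the pointer is over at the moment of the blow."""
    elapsed = (blow_time_s - pointer_start_time_s) % ROTATION_PERIOD_S
    fraction = elapsed / ROTATION_PERIOD_S          # position around the dial, 0..1
    return SECTORS[int(fraction * len(SECTORS))]
```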
In one possible implementation, in response to receiving a specified operation on the microphone sensor and the first selection control being in an available state, a selection region corresponding to the specified operation is determined within the first selection control.
When the first selection control is in an available state, the first selection control responds to the specified operation of the microphone sensor and selects an area corresponding to the specified operation; when the first selection control is in an unavailable state and a specified operation on the microphone sensor is received, the first selection control does not make a selection operation corresponding to the specified operation.
In one possible implementation, the microphone sensor is in a disabled state when the first selection control is in an unavailable state.
That is, the microphone sensor is turned on to receive the specified operation performed by the user only when the first selection control is in the available state; when the first selection control is in the unavailable state, the microphone sensor is not turned on, which reduces the resource consumption of the terminal.
In a possible implementation manner, acquiring a time parameter corresponding to the virtual scene picture; the time parameter is used for indicating the running time of the first virtual vehicle in the virtual scene picture; in response to a time parameter of the virtual scene screen satisfying a first time condition, determining the first selection control to be in an available state.
When the time parameter corresponding to the virtual scene picture satisfies the first time condition, the first selection control is determined to be in an available state. That is, when the first virtual vehicle is in a racing competition and the race time satisfies the first time condition, the first selection control is enabled, so that the user can select the selection area corresponding to the specified operation through the first selection control and control the first virtual vehicle to execute the first operation according to the selected selection area.
Please refer to fig. 10, which shows a schematic diagram of a match corresponding to a virtual scene picture according to an embodiment of the present application. As shown in fig. 10, the first virtual vehicle 1001 travels on a virtual road 1002 in the virtual scene picture, on which a racing rank display control 1003 is superimposed; the enlarged "4 DDD" represents the rank "4" corresponding to the first virtual vehicle "DDD". A small map display control 1004 is also superimposed on the virtual scene picture, showing a road diagram centered on the location of the first virtual vehicle. A time parameter control 1005 is further superimposed on the virtual scene picture and is used to determine the travel time (i.e., the time parameter) of the first virtual vehicle in the virtual scene picture. When the travel time satisfies a specified condition, for example, being greater than 30 s, the first selection control 1006 is determined to be in an available state; at this time, the first selection control may be displayed on the virtual scene picture, and the region selection pointer in the first selection control 1006 rotates as time passes.
In one possible implementation manner, acquiring virtual position information of the first virtual vehicle in the virtual scene picture; in response to the virtual location information satisfying a first location condition, the first selection control is determined to be in an available state.
And when the virtual position information of the first virtual vehicle in the virtual scene picture meets the first position condition, starting a first selection control so that a user can select a selection area corresponding to the specified operation through the first selection control, and controlling the first virtual vehicle to execute the first operation according to the selected selection area.
Please refer to fig. 11, which illustrates a schematic diagram of a match corresponding to a virtual scene picture according to an embodiment of the present application. As shown in fig. 11, the first virtual vehicle 1101 travels on a virtual road 1102 in the virtual scene picture, on which a racing rank display control 1103 is superimposed; the enlarged "4 DDD" represents the rank "4" corresponding to the first virtual vehicle "DDD". A small map display control 1104 is also superimposed on the virtual scene picture, showing a road diagram centered on the location of the first virtual vehicle. A time parameter control 1105 is also superimposed on the virtual scene picture and is used to determine the travel time of the first virtual vehicle. An area determination control 1106 is further superimposed on the virtual scene picture and is located on the virtual road 1102; in response to the first virtual vehicle passing through the area corresponding to the area determination control, that is, the virtual position information of the first virtual vehicle satisfying the first position condition, the first selection control is determined to be in an available state. At this time, the first selection control may be displayed on the virtual scene picture, and the region selection pointer in the first selection control 1107 rotates as time passes.
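The time condition and the position condition described above can be sketched as follows; the class layout, the 30 s threshold, and the rectangular trigger zone are illustrative assumptions, and the two conditions correspond to separate implementations that are combined here only for brevity.

```python
from dataclasses import dataclass

MIN_RUN_TIME_S = 30.0   # the "greater than 30 s" example of the first time condition

@dataclass
class TriggerZone:
    """Axis-aligned area on the virtual road covered by the area determination control."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def first_selection_control_available(run_time_s: float,
                                      vehicle_xy: tuple,
                                      zone: TriggerZone) -> bool:
    time_condition = run_time_s > MIN_RUN_TIME_S        # first time condition
    position_condition = zone.contains(*vehicle_xy)     # first position condition
    return time_condition or position_condition
```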
In one possible implementation, the first selection control is determined to be in an unavailable state in response to the first virtual vehicle performing the first operation.
When the first virtual vehicle executes the first operation, the first selection control is determined to be in an unavailable state, and the first virtual vehicle cannot repeatedly trigger the first selection control at the moment.
In one possible implementation, the first selection control is determined to be in an unavailable state for a specified time in response to the first virtual vehicle performing the first operation.
When the first virtual vehicle executes the first operation, the first selection control is determined to be in an unavailable state within a specified time, and at this time, the first virtual vehicle cannot repeatedly trigger the first selection control within the specified time, that is, a user cannot control the first virtual vehicle to repeatedly execute the first operation within the specified time.
In one possible implementation, the first selection control is determined to be available in response to the first virtual vehicle performing the first operation for a time period in which the first selection control is unavailable being greater than a specified time period.
When the first virtual vehicle executes the first operation and the first selection control is determined to be in the unavailable state within the specified time, and when the running time of the first virtual vehicle after executing the first operation is longer than the specified time, the first selection control is determined to be in the available state, and at this time, the user can control the first virtual vehicle to execute the first operation again through the specified operation on the microphone sensor.
In one possible implementation, the first selection control is determined to be in an unavailable state within a designated area in response to the first virtual vehicle performing the first operation.
That is, after the first virtual vehicle performs the first operation, the first selection control is determined to be in an unavailable state in the designated area, and at this time, the first virtual vehicle cannot repeatedly trigger the first selection control in the designated area, that is, the user cannot control the first virtual vehicle to repeatedly perform the first operation in the designated area.
In one possible implementation manner, the first selection control is determined to be in the available state in response to the first virtual vehicle driving through the designated area when the first selection control is in the unavailable state after the first virtual vehicle performs the first operation.
When the first virtual vehicle executes the first operation and the first selection control is determined to be in an unavailable state in the designated area, and when the travel distance of the first virtual vehicle after executing the first operation is greater than a threshold value (namely after the first virtual vehicle travels through the designated area), the first selection control is determined to be in an available state, and at the moment, the user can control the first virtual vehicle to execute the first operation again through the designated operation of the microphone sensor.
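The unavailable-state behaviour after the first operation can be sketched as a small state machine; the cool-down time, the length of the designated area, and the class layout are illustrative assumptions.

```python
COOLDOWN_S = 10.0            # specified time, assumed
COOLDOWN_DISTANCE_M = 500.0  # length of the designated area, assumed

class SelectionControlState:
    def __init__(self):
        self.available = True
        self._time_since_use_s = 0.0
        self._distance_since_use_m = 0.0

    def on_first_operation(self):
        """The control becomes unavailable once the first operation is performed."""
        self.available = False
        self._time_since_use_s = 0.0
        self._distance_since_use_m = 0.0

    def update(self, dt_s: float, distance_travelled_m: float):
        """Re-enable the control after the specified time or after the vehicle
        has driven through the designated area, whichever variant is used."""
        if self.available:
            return
        self._time_since_use_s += dt_s
        self._distance_since_use_m += distance_travelled_m
        if (self._time_since_use_s > COOLDOWN_S
                or self._distance_since_use_m > COOLDOWN_DISTANCE_M):
            self.available = True
```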
In a possible implementation manner, an energy bar display control is further superimposed on the virtual scene picture, and in response to that an energy bar displayed on the energy bar display control satisfies a specified condition, the first selection control is determined to be in an available state.
Referring to fig. 12, a schematic diagram of an energy bar control according to an embodiment of the present application is shown. An energy bar control 1202 is superimposed on the virtual scene screen corresponding to fig. 12, the energy bar control 1202 may implement an increase of an energy bar according to an energy storage operation corresponding to the first virtual vehicle, when the energy bar increases to a threshold specified by the energy bar control, the first selection control 1201 is determined to be in an available state, and at this time, a user may control the first virtual vehicle to perform the first operation through a specified operation on a microphone sensor. The energy storage operation can be a running operation of a first virtual vehicle, namely when the first virtual vehicle runs normally, the first virtual vehicle continuously triggers the energy storage operation to realize the increase of the energy bar; or the energy storage operation may be that the first virtual vehicle acquires a virtual article in the virtual scene screen, that is, a virtual article exists on a virtual road in the virtual scene screen, and when the first virtual vehicle touches the virtual article, the energy storage operation is triggered.
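A minimal sketch of the energy-bar variant follows; the capacity, the gain per second of normal driving, and the gain per virtual article are illustrative assumptions.

```python
ENERGY_FULL = 100.0   # threshold at which the first selection control unlocks, assumed

class EnergyBar:
    def __init__(self):
        self.value = 0.0

    def on_driving(self, dt_s: float, gain_per_s: float = 2.0):
        """Energy storage operation triggered continuously by normal driving."""
        self.value = min(ENERGY_FULL, self.value + gain_per_s * dt_s)

    def on_virtual_article(self, gain: float = 25.0):
        """Energy storage operation triggered by touching a virtual article on the road."""
        self.value = min(ENERGY_FULL, self.value + gain)

    def unlocks_first_selection_control(self) -> bool:
        return self.value >= ENERGY_FULL
```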
In step 403, the acceleration duration of the first virtual vehicle is determined based on the selection area corresponding to the specified operation.
Wherein the first operation is an acceleration operation of the first virtual vehicle. When the first operation is an acceleration operation of the first virtual vehicle, the selection area corresponding to the specified operation is used to select the acceleration duration of the first virtual vehicle.
Please refer to fig. 13, which illustrates a schematic diagram of selecting an acceleration duration according to an embodiment of the present application. As shown in fig. 13, the first selection control 1300 contains at least two selection areas; in fig. 13 these areas are an area 1, two areas 2, two areas 3, and two areas 4, where area 1 may be indicated by a green area, area 2 by a yellow area, area 3 by an orange area, and area 4 by a gray area (the colors are schematic and are not shown in the figure). When the user performs the blowing operation on the microphone sensor and the region selection pointer 1301 selects area 1, the acceleration duration of the acceleration operation of the first virtual vehicle is the longest; when the pointer selects area 2, the acceleration duration is slightly shorter than that corresponding to area 1; when the pointer selects area 3, the acceleration duration is slightly shorter than that corresponding to area 2; and when the pointer selects area 4, the acceleration duration is 0, or is the shortest.
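As a concrete illustration of how the moment of the blowing operation could be mapped to one of the regions in fig. 13 and then to an acceleration duration, consider the following sketch; the sweep period, the region boundaries and the duration values are all assumptions made for the example, not values from the embodiment.

REGION_DURATION_SECONDS = {1: 3.0, 2: 2.0, 3: 1.0, 4: 0.0}  # region 1 longest, region 4 none

def select_region(blow_time, sweep_period=2.0):
    """Return the region (1-4) under the region selection pointer at blow_time,
    assuming the pointer sweeps across the control once per sweep_period and
    that region 1 sits in the middle of the sweep."""
    phase = (blow_time % sweep_period) / sweep_period   # pointer position in [0, 1)
    offset = abs(phase - 0.5)                           # distance from the centre region
    if offset < 0.05:
        return 1          # green area: longest acceleration
    if offset < 0.15:
        return 2          # yellow area
    if offset < 0.25:
        return 3          # orange area
    return 4              # gray area: no (or shortest) acceleration

def acceleration_duration(blow_time):
    # Duration chosen from the region selected at the moment of the blow.
    return REGION_DURATION_SECONDS[select_region(blow_time)]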
In one possible implementation, the acceleration magnitude of the first virtual vehicle is determined based on the selection region corresponding to the specified operation.
That is, the selection region corresponding to the specified operation may determine at least one of the acceleration duration and the acceleration magnitude of the first virtual vehicle, so that the first virtual vehicle performs the acceleration operation according to the acceleration duration and the acceleration magnitude.
In step 404, the first virtual vehicle is controlled to execute the acceleration operation based on the acceleration duration of the first virtual vehicle.
In one possible implementation, the first virtual vehicle is controlled to perform the acceleration operation based on both the acceleration duration and the acceleration magnitude of the first virtual vehicle.
That is, when the first selection control corresponding to the first virtual vehicle is in the available state, the terminal may, in response to receiving the specified operation on the microphone sensor, select a selection area of the first selection control, determine the acceleration duration and acceleration magnitude of the first virtual vehicle according to the selected selection area, and control the first virtual vehicle to perform the acceleration operation according to that acceleration duration and acceleration magnitude.
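Putting the pieces above together, the terminal-side handling of a detected blowing operation could look roughly like the following glue code. It reuses the SelectionControl, REGION_DURATION_SECONDS and select_region sketches above and assumes a vehicle object exposing a traveled_distance attribute and an accelerate(duration, magnitude) method; none of these names come from the embodiments.

REGION_MAGNITUDE = {1: 1.0, 2: 0.7, 3: 0.4, 4: 0.0}   # assumed per-region acceleration magnitude

def on_blowing_operation(blow_time, control, vehicle):
    # Only react when the first selection control is in the available state.
    if not control.is_available(blow_time, vehicle.traveled_distance):
        return
    region = select_region(blow_time)
    duration = REGION_DURATION_SECONDS[region]
    magnitude = REGION_MAGNITUDE[region]
    if duration > 0:
        # Control the first virtual vehicle to perform the acceleration
        # operation with the selected duration and magnitude.
        vehicle.accelerate(duration, magnitude)
        # Performing the first operation puts the control back into the
        # unavailable state until its cooldown condition is met.
        control.mark_triggered(blow_time, vehicle.traveled_distance)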
In one possible implementation, the user terminal further includes a gyroscope sensor, and the first virtual vehicle is controlled to perform a steering operation in response to the gyroscope sensor detecting a rotation operation performed on the user terminal by the user.
In a racing game, the key functions of a virtual vehicle during a race are acceleration and steering. When the acceleration function is implemented through the microphone sensor and the steering function is implemented through the gyroscope, fewer operation controls need to be superimposed on the virtual scene picture, the user is less likely to mis-touch other operation controls on the virtual scene interface, and the accuracy of interaction between the user and the terminal is improved.
Please refer to fig. 14, which illustrates a schematic diagram of a terminal rotation operation according to an embodiment of the present application. As shown in fig. 14, portion 1401 of fig. 14 shows the normal posture in which the user holds the terminal. When the user tilts the terminal to the left, the leftward tilt posture of the terminal is shown as portion 1402 of fig. 14; the terminal detects its rotation through the gyroscope sensor and generates a corresponding rotation signal to control the first virtual vehicle to turn left. When the user tilts the terminal to the right, the rightward tilt posture of the terminal is shown as portion 1403 of fig. 14; the terminal again detects its rotation through the gyroscope sensor and generates a corresponding rotation signal to control the first virtual vehicle to turn right.
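A minimal sketch of how the gyroscope reading could be turned into a steering command follows; the dead zone, the maximum roll angle and the sign convention (negative for left, positive for right) are assumptions made for illustration.

def steering_from_roll(roll_degrees, dead_zone_deg=5.0, max_roll_deg=45.0):
    """Map the terminal's roll angle, as reported by the gyroscope sensor,
    to a steering value in [-1.0, 1.0]."""
    if abs(roll_degrees) < dead_zone_deg:
        return 0.0                      # terminal held in the normal posture: go straight
    clipped = max(-max_roll_deg, min(max_roll_deg, roll_degrees))
    return clipped / max_roll_deg       # -1.0 = full left turn, 1.0 = full right turn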
In one possible implementation, a direction operation control is further superimposed on the virtual scene picture, and in response to receiving a trigger operation of the user on the direction operation control, the first virtual vehicle is controlled to perform a steering operation according to the trigger operation.
Please refer to fig. 15, which illustrates a schematic diagram of a direction operation according to an embodiment of the present application. In the virtual scene picture shown in fig. 15, direction operation controls may also be present; they include a left direction control 1501 and a right direction control 1502, which are superimposed on the two sides of the virtual scene picture respectively, so that when the user holds the user terminal with both hands, the two direction controls can be used to steer the first virtual vehicle while the acceleration operation of the first virtual vehicle is triggered by blowing on the microphone sensor.
Please refer to fig. 16, which illustrates a schematic diagram of a direction operation according to an embodiment of the present application. In the virtual scene picture shown in fig. 16, a direction operation control may also be present; it is a combined left-right direction control 1601 located on one side of the virtual scene picture, and it may be superimposed on either the left side or the right side of the picture, so that when the user holds the user terminal with one hand, steering of the first virtual vehicle can be performed with that hand while the acceleration operation of the first virtual vehicle is triggered by a blowing operation on the microphone sensor.
In one possible implementation, since the user needs to control the first virtual vehicle to perform the first operation by blowing on the microphone sensor, the virtual scene picture should be displayed on the terminal in portrait orientation, so that the microphone sensor of the user terminal faces the user's mouth and the user can perform the blowing operation on the microphone sensor at any time.
With this virtual vehicle control mode, the player drives the racing car forward by blowing at appropriate moments and plays the game through a relaxed and entertaining blowing interaction, which frees the player's fingers and exercises the lungs; it is therefore a game interaction mode that is healthier and more inclusive of different groups of players. For people whose fingers are not flexible or are impaired, and for people in situations where operating with the fingers is inconvenient, control of the virtual vehicle can still be achieved with the solution shown in the embodiments of the present application.
To sum up, in the solution shown in the embodiment of the present application, a user performs a selection operation on a first selection control by performing a designated operation on a microphone sensor, and a terminal controls a first virtual vehicle to execute a first operation according to a selection area in the first selection control corresponding to the designated operation. Through the scheme, the user can control the first virtual vehicle through the microphone sensor, the control mode of the virtual vehicle is expanded, and therefore the control effect of the virtual vehicle is improved.
Referring to FIG. 17, a flowchart illustrating a first virtual vehicle acceleration method according to an exemplary embodiment is shown. The first virtual vehicle acceleration method is performed by a user terminal having a microphone sensor, the method comprising:
S1701, the terminal determines whether a sound pressure signal is detected. That is, within a specified time, the microphone sensor determines whether a sound pressure signal is detected by means of its sound pressure detection component (i.e., the electret membrane module). When no sound pressure signal is detected within the specified time, the first virtual vehicle does not perform an acceleration operation.
S1702, the microphone sensor receives the signal and converts the signal into an electrical signal. When the microphone sensor of the terminal detects a sound pressure signal, the sound pressure signal is converted into an electric signal and transmitted to the CPU.
S1703, the CPU receives the electrical signal and issues a program instruction. After receiving the electrical signal, the CPU evaluates its magnitude to determine whether the sound pressure signal received by the microphone sensor matches the characteristics of the sound pressure signal produced when a user performs a blowing operation on the microphone sensor. When the sound pressure signal indicated by the magnitude of the electrical signal matches the characteristics corresponding to the blowing operation, this indicates that the user has performed the blowing operation on the microphone sensor, and a program instruction corresponding to the blowing operation is issued.
S1704, the terminal determines the position of the pointer according to the time at which the electrical signal was generated. When the pointer falls in the invalid area, the first virtual vehicle does not perform an acceleration operation; when the pointer falls in the yellow area, the first virtual vehicle performs a small acceleration; when the pointer falls in the orange area, the first virtual vehicle performs a medium acceleration; when the pointer falls in the green area, the first virtual vehicle performs a strong acceleration.
S1705, the terminal determines whether the first virtual vehicle has reached the finish point. When the first virtual vehicle has reached the finish point, the game ends; when the first virtual vehicle has not reached the finish point, the steps shown in S1701 to S1704 are executed again.
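The decision chain of S1701 to S1704 above can be summarised in a self-contained sketch as follows; the signal-strength threshold, the sweep period and the phase-to-color boundaries are illustrative assumptions rather than values from the embodiments.

def acceleration_level(signal_strength, signal_time,
                       blow_threshold=0.6, sweep_period=2.0):
    """Return the acceleration level for one electrical signal produced by
    the microphone sensor (steps S1701 to S1704)."""
    # S1701-S1703: only a signal whose magnitude matches the characteristics
    # of a blowing operation leads to an acceleration instruction.
    if signal_strength <= blow_threshold:
        return "none"
    # S1704: the pointer position is judged from the time the electrical
    # signal was generated, relative to the pointer's sweep period.
    phase = (signal_time % sweep_period) / sweep_period
    if phase < 0.25:
        return "none"      # invalid area
    if phase < 0.50:
        return "small"     # yellow area
    if phase < 0.75:
        return "medium"    # orange area
    return "strong"        # green area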
FIG. 18 is a block diagram illustrating a virtual vehicle control apparatus according to an exemplary embodiment of the present application. The virtual vehicle control apparatus may be applied to a computer device, which may be a user terminal having a microphone sensor, where the user terminal may be the terminal shown in fig. 1. As shown in fig. 18, the virtual vehicle control apparatus includes:
a virtual scene picture display module 1801 for displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is superposed on the virtual scene picture; at least two selection areas are contained in the first selection control;
a selection area determination module 1802 configured to, in response to receiving a specified operation on the microphone sensor, determine a selection area corresponding to the specified operation within the first selection control;
a virtual vehicle control module 1803, configured to control the first virtual vehicle to execute the first operation based on the selection area corresponding to the specified operation.
In a possible implementation manner, the specified operation is a blowing operation of the microphone sensor by a user corresponding to the user terminal.
In one possible implementation, the selection area determining module 1802 includes:
a first signal generation submodule for generating a first signal in response to receiving the blowing operation on the microphone sensor;
a selection region determination sub-module to determine a selection region within the first selection control corresponding to the blowing operation based on the first signal.
In one possible implementation, the first signal is an electrical signal generated by the microphone sensor receiving the blowing operation; the signal intensity of the first signal is in positive correlation with a sound pressure signal corresponding to the blowing operation received by the microphone sensor;
the selection area determination submodule is further configured to:
in response to the signal strength of the first signal being greater than an intensity threshold, determining a selection region within the first selection control that corresponds to the blow operation.
In one possible implementation manner, the selection area determining sub-module includes:
the first time information determining unit is used for responding to the condition that the signal intensity of the first signal is greater than the intensity threshold value, and acquiring first time information corresponding to the air blowing operation; the first time information is used for indicating the time when the blowing operation on the microphone sensor is received;
and the selection area determining unit is used for determining a selection area corresponding to the blowing operation based on the first time information.
In one possible implementation, the first selection control contains a region selection pointer; the area pointed to by the area selection pointer varies over time.
In a possible implementation manner, the selection area determining module 1802 further includes:
a pointing region determination submodule configured to determine a region pointed to by the region selection pointer based on the first time information;
and the pointer area determining submodule is used for determining the area pointed by the area selection pointer as the selection area corresponding to the specified operation.
In one possible implementation, the first operation is an acceleration operation of the first virtual vehicle;
the virtual vehicle control module 1803 includes:
an acceleration duration determination submodule, configured to determine an acceleration duration of the first virtual vehicle based on a selection region corresponding to the specified operation;
and the acceleration operation execution submodule is used for controlling the first virtual vehicle to execute the acceleration operation based on the acceleration duration of the first virtual vehicle.
In one possible implementation, the selection area determination module 1802 is configured to,
in response to receiving a specified operation on the microphone sensor and the first selection control being in an available state, a selection region corresponding to the specified operation is determined within the first selection control.
In one possible implementation, the apparatus further includes:
the time parameter acquisition module is used for acquiring the time parameter corresponding to the virtual scene picture; the time parameter is used for indicating the running time of the first virtual vehicle in a virtual scene picture;
a first state determination module to determine the first selection control to be in an available state in response to a time parameter of the virtual scene screen satisfying a first time condition.
In one possible implementation, the apparatus further includes:
the virtual position acquisition module is used for acquiring virtual position information of the first virtual vehicle in the virtual scene picture;
a second state determination module to determine the first selection control to be in an available state in response to the virtual location information satisfying a first location condition.
In one possible implementation, the apparatus further includes:
a third state determination module to determine the first selection control to be in an unavailable state in response to the first virtual vehicle performing the first operation.
To sum up, in the solution shown in the embodiment of the present application, a user performs a selection operation on a first selection control by performing a designated operation on a microphone sensor, and a terminal controls a first virtual vehicle to execute a first operation according to a selection area in the first selection control corresponding to the designated operation. Through the scheme, the user can control the first virtual vehicle through the microphone sensor, the control mode of the virtual vehicle is expanded, and therefore the control effect of the virtual vehicle is improved.
FIG. 19 is a block diagram illustrating the architecture of a computer device 1900 according to an example embodiment. The computer device may be implemented as a server in the above-mentioned aspects of the present application.
The computer device 1900 includes a Central Processing Unit (CPU) 1901, a system Memory 1904 including a Random Access Memory (RAM) 1902 and a Read-Only Memory (ROM) 1903, and a system bus 1905 connecting the system Memory 1904 and the CPU 1901. The computer device 1900 also includes a basic Input/Output system (I/O system) 1906 for facilitating information transfer between devices within the computer, and a mass storage device 1907 for storing an operating system 1913, application programs 1914, and other program modules 1915.
The basic input/output system 1906 includes a display 1908 for displaying information and an input device 1909, such as a mouse, keyboard, etc., for user input of information. Wherein the display 1908 and input device 1909 are coupled to the central processing unit 1901 through an input-output controller 1910 coupled to the system bus 1905. The basic input/output system 1906 may also include an input/output controller 1910 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1910 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1907 is connected to the central processing unit 1901 through a mass storage controller (not shown) connected to the system bus 1905. The mass storage device 1907 and its associated computer-readable media provide non-volatile storage for the computer device 1900. That is, the mass storage device 1907 may include a computer-readable medium (not shown) such as a hard disk or Compact Disc-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1904 and mass storage device 1907 described above may be collectively referred to as memory.
According to various embodiments of the present application, the computer device 1900 may also operate through a remote computer connected to a network such as the Internet. That is, the computer device 1900 may connect to the network 1912 through the network interface unit 1911 connected to the system bus 1905, or may connect to other types of networks or remote computer systems (not shown) using the network interface unit 1911.
The memory further includes at least one instruction, at least one program, code set, or instruction set, which is stored in the memory, and the central processor 1901 implements all or part of the steps in the flow chart of the virtual vehicle control method shown in the above embodiments by executing the at least one instruction, at least one program, code set, or instruction set.
Fig. 20 is a block diagram illustrating the structure of a computer device 2000, according to an example embodiment. The computer device 2000 may be a terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The computer device 2000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, the computer device 2000 includes: a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 2001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor, the main processor being a processor for Processing data in an awake state, also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 2001 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 2002 may include one or more computer-readable storage media, which may be non-transitory. The memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement the virtual vehicle control methods provided by method embodiments herein.
In some embodiments, the computer device 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002 and peripheral interface 2003 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2004, a display 2005, a camera assembly 2006, an audio circuit 2007, a positioning assembly 2008, and a power supply 2009.
The peripheral interface 2003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2001 and the memory 2002. In some embodiments, the processor 2001, memory 2002 and peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2001, the memory 2002, and the peripheral interface 2003 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The Radio Frequency circuit 2004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2004 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 2004 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 2004 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 2005 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2005 is a touch display screen, the display screen 2005 also has the ability to capture touch signals on or over the surface of the display screen 2005. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the display 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 2005 may be one, providing the front panel of the computer device 2000; in other embodiments, the display screens 2005 can be at least two, each disposed on a different surface of the computer device 2000 or in a folded design; in still other embodiments, the display 2005 may be a flexible display disposed on a curved surface or on a folded surface of the computer device 2000. Even more, the display screen 2005 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 2005 can be made of a material such as an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like.
Camera assembly 2006 is used to capture images or video. Optionally, camera assembly 2006 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing or inputting the electric signals to the radio frequency circuit 2004 so as to realize voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and located at different locations on the computer device 2000. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 2007 may also include a headphone jack.
The positioning component 2008 is configured to determine the current geographic location of the computer device 2000 to implement navigation or LBS (Location Based Service). The positioning component 2008 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 2009 is used to power the various components of the computer device 2000. The power supply 2009 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 2009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 2000 also includes one or more sensors 2010. The one or more sensors 2010 include, but are not limited to: acceleration sensor 2011, gyro sensor 2012, pressure sensor 2013, fingerprint sensor 2014, optical sensor 2015, and proximity sensor 2016.
The acceleration sensor 2011 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the computer apparatus 2000. For example, the acceleration sensor 2011 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2001 may control the display screen 2005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2011. The acceleration sensor 2011 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 2012 can detect the body direction and the rotation angle of the computer device 2000, and the gyro sensor 2012 cooperates with the acceleration sensor 2011 to acquire the 3D motion of the user on the computer device 2000. The processor 2001 may implement the following functions according to the data collected by the gyro sensor 2012: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 2013 may be disposed on the side bezel of the computer device 2000 and/or underlying the display screen 2005. When the pressure sensor 2013 is disposed on the side frame of the computer device 2000, the holding signal of the user to the computer device 2000 can be detected, and the processor 2001 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 2013. When the pressure sensor 2013 is disposed at the lower layer of the display screen 2005, the processor 2001 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 2005. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 2014 is used for collecting fingerprints of the user, and the processor 2001 identifies the identity of the user according to the fingerprints collected by the fingerprint sensor 2014, or the fingerprint sensor 2014 identifies the identity of the user according to the collected fingerprints. Upon identifying that the user's identity is a trusted identity, the processor 2001 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 2014 may be disposed on a front, back, or side of the computer device 2000. When a physical key or vendor Logo is provided on the computer device 2000, the fingerprint sensor 2014 may be integrated with the physical key or vendor Logo.
The optical sensor 2015 is used to collect ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the display screen 2005 according to the ambient light intensity collected by the optical sensor 2015. Specifically, when the ambient light intensity is high, the display luminance of the display screen 2005 is increased; when the ambient light intensity is low, the display luminance of the display screen 2005 is adjusted down. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 according to the ambient light intensity collected by the optical sensor 2015.
The proximity sensor 2016, also known as a distance sensor, is typically disposed on the front panel of the computer device 2000. The proximity sensor 2016 is used to capture the distance between the user and the front of the computer device 2000. In one embodiment, when the proximity sensor 2016 detects that the distance between the user and the front of the computer device 2000 is gradually decreasing, the processor 2001 controls the display screen 2005 to switch from the screen-on state to the screen-off state; when the proximity sensor 2016 detects that the distance between the user and the front of the computer device 2000 is gradually increasing, the processor 2001 controls the display screen 2005 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in FIG. 20 is not intended to be limiting of the computer device 2000 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method illustrated in the corresponding embodiment of fig. 3 or 4 is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions, which are stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes all or part of the steps of the method shown in the corresponding embodiment of fig. 3 or fig. 4.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A virtual vehicle control method for a user terminal having a microphone sensor, the method comprising:
displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is superposed on the virtual scene picture; at least two selection areas are contained in the first selection control;
in response to receiving a specified operation on the microphone sensor, determining a selection region within the first selection control that corresponds to the specified operation;
and controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
2. The method according to claim 1, wherein the specified operation is a blowing operation of the microphone sensor by a user corresponding to the user terminal.
3. The method of claim 2, wherein, in response to receiving a specified operation on the microphone sensor, determining a selection region within the first selection control that corresponds to the specified operation comprises:
generating a first signal in response to receiving the blowing operation on the microphone sensor;
based on the first signal, a selection region corresponding to the blow operation is determined within the first selection control.
4. The method of claim 3, wherein the first signal is an electrical signal generated by the microphone sensor receiving the blowing operation; the signal intensity of the first signal is in positive correlation with a sound pressure signal corresponding to the blowing operation received by the microphone sensor;
the determining, within the first selection control, a selection region corresponding to the blowing operation based on the first signal includes:
in response to the signal strength of the first signal being greater than an intensity threshold, determining a selection region within the first selection control that corresponds to the blow operation.
5. The method of claim 4, wherein determining a selection region within the first selection control corresponding to the blowing operation in response to the signal strength of the first signal being greater than the strength threshold comprises:
responding to the fact that the signal intensity of the first signal is larger than the intensity threshold value, and acquiring first time information corresponding to the air blowing operation; the first time information is used for indicating the time when the blowing operation on the microphone sensor is received;
and determining a selection area corresponding to the blowing operation based on the first time information.
6. The method of any of claims 1-5, wherein the first selection control comprises a region selection pointer; the area pointed to by the area selection pointer varies over time.
7. The method according to claim 6, wherein the determining the selection area corresponding to the designated operation based on the first time information comprises:
determining a region pointed by the region selection pointer based on the first time information;
and determining the region pointed by the region selection pointer as a selection region corresponding to the specified operation.
8. The method according to any one of claims 1 to 7, characterized in that the first operation is an acceleration operation of the first virtual vehicle;
the control of the first virtual vehicle to execute a first operation based on the selection area corresponding to the designated operation comprises the following steps:
determining an acceleration duration of the first virtual vehicle based on a selection area corresponding to the designated operation;
controlling the first virtual vehicle to perform the acceleration operation based on the acceleration duration of the first virtual vehicle.
9. The method of claim 1, wherein, in response to receiving a specified operation on the microphone sensor, determining a selection region within the first selection control that corresponds to the specified operation comprises:
in response to receiving a specified operation on the microphone sensor and the first selection control being in an available state, a selection region corresponding to the specified operation is determined within the first selection control.
10. The method of claim 9, wherein, in response to receiving a specified operation on the microphone sensor, prior to determining a selection region within the first selection control that corresponds to the specified operation, further comprising:
acquiring a time parameter corresponding to the virtual scene picture; the time parameter is used for indicating the running time of the first virtual vehicle in a virtual scene picture;
determining the first selection control to be in an available state in response to a time parameter of the virtual scene screen satisfying a first time condition.
11. The method of claim 9, wherein, in response to receiving a specified operation on the microphone sensor, prior to determining a selection region within the first selection control that corresponds to the specified operation, further comprising:
acquiring virtual position information of the first virtual vehicle in the virtual scene picture;
determining the first selection control to be in an available state in response to the virtual location information satisfying a first location condition.
12. The method according to any one of claims 9 to 11, further comprising:
determining the first selection control to be in an unavailable state in response to the first virtual vehicle performing the first operation.
13. A virtual vehicle control apparatus, for a user terminal having a microphone sensor, the apparatus comprising:
the virtual scene picture display module is used for displaying a virtual scene picture; the virtual scene picture comprises a first virtual vehicle; a first selection control is superposed on the virtual scene picture; at least two selection areas are contained in the first selection control;
a selection area determination module, configured to determine, in response to receiving a specified operation on the microphone sensor, a selection area corresponding to the specified operation within the first selection control;
and the virtual vehicle control module is used for controlling the first virtual vehicle to execute a first operation based on the selection area corresponding to the specified operation.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the virtual vehicle control method of any of claims 1 to 12.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by a processor to implement the virtual vehicle control method of any of claims 1 to 12.
CN202110090249.6A 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium Active CN112717409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110090249.6A CN112717409B (en) 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110090249.6A CN112717409B (en) 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112717409A true CN112717409A (en) 2021-04-30
CN112717409B CN112717409B (en) 2023-06-20

Family

ID=75595213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110090249.6A Active CN112717409B (en) 2021-01-22 2021-01-22 Virtual vehicle control method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112717409B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363337A (en) * 2023-04-04 2023-06-30 如你所视(北京)科技有限公司 Model tour method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160059128A1 (en) * 2014-08-28 2016-03-03 Nintendo Co., Ltd. Information processing terminal, non-transitory storage medium encoded with computer readable information processing program, information processing terminal system, and information processing method
CN107329725A (en) * 2016-04-28 2017-11-07 上海连尚网络科技有限公司 Method and apparatus for controlling many people's interactive applications
CN108499106A (en) * 2018-04-10 2018-09-07 网易(杭州)网络有限公司 The treating method and apparatus of race games prompt message
CN109847348A (en) * 2018-12-27 2019-06-07 努比亚技术有限公司 A kind of control method and mobile terminal, storage medium of operation interface

Also Published As

Publication number Publication date
CN112717409B (en) 2023-06-20

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40042617

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant