CN112090067A - Virtual vehicle control method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112090067A
CN112090067A
Authority
CN
China
Prior art keywords
virtual
virtual vehicle
vehicle
controlling
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011010631.3A
Other languages
Chinese (zh)
Other versions
CN112090067B (en)
Inventor
张亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011010631.3A priority Critical patent/CN112090067B/en
Publication of CN112090067A publication Critical patent/CN112090067A/en
Application granted granted Critical
Publication of CN112090067B publication Critical patent/CN112090067B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/803 Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a virtual vehicle control method, apparatus, device, and computer-readable storage medium. The method includes: presenting, in a picture of a virtual scene, a virtual vehicle in a first form and a virtual object riding the virtual vehicle, wherein the virtual object and the virtual vehicle are in a first environment area adapted to the first form; in response to a form transformation instruction for the virtual vehicle, controlling the virtual vehicle to transform from the first form to a second form, and displaying a picture of the virtual object riding the virtual vehicle in the second form in a second environment area, wherein the second environment area is adapted to the second form and is different from the first environment area. With the method and apparatus, the form of the virtual vehicle can be switched in an efficient, low-resource-consumption manner.

Description

Virtual vehicle control method, device, equipment and computer readable storage medium
Technical Field
The present disclosure relates to computer technologies, and in particular, to a method, an apparatus, a device and a computer-readable storage medium for controlling a virtual vehicle.
Background
Display technology based on graphics processing hardware has expanded the channels for perceiving the environment and acquiring information. In particular, display technology for virtual scenes can realize diversified interactions between virtual objects controlled by users or by artificial intelligence according to actual application requirements, and has a variety of typical application scenarios; for example, in virtual scenes such as military exercise simulations and games, a real combat process between virtual objects can be simulated.
When a virtual object needs to travel from its current location to another, distant location in the virtual scene, the user can control the virtual object to ride a virtual vehicle, which carries it to the destination. However, virtual vehicles in the related art usually have a single form; to adapt to the complex environments of a virtual scene, the virtual object must be controlled to ride different virtual vehicles, which complicates the interaction process and additionally consumes the computing resources of the computer device.
Disclosure of Invention
Embodiments of the present application provide a virtual vehicle control method, apparatus, device, and computer-readable storage medium, which can switch the form of a virtual vehicle in an efficient, low-resource-consumption manner.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides a method for controlling a virtual vehicle, including:
presenting, in a picture of a virtual scene, a virtual vehicle in a first form and a virtual object riding the virtual vehicle, wherein the virtual object and the virtual vehicle are in a first environment area adapted to the first form;
in response to a form transformation instruction for the virtual vehicle, controlling the virtual vehicle to transform from the first form to a second form, and
displaying a picture of the virtual object riding the virtual vehicle in the second form in a second environment area;
wherein the second environment area is adapted to the second form and is different from the first environment area.
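The core flow above can be sketched as a small form-to-area state machine. The form and area names below (land_form, flight_form, and so on) are illustrative placeholders, not names from the patent:

```python
from dataclasses import dataclass

# Hypothetical form and area names; the patent does not fix any.
FORM_TO_AREA = {
    "land_form": "ground_area",  # first form, adapted to the first area
    "flight_form": "air_area",   # second form, adapted to the second area
}

@dataclass
class VirtualVehicle:
    form: str

    def transform(self, target_form: str) -> str:
        """Switch to target_form and return the environment area adapted to it."""
        if target_form not in FORM_TO_AREA:
            raise ValueError(f"unknown form: {target_form}")
        self.form = target_form
        return FORM_TO_AREA[target_form]

vehicle = VirtualVehicle(form="land_form")   # presented in the first form
new_area = vehicle.transform("flight_form")  # form transformation instruction
```

A separate rendering layer would then display the virtual object riding the transformed vehicle in `new_area`.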
An embodiment of the present application provides a control device for a virtual vehicle, including:
a presentation module, configured to present, in a picture of a virtual scene, a virtual vehicle in a first form and a virtual object riding the virtual vehicle, wherein the virtual object and the virtual vehicle are in a first environment area adapted to the first form;
a transformation module, configured to control, in response to a form transformation instruction for the virtual vehicle, the virtual vehicle to transform from the first form to a second form; and
a display module, configured to display a picture of the virtual object riding the virtual vehicle in the second form in a second environment area;
wherein the second environment area is adapted to the second form and is different from the first environment area.
In the above solution, the presentation module is further configured to present the virtual object in an independent state in the picture of the virtual scene;
in response to a summon instruction for the virtual vehicle, present the virtual vehicle in the first form; and
control the virtual object to ride the virtual vehicle.
In the foregoing solution, the display module is further configured to receive a control instruction for the virtual vehicle in the second form;
and, in response to the control instruction, control the virtual object to move from the first environment area to the second environment area riding the virtual vehicle in the second form.
In the above aspect, the transformation module is further configured to, in the process of transforming the virtual vehicle from the first form to the second form, control the virtual object to move from the first environment area to the second environment area riding the virtual vehicle.
In the above scheme, the transformation module is further configured to receive a trigger operation on a key for transforming the form of the virtual vehicle;
and, in response to the trigger operation, trigger the form transformation instruction for the virtual vehicle.
In the above solution, the transformation module is further configured to, in response to a form transformation instruction for the virtual vehicle, control the virtual vehicle to transform from the second form to a third form,
and display a picture of the virtual object riding the virtual vehicle in the third form in a third environment area.
In the above scheme, the transformation module is further configured to present a form transformation icon in the interface of the virtual scene;
present at least two form options of the virtual vehicle in response to a trigger operation on the form transformation icon;
and, in response to a trigger operation on a target form option among the at least two form options, use the form corresponding to the target form option as the second form of the virtual vehicle.
In the above scheme, the transformation module is further configured to obtain a first position of the virtual vehicle in the first environment area when the form transformation instruction is received;
determine the idle area in the second environment area that is closest to the first position, the idle area being an area without obstacles;
and control the virtual object to move to the idle area riding the virtual vehicle in the second form.
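The idle-area selection described above amounts to a nearest-free-area search. A minimal sketch, assuming a simplified list of candidate areas with obstacle flags (the engine's actual spatial representation is not specified in the patent):

```python
import math

def nearest_idle_area(first_position, areas):
    """Pick the obstacle-free area in the second environment region closest
    to the vehicle's position when the transform instruction is received.

    `areas` is a list of (center, has_obstacle) tuples, a simplified
    stand-in for whatever spatial data the engine actually keeps."""
    idle = [center for center, has_obstacle in areas if not has_obstacle]
    if not idle:
        return None  # no valid destination in the second environment area
    return min(idle, key=lambda center: math.dist(first_position, center))

target = nearest_idle_area(
    (0.0, 0.0),
    [((1.0, 1.0), False), ((0.5, 0.0), True), ((3.0, 3.0), False)],
)
# The blocked area at (0.5, 0.0) is skipped even though it is nearest.
```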
In the above scheme, the apparatus further comprises:
a moving module, configured to receive a movement instruction for the virtual vehicle in the second form;
and, in response to the movement instruction, control the virtual vehicle in the second form to move in the second environment area at a movement speed matched to the second form.
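One simple way to realize a movement speed matched to the form is a per-form speed table. The form names and speed values below are illustrative placeholders; the patent does not fix any:

```python
# Hypothetical per-form speeds; actual values are a game-design choice.
FORM_SPEED = {
    "land_form": 6.0,     # units per second on the ground
    "flight_form": 12.0,  # faster movement in the aerial area
}

def move(position, direction, form, dt):
    """Advance the vehicle along `direction` (a unit vector) for `dt`
    seconds at the speed matched to its current form."""
    speed = FORM_SPEED[form]
    return (position[0] + direction[0] * speed * dt,
            position[1] + direction[1] * speed * dt)

pos = move((0.0, 0.0), (1.0, 0.0), "flight_form", dt=0.5)
```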
In the above scheme, the apparatus further comprises:
a moving module, configured to receive a movement instruction for the virtual vehicle in the second form;
and, when the second environment area is an aerial area of the virtual scene, in response to the movement instruction, control the virtual vehicle in the second form to fly in the aerial area at a target height above the ground so as to cross at least some objects on the ground.
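Flying at a fixed target height above the ground means the vehicle clears every ground object lower than that height. A minimal check, with hypothetical obstacle heights:

```python
def can_cross(obstacle_heights, target_height):
    """For each ground object, report whether a vehicle flying at
    `target_height` above the ground passes over it."""
    return [height < target_height for height in obstacle_heights]

# Three ground objects of heights 3, 8, and 5; the vehicle flies at 6.
results = can_cross([3.0, 8.0, 5.0], target_height=6.0)
```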
In the foregoing solution, the transformation module is further configured to, when a target object exists in the first environment area, in response to a separation instruction for the virtual object and the virtual vehicle in the first form, control the virtual object to separate from the virtual vehicle in the first form,
and control the virtual vehicle in the first form to release an attack skill toward the target object, wherein the target object and the virtual object are in a hostile relationship.
In the above solution, the transformation module is further configured to control the virtual vehicle in the first form to move toward the virtual object;
and, when the distance between the virtual vehicle in the first form and the virtual object meets a distance condition, control the virtual object to ride the virtual vehicle in the first form.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
a processor, configured to implement the virtual vehicle control method provided in the embodiments of the present application when executing the executable instructions stored in the memory.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the virtual vehicle control method provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
By controlling the virtual vehicle to transform from the first form to the second form in response to a form transformation instruction for the virtual vehicle, and displaying a picture of the virtual object riding the virtual vehicle in the second form in the second environment area, form transformation of the virtual vehicle is realized, so that one virtual vehicle can adapt to different environments. Compared with controlling the virtual object to search for and ride different virtual vehicles, this simplifies the interaction process and thereby reduces the consumption of computing resources.
Drawings
Fig. 1 is a schematic diagram of a virtual object transformation process provided in the related art;
fig. 2 is a schematic view of an optional implementation scenario of the virtual vehicle control method provided in the embodiment of the present application;
fig. 3 is an alternative structural diagram of an electronic device 500 provided in the embodiments of the present application;
fig. 4 is an alternative flowchart of a control method for a virtual vehicle according to an embodiment of the present disclosure;
FIG. 5 is an alternative interface schematic diagram of a virtual vehicle summoning process provided by an embodiment of the present application;
fig. 6 is an alternative schematic diagram of a configuration transformation process of a virtual vehicle according to an embodiment of the present application;
fig. 7 is an alternative schematic diagram of a configuration transformation process of a virtual vehicle according to an embodiment of the present application;
fig. 8 is an alternative schematic diagram of a configuration transformation process of a virtual vehicle according to an embodiment of the present application;
fig. 9 is an alternative schematic diagram of a configuration transformation process of a virtual vehicle according to an embodiment of the present application;
fig. 10 is an alternative interface schematic diagram of the movement of the virtual vehicle provided by the embodiment of the present application;
fig. 11 is an alternative interface diagram of a virtual object and virtual vehicle separation process according to an embodiment of the present disclosure;
fig. 12 is an alternative flowchart of a control method for a virtual vehicle according to an embodiment of the present application;
fig. 13 is an alternative flowchart of a control method for a virtual vehicle according to an embodiment of the present application;
fig. 14 is an alternative schematic diagram of a configuration transformation process of a virtual vehicle according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a control device of a virtual vehicle according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not denote a particular order. It is to be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running on the terminal to provide various services, such as a video playing client or a game client.
2) "In response to": indicates the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more operations performed may occur in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on the terminal. The virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, ocean, etc., the land may include environmental elements such as deserts, cities, etc., and a user may control a virtual object to move in the virtual scene.
4) Virtual object: the image of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, or the like, such as a character, an animal, a plant, an oil drum, a wall, or a stone displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. The virtual scene may include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies part of the space in the virtual scene.
Optionally, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) trained for virtual-scene battle, or a non-player character (NPC) set up for virtual-scene interaction. Optionally, the virtual object may be a virtual character engaged in adversarial interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be determined dynamically according to the number of clients participating in the interaction.
5) Scene data: represents the various features that objects in the virtual scene exhibit during interaction, and may include, for example, the positions of the objects in the virtual scene. Different types of features may be included depending on the type of virtual scene; for example, in a game's virtual scene, scene data may include the waiting time required for various functions provided in the scene (depending on how many times the same function can be used within a certain period), and attribute values representing various states of a game character, such as a life value (also referred to as the red amount) and a magic value (also referred to as the blue amount).
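The scene-data attributes mentioned above (position, life value, magic value) can be grouped into a simple record; the field names below are illustrative, not names from the patent:

```python
from dataclasses import dataclass

@dataclass
class CharacterState:
    """Simplified scene-data record for one game character (illustrative)."""
    position: tuple   # location of the object in the virtual scene
    life_value: int   # the "red amount"
    magic_value: int  # the "blue amount"

state = CharacterState(position=(10.0, 2.0), life_value=100, magic_value=50)
```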
6) Virtual vehicle: a vehicle used to carry a virtual object in the virtual scene, which may be a car, an airplane, an animal, or the like.
In the related art, a user can control a virtual object to transform into a virtual vehicle for virtual objects controlled by other users to ride. For example, fig. 1 is a schematic diagram of a virtual object transformation process in the related art; referring to fig. 1, a user can control a virtual object of the goby family to transform into a white tiger mount, which other virtual objects then ride.
In implementing the embodiments of the present application, it was found that in the related art, the transformer controls the transformation of the virtual object's form (including transformation from virtual object to virtual vehicle and from virtual vehicle back to virtual object) and the rider cannot control the transformation. Moreover, in that method the virtual vehicle has only one form, can only be used on land, and cannot be applied to different environment areas (such as aerial areas and underwater areas).
Based on this, embodiments of the present application provide a method, an apparatus, a device and a computer-readable storage medium for controlling a virtual vehicle, so as to solve at least the above problems in the related art, which are described below respectively.
Referring to fig. 2, fig. 2 is a schematic diagram of an optional implementation scenario of the virtual vehicle control method provided in this embodiment. To support an exemplary application, terminals (terminals 400-1 and 400-2 are shown as examples) are connected to the server 200 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless links to implement data transmission.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited.
In actual implementation, the terminal is installed and runs with an application program supporting a virtual scene. The application program may be any one of a Massively Multiplayer Online Role Playing Game (MMORPG), a First-Person shooter game (FPS), a third-Person shooter game, a Multiplayer Online tactical sports game (MOBA), a virtual reality application program, a three-dimensional map program, a military simulation program, or a Multiplayer gunfight type survival game. The user uses the terminal to operate the virtual object located in the virtual scene to perform activities, including but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an animated character.
In an exemplary scenario, the virtual object (first virtual object) controlled by the terminal 400-1 and the virtual object (second virtual object) controlled by the other terminal 400-2 are in the same virtual scene, and the first virtual object can interact with the second virtual object in the virtual scene. In some embodiments, the first virtual object and the second virtual object may be in a hostile relationship, for example belonging to different teams and organizations; virtual objects in a hostile relationship can interact antagonistically on land, for example by shooting at each other.
In an exemplary scenario, when the terminal 400-1 controls a first virtual object, a picture of the virtual scene is presented on the terminal; in the picture, a virtual vehicle in a first form and a virtual object riding the virtual vehicle are presented, where the virtual object and the virtual vehicle are in a first environment area adapted to the first form. In response to a form transformation instruction for the virtual vehicle, the virtual vehicle is controlled to transform from the first form to a second form, and a picture of the virtual object riding the virtual vehicle in the second form in a second environment area is displayed, where the second environment area is adapted to the second form and is different from the first environment area.
In actual implementation, the server 200 calculates scene data in the virtual scene and sends it to the terminal; the terminal relies on graphics computing hardware to complete the loading, parsing and rendering of the display data, and relies on graphics output hardware to output the virtual scene and form visual perception. For example, a two-dimensional video frame may be presented on the display screen of a smartphone, or a video frame realizing a three-dimensional display effect may be projected onto the lenses of augmented reality/virtual reality glasses. For other forms of perception of the virtual scene, it is understood that corresponding hardware outputs of the terminal may be used, for example a microphone output to form auditory perception, or a vibrator output to form tactile perception.
The terminal runs a client (e.g., a web-based game application) and interacts in games with other users through the server 200, outputting a picture of the virtual scene that includes a virtual vehicle in a first form and a first virtual object riding the virtual vehicle. The first virtual object is a game character controlled by a real user; it moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, a voice-operated switch, a keyboard, a mouse, a joystick, and the like). For example, when the real user moves the joystick to the left, the first virtual object moves to the left in the virtual scene; it can also stay still, jump, and use various functions (such as skills and props).
For example, when the user triggers a form transformation instruction for the virtual vehicle through the client running on the terminal 400-1, the virtual vehicle is controlled to transform from the first form to the second form, and a picture of the virtual object riding the virtual vehicle in the second form in the second environment area is displayed. For example, if the first form is a horse and the second form is a dragon, the virtual vehicle is controlled to switch from the horse to the dragon, and a picture of the virtual object in the air riding the dragon-shaped virtual vehicle is displayed.
In an exemplary scenario, in a military virtual simulation application, virtual scene technology enables trainees to experience a battlefield environment visually and aurally, become familiar with the environmental characteristics of the area where combat will take place, and interact with objects in the virtual environment through the necessary equipment. Through background generation and image synthesis, using a corresponding graphic image library of three-dimensional battlefield environments including battlefield backgrounds, battlefield scenes, various weapons and equipment, fighters, and the like, a three-dimensional battlefield environment that is fraught with danger and close to reality can be created.
In actual implementation, the terminal runs the client (a military simulation program), conducts military exercises with other users through the server 200, and outputs a picture of a virtual scene (e.g., city A) that includes a virtual vehicle in a first form and a first virtual object riding the virtual vehicle, where the first virtual object is a simulated fighter under the user's control. For example, when the user triggers a form transformation instruction for the virtual vehicle through the client running on the terminal 400-1, the virtual vehicle is controlled to transform from the first form to the second form, and a picture of the virtual object riding the virtual vehicle in the second form in the second environment area is displayed. For example, if the first form is an automobile and the second form is an airplane, the virtual vehicle is controlled to switch from the automobile to the airplane, and a picture of the virtual object in the air in the airplane form is displayed.
Referring to fig. 3, fig. 3 is an optional schematic structural diagram of an electronic device 500 provided in the embodiment of the present application. In practical applications, the electronic device 500 may be the terminal or the server 200 in fig. 2; taking the electronic device as the terminal shown in fig. 2 as an example, the computer device implementing the virtual vehicle control method of the embodiment of the present application is described. The electronic device 500 shown in fig. 3 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The components of the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to implement connection and communication among these components. In addition to a data bus, the bus system 540 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled as the bus system 540 in fig. 3.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the control apparatus of the virtual vehicle provided by the embodiments of the present application may be implemented in software. Fig. 3 illustrates a control apparatus 555 of the virtual vehicle stored in the memory 550, which may be software in the form of programs and plug-ins and includes the following software modules: a presentation module 5551, a transformation module 5552, and a display module 5553. These modules are logical, and thus may be arbitrarily combined or further split depending on the functions implemented.
The functions of the respective modules will be explained below.
In other embodiments, the control device of the virtual vehicle provided in this embodiment may be implemented in hardware. For example, the control device of the virtual vehicle provided in this embodiment may be a processor in the form of a hardware decoding processor programmed to execute the control method of the virtual vehicle provided in this embodiment; for example, the processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The method for controlling a virtual vehicle according to the embodiment of the present application will be described with reference to an exemplary application and implementation of a terminal according to the embodiment of the present application.
Referring to fig. 4, fig. 4 is an alternative flowchart of a method for controlling a virtual vehicle according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
Step 401: the terminal presents, in a picture of the virtual scene, the virtual vehicle in the first form and the virtual object riding on the virtual vehicle.
Here, the virtual object and the virtual vehicle are in a first environmental area adapted to the first form.
In practical applications, an application program supporting a virtual scene is installed on the terminal. The application program can be any one of a large-scale multi-player online role playing game, a first person shooting game, a third person shooting game, a multi-player online tactical competitive game, a virtual reality application program, a three-dimensional map program, a military simulation program or a multi-player gunfight type survival game. The user can use the terminal to operate the virtual object located in the virtual scene to perform activities, including but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an animated character.
When a user opens the application program on the terminal and the terminal runs the application program, the terminal presents a picture of the virtual scene. The picture of the virtual scene is obtained by observing the virtual scene from a first-person perspective or a third-person perspective, and includes interactive objects and the environment they interact with, such as the virtual object controlled by the current user and the virtual vehicle ridden by the virtual object.
It should be noted that the virtual vehicle may be a vehicle, such as a vehicle, a ship, an airplane, etc., or an animal, including a real animal and an imaginary mythical animal, such as a horse, a dragon, etc.
In some embodiments, before presenting the virtual vehicle in the first form and the virtual object riding on the virtual vehicle, the terminal may further present the virtual object in an independent state in the picture of the virtual scene; and in response to a call instruction for the virtual vehicle, present the virtual vehicle in the first form and control the virtual object to ride the virtual vehicle.
Here, the independent state refers to a state when the virtual object is not seated on the virtual vehicle, and may be a standing state without any movement, an interactive state when the virtual object interacts with another virtual object, or a movement state such as walking, running, climbing, or the like.
In actual implementation, the call instruction for the virtual vehicle may be generated by triggering a corresponding call control by a user, where the call control may be a key, an icon, or the like, and the triggering manner for the call control may be at least one of clicking, double-clicking, long-pressing, and sliding, for example, the call instruction for the virtual vehicle may be generated by clicking a key "Q" on a keyboard, and the call instruction for the virtual vehicle may be generated by clicking a call icon on a screen with a mouse; the call instruction for the virtual vehicle may also be generated by recognizing a voice instruction or a limb action of the user, for example, the user may generate the call instruction for the virtual vehicle by speaking "call ride"; the call instruction for the virtual vehicle may also be automatically generated when a call condition is reached, such as when a user controls the virtual object to kill a target number of enemies.
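The several trigger paths described above (key press, icon click, voice recognition, and an automatically met call condition) can all funnel into a single summon command. The following is a minimal illustrative sketch, not part of any claimed embodiment; all names (`dispatch_trigger`, `KILL_TARGET`, the event tuples) are hypothetical.

```python
# Illustrative only: map the trigger paths described above to one summon command.
# Every name here (dispatch_trigger, KILL_TARGET, event tuples) is hypothetical.
KILL_TARGET = 10  # example call condition: enemies killed before auto-summon

def dispatch_trigger(event, kills=0):
    """Return 'summon_vehicle' when a trigger fires, otherwise None."""
    manual_triggers = {
        ("key", "Q"),               # clicking the key "Q" on a keyboard
        ("click", "summon_icon"),   # clicking the call icon with a mouse
        ("voice", "call ride"),     # recognized voice instruction
    }
    if event in manual_triggers:
        return "summon_vehicle"
    if event == ("condition", "kill_count") and kills >= KILL_TARGET:
        return "summon_vehicle"     # call condition reached: generated automatically
    return None
```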
As an example, fig. 5 is an optional interface schematic diagram of a virtual vehicle calling process provided in the embodiment of the present application, and referring to fig. 5, when a user triggers a calling control to generate a calling instruction of a virtual vehicle, a virtual vehicle 502 in a white horse shape is presented, and the virtual object is controlled to ride on the virtual vehicle.
In practical applications, when receiving a call instruction for a virtual vehicle, the terminal may obtain a position where the virtual object is located when receiving the call instruction, so as to present the virtual vehicle in the first form near the virtual object, for example, the virtual vehicle in the first form may be presented at a position on the right side of the virtual object and at a preset distance (e.g., 1 meter) from the virtual object, so that the virtual object can ride on the virtual vehicle.
As an example, the virtual vehicle in the first form may be presented directly at a position on the right side of the virtual object and at a preset distance from the virtual object, and the virtual object is then controlled to ride the virtual vehicle in the first form; alternatively, the virtual vehicle in the first form may enter from one side of the display area and move to a position on the right side of the virtual object at the preset distance from the virtual object.
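The placement described above (a preset distance, e.g. 1 meter, directly to the right of the virtual object) can be computed from the object's position and facing direction. A minimal 2-D sketch, assuming a coordinate convention where the facing angle is given in degrees and the object's right side is the facing rotated by -90 degrees; the function name is hypothetical.

```python
import math

def spawn_position(obj_pos, obj_facing_deg, distance=1.0):
    """Place the summoned vehicle `distance` meters to the object's right.

    Assumption: angles are in degrees, and the right-hand direction is the
    facing direction rotated by -90 degrees.
    """
    right = math.radians(obj_facing_deg - 90.0)
    x, y = obj_pos
    return (x + distance * math.cos(right), y + distance * math.sin(right))
```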
Step 402: and responding to the form conversion instruction aiming at the virtual vehicle, controlling the virtual vehicle to convert from the first form to the second form, and displaying a picture that the virtual object takes the virtual vehicle in the second form and is in the second environment area.
Wherein the second environmental region is adapted to the second morphology and is different from the first environmental region. The environmental area here may be a land area, an air area, a water area, etc.
The form transformation command can be generated by triggering a corresponding form transformation control by a user, the form transformation control can be a key, an icon and the like, and the triggering mode aiming at the form transformation control can be at least one of clicking, double-clicking, long-pressing and sliding; the shape conversion instruction may be generated by recognizing a voice instruction or a body motion of the user.
In practical applications, the virtual vehicle may first be controlled to transform from the first form to the second form, and the virtual vehicle in the second form is then controlled to move from the first environment area to the second environment area; the virtual vehicle may be controlled to move from the first environment area to the second environment area while it is being transformed from the first form to the second form; or the virtual vehicle in the first form may first be controlled to move from the first environment area to the second environment area, and the virtual vehicle is then controlled to transform from the first form to the second form. That is, the execution order of the form transformation process and the movement process of the virtual vehicle is not limited here.
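The three admissible orderings of the transformation and movement processes can be captured as explicit phase schedules. The sketch below is illustrative; the strategy names are hypothetical.

```python
def schedule(strategy):
    """Return the ordered phases for a transform-and-relocate operation.

    'transform_first', 'move_first', and 'concurrent' mirror the three
    orderings described above; a tuple groups phases run in the same step.
    """
    phases = {
        "transform_first": ["transform", "move"],
        "move_first": ["move", "transform"],
        "concurrent": [("transform", "move")],  # both run in one phase
    }
    return phases[strategy]
```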
During the transformation of the virtual vehicle from the first form to the second form, the virtual object remains seated on the virtual vehicle and does not leave it. In practice, a plurality of virtual objects may be seated on the virtual vehicle, and none of them leaves the virtual vehicle during the transformation from the first form to the second form. For example, if two virtual objects are riding on the virtual vehicle in the first form, then when the virtual vehicle transforms from the first form to the second form, the two virtual objects remain riding on it; that is, a picture is displayed in which the two virtual objects riding the virtual vehicle in the second form are in the second environment area.
In some embodiments, the terminal may trigger the form transformation instruction for the virtual vehicle by: receiving a trigger operation aiming at a key for transforming the virtual vehicle form; and responding to the triggering operation, and triggering a form transformation instruction for the virtual vehicle.
Here, the key may be a key on a keyboard or a virtual key, such as a space bar, and when the user clicks the key for changing the form of the virtual vehicle, the terminal receives a corresponding trigger operation, triggers a form change instruction for the virtual vehicle, then controls the virtual vehicle to change from the first form to the second form, and displays a picture that the virtual object takes the virtual vehicle in the second form in the second environment area.
In some embodiments, before the screen showing that the virtual object takes the second form of the virtual vehicle to be in the second environment area, the terminal may further receive a control instruction for the second form of the virtual vehicle; and controlling the virtual object to move from the first environment area to the second environment area by taking the virtual vehicle of the second form in response to the control instruction.
The control instruction can be generated by triggering a corresponding control by a user, the control can be a key, an icon and the like, and the triggering mode for the shooting control can be at least one of clicking, double clicking, long pressing and sliding; the control instruction can also be generated by recognizing a voice instruction or a limb action of the user; the control command may also be automatically generated when a preset condition is reached, for example, the control command for the virtual vehicle in the second form may be automatically generated after the virtual vehicle completes the transformation of the form.
In practical implementation, the virtual vehicle may complete the form transformation in the first environment area; that is, in the first environment area, the virtual vehicle is controlled to transform from the first form to the second form. After the virtual vehicle is transformed from the first form to the second form and the terminal receives the control instruction, the virtual object is controlled to ride the virtual vehicle in the second form and move from the first environment area to the second environment area.
For example, fig. 6 is an optional schematic diagram of a form transformation process of a virtual vehicle according to an embodiment of the present application. Referring to fig. 6, where the first environment area is a land area and the second environment area is an air area, when the user clicks the space bar, the terminal receives a form transformation instruction for the virtual vehicle and controls the virtual vehicle to transform from the first form (vehicle form) to the second form (airplane form) in the land area; here, the terminal displays the process of transforming the virtual vehicle from the first form to the second form in the picture of the virtual scene. After the virtual vehicle is transformed from the first form to the second form, the user triggers a control instruction for the virtual vehicle in the second form by clicking a direction key, and the virtual vehicle in the second form is controlled to move from the land area to the air area.
In some embodiments, the virtual object is controlled to move from the first environmental area to the second environmental area by the virtual vehicle during the process of transforming the virtual vehicle from the first form to the second form.
In actual implementation, the virtual object is controlled to ride the virtual vehicle and move from the first environment area to the second environment area while the virtual vehicle is transformed from the first form to the second form.
For example, fig. 7 is an optional schematic diagram of a form transformation process of the virtual vehicle according to an embodiment of the present application. Referring to fig. 7, where the first environment area is a land area and the second environment area is an air area, when the user clicks the space bar, the terminal receives a form transformation instruction for the virtual vehicle, controls the virtual vehicle to transform from the first form (horse form) to the second form (dragon form), and simultaneously controls the virtual vehicle to rise gradually from the land area into the air area.
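For the concurrent case of fig. 7, the form blend and the altitude can be driven by the same normalized time parameter. A minimal sketch with made-up altitudes; in an actual client the ascent would typically be eased rather than linear, and all names here are hypothetical.

```python
def morph_step(t, ground_y=0.0, air_y=30.0):
    """Blend factor and altitude at normalized time t in [0, 1] while the
    vehicle transforms from the horse form to the dragon form and rises
    from the land area into the air area (altitudes are example values)."""
    t = max(0.0, min(1.0, t))
    blend = t                                      # 0 = horse mesh, 1 = dragon mesh
    altitude = ground_y + (air_y - ground_y) * t   # linear ascent for illustration
    return blend, altitude
```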
In some embodiments, after controlling the virtual vehicle to transform from the first form to the second form, the terminal also controls the virtual vehicle to transform from the second form to the third form in response to a form transformation instruction for the virtual vehicle, and presents a screen of the virtual object in a third environment area by riding the virtual vehicle in the third form.
Here, the third modality may be the same as or different from the first modality, and when the third modality is different from the first modality, the third environmental region is different from the first environmental region, and the third environmental region is adapted to the third modality.
As an example, when the third form is the same as the first form, fig. 8 is an optional schematic diagram of the form transformation process of the virtual vehicle provided in the embodiment of the present application. Referring to fig. 8, a picture of the virtual object riding the virtual vehicle in the first form in the first environment area (land area) is first shown. When the user clicks the space bar, the terminal receives a form transformation instruction for the virtual vehicle, controls the virtual vehicle to move from the first environment area (land area) to the air area, and controls the virtual vehicle to transform from the first form to the second form after it reaches the second environment area (air area). When the user clicks the space bar again, the terminal receives another form transformation instruction, controls the virtual vehicle to move from the second environment area (air area) back to the first environment area (land area), and controls the virtual vehicle to transform from the second form back to the first form after it reaches the first environment area (land area). In this way, the virtual vehicle can be switched back and forth between the first form and the second form, and the user can change the form of the virtual vehicle in real time according to the environment in which the virtual object is currently located in the virtual scene.
As an example, when the third form is different from the first form, fig. 9 is an optional schematic diagram of a form transformation process of the virtual vehicle provided in the embodiment of the present application, referring to fig. 9, first showing a screen of the virtual object in the first environment area (land area) by the virtual vehicle in the first form, when the user clicks the space bar, the terminal receives a form transformation command for the virtual vehicle, controls the virtual vehicle to move from the first environment area (land area) to the air area, and controls the virtual vehicle to transform from the first form to the second form after the virtual vehicle moves to the second environment area (air area); after the virtual vehicle is transformed from the first form to the second form, the user can control the virtual vehicle in the second form to move in the air area, when the user clicks the space bar again, the terminal receives a form transformation command for the virtual vehicle, controls the virtual vehicle to move from the second environment area (the air area) to the third environment area (the underwater area), and controls the virtual vehicle to be transformed from the second form to the third form after the virtual vehicle moves to the third environment area (the underwater area).
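The form sequences of figs. 8 and 9 can be represented as a transition table mapping the current form to the next form and its adapted environment area. The table below is illustrative; the concrete form names and areas merely echo the examples above.

```python
TRANSITIONS = {
    # current form -> (next form, adapted environment area); illustrative only
    "horse":  ("dragon", "air"),          # fig. 8/9: land form rises into the air
    "dragon": ("horse", "land"),          # fig. 8: toggles back to the first form
    "plane":  ("submarine", "underwater"),  # fig. 9: a third, distinct form
}

def transform(form):
    """Return the next form and its adapted environment area."""
    return TRANSITIONS[form]
```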
In some embodiments, before the terminal controls the virtual vehicle to transform from the first form to the second form, the method further includes: presenting a form transformation icon in the picture of the virtual scene; in response to a trigger operation for the form transformation icon, presenting at least two form options of the virtual vehicle; and in response to a trigger operation for a target form option among the at least two form options, taking the form corresponding to the target form option as the second form of the virtual vehicle.
Here, when there are a plurality of transformable forms, the user performs a trigger operation on the form transformation icon, and the terminal displays the form options corresponding to the plurality of transformable forms in the picture of the virtual scene, so that the user can select the form to transform into from the presented options, thereby enabling accurate control over the form transformation of the virtual vehicle.
In some embodiments, before displaying the picture in which the virtual object riding the virtual vehicle in the second form is in the second environment area, the terminal may further obtain the position of the virtual vehicle in the first environment area when the form transformation instruction is received; determine the idle area closest to that position in the second environment area, where an idle area is an area free of obstacles; and control the virtual object to ride the virtual vehicle in the second form and move to the idle area.
In actual implementation, when the form conversion command is received, the current position of the virtual vehicle is obtained, and an idle area closest to the current position in the second environment area is obtained, so that the virtual object is controlled to move to the idle area by riding the virtual vehicle in the second form.
For example, when the first environmental area is an aerial area and the second environmental area is a land area, and when the terminal receives the form change command and the virtual vehicle is just above a certain land obstacle, an idle area closest to the current position of the virtual vehicle is searched on the land, and the virtual vehicle is controlled to descend to the idle area.
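On a grid representation of the second environment area, finding the idle area closest to the vehicle's current position is a nearest-neighbor search over obstacle-free cells. A minimal sketch, assuming the area has been pre-rasterized into cells flagged as blocked or free; the function and parameter names are hypothetical.

```python
def nearest_idle_cell(pos, grid):
    """Find the obstacle-free grid cell closest to `pos`.

    `grid` maps (x, y) cells to True when the cell contains an obstacle.
    Returns None when no idle cell exists.
    """
    free = [cell for cell, blocked in grid.items() if not blocked]
    if not free:
        return None
    # Squared Euclidean distance is enough for choosing the minimum.
    return min(free, key=lambda c: (c[0] - pos[0]) ** 2 + (c[1] - pos[1]) ** 2)
```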
In some embodiments, after the terminal controls the virtual vehicle to transform from the first form to the second form, the terminal also receives a movement instruction for the virtual vehicle in the second form; and in response to the movement instruction, controls the virtual vehicle in the second form to move in the second environment area at a movement speed adapted to the second form.
In practical implementation, the movement speeds adapted to different forms are different, that is, after the virtual vehicle is transformed from the first form to the second form, the corresponding movement speed is also transformed accordingly. For example, when the first configuration is a land configuration (e.g., horse configuration) and the second configuration is an air configuration (e.g., dragon configuration), the moving speed increases when the virtual vehicle is transformed from the first configuration to the second configuration.
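Binding a movement speed to each form can be as simple as a lookup table consulted on every movement tick. The sketch below is illustrative; the speed values are made up, not taken from the embodiment.

```python
# Example speeds in meters per second; all values are made up for illustration.
FORM_SPEED = {"horse": 8.0, "car": 20.0, "dragon": 35.0, "plane": 60.0}

def move(pos, direction, form, dt):
    """Advance `pos` along unit vector `direction` at the speed adapted to `form`."""
    speed = FORM_SPEED[form]
    return (pos[0] + direction[0] * speed * dt,
            pos[1] + direction[1] * speed * dt)
```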
In some embodiments, after the terminal controls the virtual vehicle to transform from the first form to the second form, the terminal also receives a movement instruction for the virtual vehicle in the second form; when the second environment area is an air area of the virtual scene, the virtual vehicle in the second form is controlled, in response to the movement instruction, to fly in the air area at a target height above the ground so as to cross at least some of the objects on the ground.
In practical implementation, when the second environment area is an air area of the virtual scene, the user may control the virtual vehicle in the second form to fly in the air area at the target height above the ground. During the movement, the portion of the virtual scene picture corresponding to the current position of the virtual vehicle is displayed in real time to present the movement of the virtual vehicle in the second form; objects on the ground whose height is lower than the target height can be crossed.
For example, fig. 10 is an alternative interface schematic diagram of the movement of the virtual vehicle provided by the embodiment of the present application, and referring to fig. 10, the virtual object flies in the air by riding on the virtual vehicle in the second form, so as to be able to cross the object on the ground.
In some embodiments, when the second environment area is an aerial area in the virtual scene, the user may further control the flying height of the virtual vehicle in the second form in the aerial area, for example, when there is an obstacle in the front side, the virtual vehicle in the second form may be controlled to ascend to cross the obstacle.
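The two rules above (a ground object is crossed only when it is lower than the flight height, and the vehicle climbs when an obstacle ahead is not) reduce to simple altitude comparisons. An illustrative sketch with a hypothetical safety clearance parameter; none of these names come from the embodiment.

```python
def can_cross(vehicle_alt, obstacle_height):
    """A ground object is crossed only when it is lower than the flight altitude."""
    return obstacle_height < vehicle_alt

def required_climb(vehicle_alt, obstacle_height, clearance=1.0):
    """Altitude gain needed to clear an obstacle ahead (0 if already above it).

    `clearance` is a hypothetical safety margin above the obstacle's top.
    """
    return max(0.0, obstacle_height + clearance - vehicle_alt)
```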
In some embodiments, before the terminal controls the virtual vehicle to transform from the first form to the second form, when a target object exists in the first environment area, the terminal may further, in response to a detachment instruction for the virtual object and the virtual vehicle in the first form, control the virtual object to detach from the virtual vehicle in the first form and control the virtual vehicle in the first form to release an attack skill at the target object, the target object and the virtual object being in a hostile relationship.
Here, the detachment instruction for the virtual object and the virtual vehicle in the first form may be generated by the user triggering a corresponding detachment control, where the detachment control may be a key, an icon, or the like, and the triggering manner for the detachment control may be at least one of clicking, double-clicking, long-pressing, and sliding; the detachment instruction may also be generated by recognizing a voice instruction or a body motion of the user.
In actual implementation, after receiving the detachment instruction for the virtual object and the virtual vehicle in the first form, the terminal presents the process of the virtual object separating from the virtual vehicle in the first form, for example, the virtual object jumping off the virtual vehicle in the first form, and presents the process of the virtual vehicle in the first form releasing the attack skill at the target object. After the attack skill is released, the effect of the attack skill and the corresponding damage effect are presented in the picture of the virtual scene.
For example, fig. 11 is an optional interface schematic diagram of the process of separating the virtual object from the virtual vehicle provided in this embodiment. Referring to fig. 11, when the virtual object encounters the target object while riding the virtual vehicle, a detachment instruction is triggered so that the virtual object leaves the virtual vehicle to fight; when the virtual object leaves, the virtual vehicle does not disappear but takes off, charges, and releases a row of ice cones to attack the target object.
In some embodiments, after the terminal controls the virtual vehicle to transform from the first form to the second form, when a target object exists in the first environment area, the terminal may, in response to a detachment instruction for the virtual object and the virtual vehicle in the second form, control the virtual object to detach from the virtual vehicle in the second form and control the virtual vehicle in the second form to release an attack skill at the target object, the target object and the virtual object being in a hostile relationship.
That is, regardless of the form of the virtual vehicle, the attack skill can be released to the target object when the corresponding detachment instruction is received. Here, the attack skills released by the virtual vehicles of different configurations may be the same or different.
In some embodiments, after the terminal controls the first form of virtual vehicle to release the attack skill to the target object, the display of the first form of virtual vehicle is cancelled. And when a calling instruction for the virtual vehicle is received, the virtual vehicle in the first form is re-presented.
In some embodiments, after controlling the virtual vehicle in the first form to release the attack skill to the target object, the terminal may further control the virtual vehicle in the first form to move to the virtual object; and when the distance between the virtual vehicle in the first form and the virtual object meets the distance condition, controlling the virtual object to take the virtual vehicle in the first form.
In practical implementation, after the virtual vehicle in the first form releases the attack skill at the target object, the virtual vehicle automatically returns to the side of the virtual object, and the virtual object is controlled to ride the virtual vehicle; that is, the virtual object and the virtual vehicle in the first form separate temporarily and then reunite, so that the virtual vehicle can continue to serve its other purposes.
In practical applications, a target area containing the virtual object is determined according to the position of the virtual object. After the virtual vehicle in the first form releases the attack skill at the target object, the process of the virtual vehicle in the first form moving to the target area is shown, and when the virtual vehicle in the first form reaches the target area, the process of the virtual object riding the virtual vehicle in the first form is shown.
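The return-and-remount behavior (the vehicle moves toward the virtual object until the distance condition is met) can be sketched as a per-frame steering step. All names and the mount radius below are hypothetical; 2-D positions and Euclidean distance are assumed.

```python
import math

def step_return(vehicle_pos, owner_pos, speed, dt, mount_radius=1.5):
    """Move the vehicle one frame toward its owner; report whether the
    distance condition for remounting is now met."""
    vx, vy = vehicle_pos
    ox, oy = owner_pos
    dx, dy = ox - vx, oy - vy
    dist = math.hypot(dx, dy)
    if dist <= mount_radius:
        return (vx, vy), True            # close enough: control the object to mount
    step = min(speed * dt, dist)         # do not overshoot the owner
    new_pos = (vx + dx / dist * step, vy + dy / dist * step)
    return new_pos, dist - step <= mount_radius
```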
In the embodiments of the present application, by responding to the form transformation instruction for the virtual vehicle, controlling the virtual vehicle to transform from the first form to the second form, and displaying the picture in which the virtual object riding the virtual vehicle in the second form is in the second environment area, transformation of the form of the virtual vehicle is realized, so that a single virtual vehicle can adapt to different environments. Compared with controlling the virtual object to switch between different virtual vehicles, the interaction process is simplified, thereby reducing the consumption of computing resources.
Continuing with the control method of the virtual vehicle provided in the embodiment of the present application, the method is implemented by a terminal and a server in cooperation. Fig. 12 is an optional flowchart of the control method of the virtual vehicle provided in the embodiment of the present application; referring to fig. 12, the method includes:
step 1201: the terminal presents a start game button.
Step 1202: the terminal responds to click operation aiming at the game key and sends an acquisition request of scene data of the virtual scene to the server.
Step 1203: and the server sends the scene data to the terminal.
Step 1204: the terminal renders based on the received scene data, presents the picture of the virtual scene, and presents the virtual object in an independent state in the picture of the virtual scene.
Step 1205: and the terminal responds to the call instruction aiming at the virtual carrier and sends an image data acquisition request carrying the call instruction to the server based on the image data of the current virtual scene.
Step 1206: and the server executes corresponding picture data calculation logic based on the picture data of the current virtual scene and the call instruction, and returns a corresponding data calculation result to the terminal.
Step 1207: and the terminal performs picture rendering of the virtual scene based on the calculation result returned by the server, and refreshes the picture of the currently displayed virtual scene so as to show the picture of the virtual object taking the virtual carrier in the first form.
Step 1208: the terminal responds to the form transformation instruction aiming at the virtual carrier, and sends an image data acquisition request carrying the form transformation instruction to the server based on the image data of the current virtual scene.
Step 1209: and the server executes corresponding picture data calculation logic based on the picture data and the form conversion instruction of the current virtual scene and returns a corresponding data calculation result to the terminal.
Step 1210: and the terminal performs picture rendering of the virtual scene based on a calculation result returned by the server, refreshes the picture of the currently displayed virtual scene so as to show the process of converting the virtual carrier from the first form to the second form and show the process of moving the virtual carrier from the land area to the air area.
Step 1211: and the terminal displays the picture of the virtual object in the air area by taking the virtual carrier in the second form.
Step 1212: and responding to the movement instruction aiming at the virtual carrier, and sending an image data acquisition request carrying the movement instruction to the server based on the image data of the current virtual scene.
Step 1213: and the server executes corresponding picture data calculation logic based on the picture data and the moving instruction of the current virtual scene and returns a corresponding data calculation result to the terminal.
Step 1214: and the terminal performs picture rendering of the virtual scene based on the calculation result returned by the server, and refreshes the picture of the currently displayed virtual scene so as to show the process that the virtual carrier flies in the air area.
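Steps 1205 through 1214 repeat a single terminal–server pattern: the terminal wraps the user's instruction together with the current picture data in an acquisition request, the server runs its calculation logic, and the terminal re-renders from the returned result. A minimal Python sketch of that loop follows; every class, method, and field name here is an illustrative assumption, not taken from the patent.

```python
class VirtualSceneClient:
    """Sketch of the terminal side of steps 1205-1214 (names assumed)."""

    def __init__(self, server):
        self.server = server    # object exposing calculate(request)
        self.scene_data = {}    # picture data of the current virtual scene

    def send_instruction(self, instruction):
        # Steps 1205/1208/1212: request carries the instruction plus current scene data.
        request = {"scene": self.scene_data, "instruction": instruction}
        # Steps 1206/1209/1213: server executes the corresponding calculation logic.
        result = self.server.calculate(request)
        # Steps 1207/1210/1214: terminal re-renders and refreshes the displayed picture.
        self.scene_data = result
        return self.render(result)

    def render(self, result):
        return f"rendered:{result.get('state', 'unknown')}"


class EchoServer:
    """Stand-in server: applies the instruction to the scene state."""

    def calculate(self, request):
        scene = dict(request["scene"])
        scene["state"] = request["instruction"]
        return scene


client = VirtualSceneClient(EchoServer())
print(client.send_instruction("summon_vehicle"))   # rendered:summon_vehicle
print(client.send_instruction("transform_form"))   # rendered:transform_form
```

Because every instruction type flows through the same request/render path, the terminal only needs one transport mechanism regardless of whether the instruction summons, transforms, or moves the virtual vehicle.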
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. Fig. 13 is an optional flowchart of the method for controlling a virtual vehicle according to the embodiment of the present application, and referring to fig. 13, the method for controlling a virtual vehicle according to the embodiment of the present application includes:
step 1301: the terminal presents a virtual object in an independent state in a picture of a virtual scene.
Here, the independent state refers to the state in which the virtual object is not riding the virtual vehicle. It may be a stationary standing state, an interactive state in which the virtual object interacts with other virtual objects, or a movement state such as walking, running, or climbing.
Step 1302: and responding to a calling instruction aiming at the virtual vehicle, presenting the virtual vehicle in a first form, and controlling the virtual object to take the virtual vehicle.
Here, the call instruction for the virtual vehicle may be generated when the user triggers a corresponding call control, which may be a button, an icon, or the like. As an example, referring to fig. 5, when the virtual object 501 is standing, the user triggers the call control to generate a call instruction for the virtual vehicle; the virtual vehicle 502 in a white horse form is presented, and the virtual object is controlled to ride on it. The white horse form is the first form.
Step 1303: and controlling the first form of virtual vehicle to walk in the land area in response to the movement instruction aiming at the first form of virtual vehicle.
The first form is an initial form, such as the white horse form. The virtual vehicle in the first form can travel only on land, and by riding it the virtual object can travel across land faster than on foot.
Step 1304: and controlling the virtual vehicle to be transformed from the first form to the second form in response to the form transformation instruction for the virtual vehicle.
In practical implementation, the form transformation instruction may be triggered by the space bar on the keyboard. For example, referring to fig. 6, when the user presses the space bar, the virtual vehicle is controlled to move from the land to the air and begins the form transformation, changing from the first form (white horse form) into the second form (dragon form). For another example, fig. 14 is an optional schematic diagram of the form transformation process of the virtual vehicle according to the embodiment of the present application; referring to fig. 14, when the user presses the space bar, the virtual vehicle is controlled to move from the land to the air and begins the form transformation, changing from the first form (the land form of the mechanical dragon) into the second form (the air form of the mechanical dragon).
It should be noted that the virtual objects riding the virtual vehicle need not dismount during the form transformation: if one virtual object is riding the virtual vehicle in the first form, that same virtual object is riding it after the switch to the second form; likewise, if two virtual objects are riding the virtual vehicle in the first form, both are still riding it after the switch to the second form.
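The rider-inheritance rule described above can be sketched as a simple state toggle in which transforming never touches the rider list. The two-form model and all names below are illustrative assumptions rather than the patent's implementation:

```python
LAND, AIR = "first_form", "second_form"


class VirtualVehicle:
    """Sketch of the form toggle: pressing the transform key switches
    first form <-> second form without dismounting any rider."""

    def __init__(self):
        self.form = LAND
        self.region = "land"
        self.riders = []

    def mount(self, rider):
        self.riders.append(rider)

    def on_transform_key(self):
        # Riders are inherited directly across the transformation.
        if self.form == LAND:
            self.form, self.region = AIR, "air"
        else:
            self.form, self.region = LAND, "land"
        return self.form, list(self.riders)


v = VirtualVehicle()
v.mount("player1")
v.mount("player2")
print(v.on_transform_key())   # ('second_form', ['player1', 'player2'])
print(v.on_transform_key())   # ('first_form', ['player1', 'player2'])
```

Keeping the rider list untouched during the toggle is what lets a one- or two-rider vehicle transform without anyone dismounting.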
Step 1305: and controlling the virtual vehicle in the second form to fly in the air area in response to the movement instruction for the virtual vehicle in the second form.
In practical implementation, the second form corresponds to the air area: the user can control the virtual vehicle in the second form to fly in the air, and during flight it can cross all obstacles on the ground. The moving speed of the virtual vehicle in the second form is higher than that of the virtual vehicle in the first form. For example, referring to fig. 10, the virtual object flies in the air while riding the virtual vehicle in the second form.
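The two properties of the second form, higher speed and obstacle crossing at a target flight height, can be expressed as a small rule set. The numeric speeds and heights below are invented purely for illustration:

```python
def move_speed(form):
    """Speed rule: the second (air) form moves faster than the first
    (land) form. Values are assumed, not from the patent."""
    return {"first_form": 6.0, "second_form": 10.0}[form]


def can_cross(form, obstacle_height, flight_height=8.0):
    """In the air form the vehicle flies at a target height above the
    ground, so it clears any obstacle lower than that height."""
    return form == "second_form" and obstacle_height < flight_height


print(move_speed("second_form") > move_speed("first_form"))  # True
print(can_cross("second_form", obstacle_height=5.0))         # True
print(can_cross("first_form", obstacle_height=5.0))          # False
```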
Step 1306: and controlling the virtual vehicle to be transformed from the second form to the first form in response to the form transformation instruction for the virtual vehicle.
In practical implementation, when the user triggers the form transformation instruction again, the virtual vehicle transforms from the second form back to the first form. For example, referring to fig. 11, when the user presses the space bar again, the virtual vehicle is controlled to move from the air to the land and begins the form transformation, switching from the second form (dragon form) back to the first form (white horse form).
Here, if the virtual vehicle is above a land obstacle point when the user triggers the form transformation instruction, the virtual vehicle automatically descends to a nearby land walking point.
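The automatic-descent behavior amounts to a nearest-walkable-point search when the point directly below the vehicle is an obstacle point. A one-dimensional sketch, with all names and coordinates being illustrative assumptions:

```python
def nearest_walk_point(x, land_points, obstacle_points):
    """If the point directly below (x) is an obstacle point, land at the
    closest obstacle-free walking point instead. 1-D coordinates for
    brevity; a real scene would search in 2-D."""
    if x not in obstacle_points:
        return x  # the point directly below is already walkable
    candidates = [p for p in land_points if p not in obstacle_points]
    return min(candidates, key=lambda p: abs(p - x))


# Vehicle hovering at x=5, which is over an obstacle; lands at x=4.
print(nearest_walk_point(5, land_points=[3, 4, 5, 6, 7],
                         obstacle_points={5, 6}))  # 4
```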
Step 1307: controlling the virtual object to separate from the virtual vehicle in the first form in response to a separation instruction for the virtual object and the virtual vehicle in the first form.
Here, after the virtual object separates from the virtual vehicle in the first form, the virtual object and the virtual vehicle become two independently existing units; that is, the virtual object returns to the independent state, and the user can control the virtual object independently.
Step 1308: the terminal controls the virtual vehicle in the first form to release the attack skill to the target object.
Here, the virtual vehicle in the first form releases the attack skill by charging forward to strike the target object, where the target object and the virtual object are in an adversarial relationship; that is, the virtual vehicle inherits the adversarial relationships of the user-controlled virtual object, and the target object is determined according to those adversarial relationships.
In practical application, after the virtual vehicle releases the attack skill, the virtual vehicle may be dismissed and presented again when a call instruction is received; alternatively, after releasing the attack skill, the virtual vehicle may automatically return to the virtual object's side and the virtual object is controlled to ride it again. That is, the virtual object and the virtual vehicle separate only temporarily and then reunite, so that the other uses of the virtual vehicle can continue.
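The two post-attack outcomes, despawn-until-recalled versus automatic return to the rider, can be sketched as a flag on the skill release. This is a hypothetical API for illustration, not the patent's implementation:

```python
class VehicleSkill:
    """Sketch of the two outcomes after the charge attack: the vehicle
    either despawns until re-summoned, or returns and remounts the
    virtual object. All names are assumed."""

    def __init__(self, auto_return=True):
        self.auto_return = auto_return
        self.state = "mounted"

    def detach_and_charge(self, target):
        # The rider separates, then the vehicle charges the target.
        self.state = "charging"
        hit = f"charge_hit:{target}"
        # After the skill resolves, follow one of the two paths.
        self.state = "mounted" if self.auto_return else "despawned"
        return hit


s = VehicleSkill(auto_return=True)
print(s.detach_and_charge("enemy"))  # charge_hit:enemy
print(s.state)                       # mounted
```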
The virtual vehicle in the embodiment of the present application differs from both the battle pet and the virtual vehicle of the related art: a battle pet in the related art cannot be ridden and always participates in battle, while a virtual vehicle in the related art can be ridden but is slow and cannot transform. The embodiment of the present application provides a virtual vehicle that increases moving speed and adds a form transformation function to realize air-land amphibious transport; moreover, when the virtual object separates from the virtual vehicle, the virtual vehicle can exist independently and charge into the target object. This greatly enhances the presence of the virtual vehicle, which is no longer a mere transport tool.
The embodiment of the application has the following beneficial effects:
1) For the same act of riding a virtual vehicle, the embodiment of the present application controls the virtual vehicle to change form without requiring the rider to dismount or switch vehicles, so that the transformed virtual vehicle gains flight capability and can cross ground obstacles. If two virtual objects are riding the virtual vehicle, neither needs to dismount: all states are inherited directly, which greatly improves the user experience, and the form transformation is triggered by the space bar, which is very convenient to operate.
2) The virtual vehicle in the related art has no attack capability. The control method of the virtual vehicle provided by the embodiment of the present application can control the virtual vehicle to charge into the target object when the virtual object separates from it, reproducing the cavalry-charge action seen in many historical war films, where a riderless horse crashes into the enemy ranks to break their formation.
Specific differences between the present application and the related art are shown in table 1.
TABLE 1
[Table 1 is reproduced as images in the original publication.]
Referring to fig. 15, fig. 15 is a schematic structural composition diagram of a control device of a virtual vehicle according to an embodiment of the present disclosure, and as shown in fig. 15, the control device 555 of a virtual vehicle according to an embodiment of the present disclosure includes:
a presenting module 5551, configured to present, in a screen of a virtual scene, a virtual vehicle in a first form and a virtual object riding on the virtual vehicle, where the virtual object and the virtual vehicle are in a first environment area adapted to the first form;
a transformation module 5552, configured to control the virtual vehicle to be transformed from the first form to the second form in response to a form transformation command for the virtual vehicle, and
a display module 5553, configured to display a picture of the virtual object in the second environment area by riding the virtual vehicle in the second form;
wherein the second environmental region is adapted to the second morphology and is different from the first environmental region.
In some embodiments, the presenting module is further configured to present the virtual object in an independent state in a screen of the virtual scene;
responding to a call instruction aiming at the virtual vehicle, presenting the virtual vehicle in a first form, and
and controlling the virtual object to take the virtual vehicle.
In some embodiments, the display module is further configured to receive a control instruction for the virtual vehicle in the second form;
and responding to the control instruction, and controlling the virtual object to move from the first environment area to the second environment area by taking the virtual vehicle with the second form.
In some embodiments, the transformation module is further configured to control the virtual object to move from the first environmental area to the second environmental area by riding the virtual vehicle during the process of transforming the virtual vehicle from the first configuration to the second configuration.
In some embodiments, the transformation module is further configured to receive a trigger operation for a key used for transforming the virtual vehicle form;
in response to the triggering operation, triggering a form transformation instruction for the virtual vehicle.
In some embodiments, the transformation module is further configured to control the virtual vehicle to be transformed from the second form to a third form in response to a form transformation instruction for the virtual vehicle, and
and displaying a picture of the virtual object riding the virtual vehicle in the third form in a third environment area.
In some embodiments, the transformation module is further configured to present a morphological transformation icon in the interface of the virtual scene;
presenting at least two form options of the virtual vehicle in response to a triggering operation for the form transformation icon;
and in response to a trigger operation for a target form selection item in the at least two form selection items, taking a form corresponding to the target form selection item as the second form of the virtual vehicle.
In some embodiments, the transformation module is further configured to obtain a first location of the virtual vehicle in the first environmental area when the form transformation instruction is received;
determining an idle area closest to the first position in the second environment area, wherein the idle area is an area without obstacles;
and controlling the virtual object to move to the idle area by riding the virtual vehicle in the second form.
In some embodiments, the apparatus further comprises:
a moving module, configured to receive a movement instruction for the virtual vehicle in the second form;
and responding to the movement instruction, controlling the virtual vehicle of the second form to move in the second environment area at a movement speed matched with the second form.
In some embodiments, the apparatus further comprises:
a moving module, configured to receive a movement instruction for the virtual vehicle in the second form;
when the second environment area is an aerial area of the virtual scene, controlling the virtual vehicle of the second form to fly in the aerial area at a target height from the ground so as to cross at least part of objects on the ground in response to the movement instruction.
In some embodiments, the transformation module is further configured to, when a target object exists in the first environment area, control the virtual object to separate from the virtual vehicle in the first form in response to a separation instruction for the virtual object and the virtual vehicle in the first form, and
control the virtual vehicle in the first form to release the attack skill to the target object, wherein the target object and the virtual object are in an adversarial relationship.
In some embodiments, the transformation module is further configured to control the virtual vehicle in the first form to move towards the virtual object;
and when the distance between the virtual vehicle in the first form and the virtual object meets a distance condition, controlling the virtual object to take the virtual vehicle in the first form.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the method for controlling the virtual vehicle according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 4.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for controlling a virtual vehicle, the method comprising:
presenting, in a picture of a virtual scene, a virtual vehicle in a first form and a virtual object riding on the virtual vehicle, wherein the virtual object and the virtual vehicle are in a first environment area adapted to the first form;
in response to a form conversion instruction for the virtual vehicle, controlling the virtual vehicle to convert from the first form to a second form, and
displaying a picture of the virtual object riding the virtual vehicle in the second form in a second environment area;
wherein the second environmental region is adapted to the second morphology and is different from the first environmental region.
2. The method of claim 1, wherein before the presenting a virtual vehicle in a first form and a virtual object riding on the virtual vehicle, the method further comprises:
presenting a virtual object in an independent state in a picture of a virtual scene;
responding to a call instruction aiming at the virtual vehicle, presenting the virtual vehicle in a first form, and
and controlling the virtual object to take the virtual vehicle.
3. The method of claim 1, wherein before the displaying a picture of the virtual object riding the virtual vehicle in the second form in a second environment area, the method further comprises:
receiving a control instruction for the virtual vehicle in the second form;
and responding to the control instruction, and controlling the virtual object to move from the first environment area to the second environment area by taking the virtual vehicle with the second form.
4. The method of claim 1, wherein the method further comprises:
and controlling the virtual object to move from the first environment area to the second environment area by using the virtual vehicle in the process of changing the virtual vehicle from the first form to the second form.
5. The method of claim 1, wherein before the controlling the virtual vehicle to transform from the first form to the second form, the method further comprises:
receiving a trigger operation on a key for transforming the form of the virtual vehicle;
in response to the triggering operation, triggering a form transformation instruction for the virtual vehicle.
6. The method of claim 1, wherein after the controlling the virtual vehicle to transform from the first form to the second form, the method further comprises:
controlling the virtual vehicle to transform from the second form to a third form in response to a form transformation instruction for the virtual vehicle, and
and displaying a picture of the virtual object riding the virtual vehicle in the third form in a third environment area.
7. The method of claim 1, wherein before the controlling the virtual vehicle to transform from the first form to the second form, the method further comprises:
presenting a form transformation icon in a screen of the virtual scene;
presenting at least two form options of the virtual vehicle in response to a triggering operation for the form transformation icon;
and in response to a trigger operation for a target form selection item in the at least two form selection items, taking a form corresponding to the target form selection item as the second form of the virtual vehicle.
8. The method of claim 1, wherein before the displaying a picture of the virtual object riding the virtual vehicle in the second form in a second environment area, the method further comprises:
acquiring the position of the virtual vehicle in the first environment area when the form transformation instruction is received;
determining an idle area closest to the position in the second environment area, wherein the idle area is an area without obstacles;
and controlling the virtual object to move to the idle area by riding the virtual vehicle in the second form.
9. The method of claim 1, wherein after the controlling the virtual vehicle to transform from the first form to the second form, the method further comprises:
receiving a movement instruction for the virtual vehicle in the second form;
and responding to the movement instruction, controlling the virtual vehicle of the second form to move in the second environment area at a movement speed matched with the second form.
10. The method of claim 1, wherein the method further comprises:
receiving a movement instruction for the virtual vehicle in the second form;
when the second environment area is an aerial area of the virtual scene, controlling the virtual vehicle of the second form to fly in the aerial area at a target height from the ground so as to cross at least part of objects on the ground in response to the movement instruction.
11. The method of claim 1, wherein before the controlling the virtual vehicle to transform from the first form to the second form, the method further comprises:
when a target object exists in the first environment area, in response to a separation instruction for the virtual object and the virtual vehicle in the first form, controlling the virtual object to separate from the virtual vehicle in the first form, and
controlling the virtual vehicle in the first form to release the attack skill to the target object, wherein the target object and the virtual object are in an adversarial relationship.
12. The method of claim 11, wherein after the controlling the virtual vehicle in the first form to release the attack skill to the target object, the method further comprises:
controlling the virtual vehicle in the first form to move towards the virtual object;
and when the distance between the virtual vehicle in the first form and the virtual object meets a distance condition, controlling the virtual object to take the virtual vehicle in the first form.
13. An apparatus for controlling a virtual vehicle, the apparatus comprising:
a presentation module, configured to present, in a picture of a virtual scene, a virtual vehicle in a first form and a virtual object riding on the virtual vehicle, wherein the virtual object and the virtual vehicle are in a first environment area adapted to the first form;
a transformation module for responding to the form transformation instruction of the virtual vehicle, controlling the virtual vehicle to be transformed from the first form to the second form, and
the display module is used for displaying a picture of the virtual object in a second environment area by taking the virtual vehicle in the second form;
wherein the second environmental region is adapted to the second morphology and is different from the first environmental region.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory to implement the method for controlling a virtual vehicle according to any one of claims 1 to 12.
15. A computer-readable storage medium storing executable instructions for implementing the method for controlling a virtual vehicle according to any one of claims 1 to 12 when executed by a processor.
CN202011010631.3A 2020-09-23 2020-09-23 Virtual carrier control method, device, equipment and computer readable storage medium Active CN112090067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011010631.3A CN112090067B (en) 2020-09-23 2020-09-23 Virtual carrier control method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011010631.3A CN112090067B (en) 2020-09-23 2020-09-23 Virtual carrier control method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112090067A true CN112090067A (en) 2020-12-18
CN112090067B CN112090067B (en) 2023-11-14

Family

ID=73755206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011010631.3A Active CN112090067B (en) 2020-09-23 2020-09-23 Virtual carrier control method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112090067B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114344906A (en) * 2022-01-11 2022-04-15 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for controlling partner object in virtual scene
WO2022142626A1 (en) * 2020-12-31 2022-07-07 腾讯科技(深圳)有限公司 Adaptive display method and apparatus for virtual scene, and electronic device, storage medium and computer program product
WO2022252905A1 (en) * 2021-05-31 2022-12-08 腾讯科技(深圳)有限公司 Control method and apparatus for call object in virtual scene, device, storage medium, and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6234901B1 (en) * 1996-11-22 2001-05-22 Kabushiki Kaisha Sega Enterprises Game device, picture data and flare forming method
CN110507994A (en) * 2019-09-05 2019-11-29 腾讯科技(深圳)有限公司 Control method, apparatus, equipment and the storage medium of virtual aircraft flight
CN110507990A (en) * 2019-09-19 2019-11-29 腾讯科技(深圳)有限公司 Interactive approach, device, terminal and storage medium based on virtual aircraft
CN111111195A (en) * 2019-12-26 2020-05-08 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
呆呆CUTE: "Minecraft: an integrated sea-land-air transforming vehicle that can fly over mountains and across seas", Retrieved from the Internet <URL:https://www.bilibili.com/video/av455375094/> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022142626A1 (en) * 2020-12-31 2022-07-07 腾讯科技(深圳)有限公司 Adaptive display method and apparatus for virtual scene, and electronic device, storage medium and computer program product
US11995311B2 (en) 2020-12-31 2024-05-28 Tencent Technology (Shenzhen) Company Limited Adaptive display method and apparatus for virtual scene, electronic device, storage medium, and computer program product
WO2022252905A1 (en) * 2021-05-31 2022-12-08 腾讯科技(深圳)有限公司 Control method and apparatus for call object in virtual scene, device, storage medium, and program product
CN114344906A (en) * 2022-01-11 2022-04-15 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for controlling partner object in virtual scene

Also Published As

Publication number Publication date
CN112090067B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN112090069B (en) Information prompting method and device in virtual scene, electronic equipment and storage medium
CN112121430B (en) Information display method, device, equipment and storage medium in virtual scene
CN112402960B (en) State switching method, device, equipment and storage medium in virtual scene
CN112090067B (en) Virtual carrier control method, device, equipment and computer readable storage medium
CN112691377A (en) Control method and device of virtual role, electronic equipment and storage medium
CN113181650A (en) Control method, device, equipment and storage medium for calling object in virtual scene
CN112416196B (en) Virtual object control method, device, equipment and computer readable storage medium
TWI831074B (en) Information processing methods, devices, equipments, computer-readable storage mediums, and computer program products in virtual scene
CN112121414B (en) Tracking method and device in virtual scene, electronic equipment and storage medium
CN112295230B (en) Method, device, equipment and storage medium for activating virtual props in virtual scene
CN112402959A (en) Virtual object control method, device, equipment and computer readable storage medium
CN112057860B (en) Method, device, equipment and storage medium for activating operation control in virtual scene
CN112402946B (en) Position acquisition method, device, equipment and storage medium in virtual scene
CN113181649A (en) Control method, device, equipment and storage medium for calling object in virtual scene
CN112138385B (en) Virtual shooting prop aiming method and device, electronic equipment and storage medium
CN113633964A (en) Virtual skill control method, device, equipment and computer readable storage medium
CN113797536A (en) Method, apparatus, storage medium, and program product for controlling object in virtual scene
CN111921198A (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN114404969A (en) Virtual article processing method and device, electronic equipment and storage medium
CN113101667A (en) Virtual object control method, device, equipment and computer readable storage medium
CN114344906A (en) Method, device, equipment and storage medium for controlling partner object in virtual scene
CN113018862B (en) Virtual object control method and device, electronic equipment and storage medium
CN113144603A (en) Method, device, equipment and storage medium for switching call objects in virtual scene
CN113274724B (en) Virtual object control method, device, equipment and computer readable storage medium
CN113144617B (en) Control method, device and equipment of virtual object and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210122

Address after: 5 / F, area C, 1801 Hongmei Road, Xuhui District, Shanghai, 201200

Applicant after: Tencent Technology (Shanghai) Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TA01 Transfer of patent application right
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035400

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant