WO2021155694A1 - Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium - Google Patents


Info

Publication number
WO2021155694A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
driving
virtual vehicle
destination
control
Prior art date
Application number
PCT/CN2020/128377
Other languages
French (fr)
Chinese (zh)
Inventor
刘智洪
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to KR1020227008387A (published as KR20220046651A)
Priority to JP2022520700A (published as JP7374313B2)
Publication of WO2021155694A1


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/5378 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, for displaying an additional top view, e.g. radar screens or maps
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/803 Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80 Features of games using an electronically generated display having two or more dimensions, specially adapted for executing a specific type of game
    • A63F 2300/8017 Driving on land or water; Flying

Definitions

  • This application relates to the field of human-computer interaction, and in particular to a method, device, terminal and storage medium for driving a vehicle in a virtual environment.
  • A first-person shooting (FPS) game is an application based on a three-dimensional virtual environment. Users can control virtual objects in the virtual environment to walk, run, climb, shoot, and so on, and multiple users can team up online to complete a task in the same virtual environment.
  • In the game, the user can control the virtual object to drive a virtual vehicle placed in the virtual environment (such as a car, an airplane, or a motorcycle), so that the virtual vehicle carries the virtual object to a destination.
  • the user needs to control the driving of the virtual vehicle through driving controls.
  • the driving controls include steering control, acceleration control, deceleration control, brake control, horn control, gear shift control, and so on.
  • the embodiments of the present application provide a method, a device, a terminal, and a storage medium for driving a vehicle in a virtual environment, which can reduce the operation difficulty for a user to control a virtual object driving a vehicle.
  • the technical solution is as follows:
  • an embodiment of the present application provides a method for driving a vehicle in a virtual environment.
  • the method is applied to a terminal, and the method includes:
  • displaying a driving picture and a map display control, where the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
  • in response to the virtual vehicle being located in an autonomous driving area in the virtual environment, receiving a marking operation on the map display control, the marking operation referring to an operation of marking a location in the map display control; and
  • in response to the marking operation, switching the virtual vehicle to an automatic driving mode, and controlling the virtual vehicle to automatically drive to the destination.
  • an embodiment of the present application provides a device for driving a vehicle in a virtual environment, and the device includes:
  • a display module, configured to display a driving picture and a map display control, where the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
  • a receiving module, configured to receive a marking operation on the map display control in response to the virtual vehicle being located in the autonomous driving area in the virtual environment, where the marking operation refers to an operation of marking a location in the map display control; and
  • a control module, configured to switch the virtual vehicle to an automatic driving mode in response to the marking operation, and control the virtual vehicle to automatically drive to the destination.
  • an embodiment of the present application provides a terminal.
  • the terminal includes a processor and a memory.
  • The memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for driving a vehicle in a virtual environment as described in the above aspect.
  • An embodiment of the present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for driving a vehicle in a virtual environment as described in the above aspect.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the method for driving a vehicle in a virtual environment provided in the above aspect.
  • When the virtual vehicle is driven into the automatic driving area in the virtual environment in the manual driving mode, if a marking operation on the map display control is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and is controlled to automatically drive to the destination, without the user manually controlling the virtual vehicle. This simplifies the process of controlling the virtual vehicle and reduces the difficulty for the user of controlling the virtual vehicle to drive in the virtual environment.
  • Fig. 1 shows a schematic diagram of an interface for manually controlling the driving process of a virtual vehicle in the related art;
  • Fig. 2 shows a schematic interface diagram of a process of driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application;
  • Fig. 3 shows a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
  • Fig. 4 shows a flowchart of a method for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application;
  • Fig. 5 shows a flowchart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of the present application;
  • Fig. 6 is a schematic diagram of a collision detection box corresponding to an autonomous driving area provided by an exemplary embodiment;
  • Fig. 7 is a schematic diagram of collision detection boxes corresponding to a virtual vehicle and an autonomous driving area provided by an exemplary embodiment;
  • Fig. 8 is a schematic diagram of the implementation of the process of determining the destination according to the marked location;
  • Fig. 9 is a schematic diagram of the implementation of controlling the automatic driving of the virtual vehicle according to the waypoints on the automatic driving path;
  • Fig. 10 is a schematic diagram of the interface of the user interface in the automatic driving mode and the manual driving mode;
  • Fig. 11 shows a flowchart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of the present application;
  • Fig. 12 is a structural block diagram of an apparatus for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application;
  • Fig. 13 shows a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Virtual environment: the virtual environment displayed (or provided) when an application is run on the terminal.
  • the virtual environment may be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application.
  • The following embodiments take the case in which the virtual environment is a three-dimensional virtual environment as an example.
  • Virtual object refers to the movable object in the virtual environment.
  • the movable objects may be virtual characters, virtual animals, cartoon characters, etc., such as: characters, animals, plants, oil barrels, walls, stones, etc. displayed in a three-dimensional virtual environment.
  • Optionally, the virtual object is a three-dimensional model created based on skeletal animation technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
  • Virtual vehicle: refers to a vehicle that a virtual object can drive and ride in the virtual environment, such as a virtual car, a virtual motorcycle, a virtual airplane, a virtual bicycle, a virtual tank, or a virtual boat. Virtual vehicles can be placed at random locations in the virtual environment; each virtual vehicle has its own shape and volume in the three-dimensional virtual environment, occupies a part of the space in the three-dimensional virtual environment, and can collide with other virtual items in the three-dimensional virtual environment (such as houses and trees).
  • First-person shooting game refers to a shooting game that users can play from a first-person perspective.
  • the screen of the virtual environment in the game is a screen that observes the virtual environment from the perspective of the first virtual object.
  • at least two virtual objects play a single-game battle mode in the virtual environment.
  • The virtual objects can avoid damage initiated by other virtual objects and dangers in the virtual environment (for example, a gas circle or a swamp) in order to survive in the virtual environment. When the life value of a virtual object in the virtual environment reaches zero, the life of the virtual object in the virtual environment ends, and the virtual object that survives last in the virtual environment is the winner.
  • Optionally, the battle starts at the moment when the first client joins the battle and ends at the moment when the last client exits the battle.
  • Each client can control one or more virtual objects in the virtual environment.
  • the competitive mode of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode, and the embodiment of the present application does not limit the battle mode.
  • UI control: refers to any visual control or element that can be seen on the user interface of the application, such as a picture, an input box, a text box, a button, or a label. Some UI controls respond to the user's operation; for example, the user triggers the UI control corresponding to a dagger prop to control the virtual object to switch the currently used gun to the dagger. For another example, when a vehicle is being driven, driving controls are displayed on the user interface, and the user can control the virtual object to drive the virtual vehicle by triggering the driving controls.
  • The method provided in this application can be applied to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting games, Multiplayer Online Battle Arena (MOBA) games, and so on. The following embodiments take application in a game as an example for description.
  • Games based on virtual environments are often composed of one or more maps of the game world.
  • The virtual environment in the game simulates real-world scenes. Users can control virtual objects in the game to perform actions in the virtual environment such as walking, running, jumping, shooting, fighting, driving, switching between virtual props, and using virtual props to damage other virtual objects. The gameplay is highly interactive, and multiple users can team up online for competitive matches.
  • FIG. 1 shows a schematic diagram of an interface for controlling the driving process of a virtual vehicle in the related art.
  • The user interface 100 displays a driving screen, and the user interface 100 further displays a map display control 101, driving controls (including a direction control 102, an acceleration control 103, and a brake control 104), and a vehicle fuel level indicator 105.
  • The user can view the current location of the virtual object and its surrounding environment through the map display control 101; the direction control 102 controls the forward, backward, and turning movements of the virtual vehicle; the acceleration control 103 controls acceleration of the virtual vehicle; the brake control 104 brings the virtual vehicle to a quick stop; and the remaining fuel of the virtual vehicle can be read from the vehicle fuel level indicator 105.
  • When using the above method to manually control the virtual vehicle, the user needs to operate different driving controls according to the environment the virtual vehicle is currently in and needs to select the driving route manually. For novice users the operation is difficult, and if the user operates improperly or chooses the wrong route, it takes a lot of time to drive to the destination.
  • An embodiment of the present application provides a method for driving a vehicle in a virtual environment, as shown in FIG. 2, which shows a schematic interface diagram of a process of driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application.
  • When the user controls the virtual vehicle to drive in the virtual environment through the driving controls (including the driving controls shown in Fig. 1), if the virtual vehicle enters the automatic driving area, the terminal displays automatic driving prompt information 106 in the user interface 100, prompting the user that the virtual vehicle can be switched to the automatic driving mode. Further, when a trigger operation on the map display control 101 is received, the enlarged map display control 101 is displayed in the user interface 100, and the destination 107 marked by the user in the map display control 101 is received. After the destination is marked, the terminal switches the virtual vehicle to the automatic driving mode and, in the automatic driving mode, controls the virtual vehicle to automatically drive to the destination without the user manually touching the driving controls. Moreover, after switching to the automatic driving mode, a driving mode switching control 108 is also displayed in the user interface. The user can switch the virtual vehicle back to the manual driving mode by clicking the driving mode switching control 108 and then manually control the virtual vehicle through the driving controls.
  • Compared with the related art, in which the user needs to manually operate the driving controls to control the virtual vehicle and needs to select the driving route autonomously during driving, with the method provided in the embodiments of this application the user only needs to control the virtual vehicle to travel to the automatic driving area and set the destination of automatic driving through the map display control; the terminal can then automatically determine the driving route and control the virtual vehicle to travel without manual operation by the user, which simplifies the control process of the virtual vehicle, reduces the difficulty of operating the virtual vehicle, and helps to shorten the time required for the virtual vehicle to reach its destination.
  • FIG. 3 shows a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application.
  • the implementation environment includes: a first terminal 120, a server 140, and a second terminal 160.
  • the first terminal 120 installs and runs an application program supporting the virtual environment.
  • the application program can be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer gun battle survival games.
  • the first terminal 120 is a terminal used by the first user.
  • the first user uses the first terminal 120 to control the first virtual object in the virtual environment to perform activities, including but not limited to: adjusting body posture, crawling, walking, running, At least one of riding, jumping, driving, shooting, throwing, switching virtual props, and using virtual props to damage other virtual objects.
  • the first virtual object is a first virtual character, such as a simulated character object or an animation character object.
  • the first terminal 120 is connected to the server 140 through a wireless network or a wired network.
  • the server 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 140 includes a processor 144 and a memory 142, and the memory 142 includes a display module 1421, a receiving module 1422, and a control module 1423.
  • the server 140 is used to provide background services for applications supporting the three-dimensional virtual environment.
  • the server 140 is responsible for the main calculation work, and the first terminal 120 and the second terminal 160 are responsible for the secondary calculation work; or, the server 140 is responsible for the secondary calculation work, and the first terminal 120 and the second terminal 160 are responsible for the main calculation work;
  • the server 140, the first terminal 120, and the second terminal 160 adopt a distributed computing architecture to perform collaborative computing.
  • the second terminal 160 installs and runs an application program supporting the virtual environment.
  • the application program can be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer gun battle survival games.
  • the second terminal 160 is a terminal used by the second user.
  • the second user uses the second terminal 160 to control the second virtual object in the virtual environment to perform activities, including but not limited to: adjusting body posture, crawling, walking, running, At least one of riding, jumping, driving, shooting, throwing, switching virtual props, and using virtual props to damage other virtual objects.
  • the second virtual object is a second virtual character, such as a simulated character object or an animation character object.
  • The first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same team or the same organization, have a friend relationship, or have temporary communication permissions.
  • the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different control system platforms.
  • The first terminal 120 may generally refer to one of multiple terminals, and the second terminal 160 may generally refer to another of the multiple terminals. This embodiment only takes the first terminal 120 and the second terminal 160 as examples.
  • the device types of the first terminal 120 and the second terminal 160 are the same or different, and the device types include at least one of a smart phone, a tablet computer, an e-book reader, a digital player, a laptop computer, and a desktop computer.
  • The following embodiments are described by taking an example in which the terminal is a smartphone.
  • the number of the aforementioned terminals may be more or less. For example, there may be only one terminal, or there may be dozens or hundreds of terminals, or more.
  • the embodiments of the present application do not limit the number of terminals and device types.
  • FIG. 4 shows a flowchart of a method for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application.
  • the method is used in the first terminal 120 or the second terminal 160 in the implementation environment shown in FIG. 3 or other terminals in the implementation environment as an example for description.
  • the method includes the following steps.
  • Step 401: Display a driving picture and a map display control.
  • the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode.
  • the driving screen and the map display control are displayed in the user interface, and the map display control is superimposed and displayed on the upper layer of the driving screen.
  • the user interface is an interface of an application program supporting a virtual environment
  • the user interface includes a virtual environment screen and controls corresponding to various functions.
  • the virtual environment picture is the driving picture.
  • the virtual environment screen is a screen for observing the virtual environment from the perspective of the virtual object.
  • the angle of view refers to the viewing angle when the virtual object is observed in the virtual environment from the first person perspective or the third person perspective.
  • the angle of view is the angle when the virtual object is observed through the camera model in the virtual environment.
  • the camera model automatically follows the virtual object in the virtual environment, that is, when the position of the virtual object in the virtual environment changes, the camera model follows the position of the virtual object in the virtual environment and changes at the same time, and the camera The model is always within the preset distance range of the virtual object in the virtual environment.
  • the relative position of the camera model and the virtual object does not change.
  • the camera model refers to the three-dimensional model located around the virtual object in the virtual environment.
  • When the first-person perspective is adopted, the camera model is located near the head of the virtual object or at the head of the virtual object.
  • When the third-person perspective is adopted, the camera model can be located behind the virtual object and bound to the virtual object, or can be located at any position at a preset distance from the virtual object. The camera model can be used to observe the virtual object in the virtual environment from different angles. Optionally, when the third-person perspective is an over-the-shoulder perspective, the camera model is located behind the virtual object (for example, at the head and shoulders of the virtual character).
  • In addition to the first-person perspective and the third-person perspective, other perspectives may be used, such as a top-view perspective. When a top-view perspective is adopted, the camera model can be located above the head of the virtual object; the top-view perspective observes the virtual environment from an angle looking down from the air.
  • the camera model is not actually displayed in the virtual environment, that is, the camera model is not displayed in the virtual environment displayed on the user interface.
  • Optionally, one virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center; for example, any point of the virtual object serves as the rotation center around which the camera model rotates. During the rotation, the camera model is not only rotated in angle but also offset in displacement, while the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere whose center is the rotation center, where any point of the virtual object may be the head or the torso of the virtual object, or any point around the virtual object. Optionally, when the camera model observes the virtual object, the center of the angle of view of the camera model points in the direction from the point on the spherical surface where the camera model is located toward the center of the sphere.
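  • For illustration only, the spherical camera behavior described above can be sketched in a few lines: the camera sits on a sphere of fixed radius around a rotation center, and its view direction points from the camera position toward that center. The function name, the yaw/pitch parameterization, and the choice of y as the up axis are assumptions made for this sketch and are not taken from this application.

```python
import math

def orbit_camera(center, yaw_deg, pitch_deg, radius):
    """Place a camera on a sphere of fixed radius around `center` (x, y, z).

    The look direction points from the camera position toward the center,
    matching the description that the view angle points to the sphere center.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    # Spherical-to-Cartesian offset; y is treated as the up axis (an assumption).
    offset = (
        radius * math.cos(pitch) * math.sin(yaw),
        radius * math.sin(pitch),
        radius * math.cos(pitch) * math.cos(yaw),
    )
    position = tuple(c + o for c, o in zip(center, offset))
    look_dir = tuple((c - p) / radius for c, p in zip(center, position))  # unit vector toward the center
    return position, look_dir

# Example: a camera 6 units away, behind and above a character whose head is at (10, 1.7, 4).
pos, look = orbit_camera(center=(10.0, 1.7, 4.0), yaw_deg=180.0, pitch_deg=20.0, radius=6.0)
```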
  • the driving picture is a picture when observing in a virtual environment with a third-person perspective.
  • the driving picture may also be a picture when observed in a virtual environment using a first-person perspective, which is not limited in this embodiment.
  • other elements in the virtual environment are also displayed in the driving picture, including at least one element of mountains, flatlands, rivers, lakes, oceans, deserts, sky, plants, and buildings.
  • the map display control is a control used to display the map of all or part of the area in the virtual environment.
  • the map screen displayed in the map display control is the screen when observing the virtual environment from a bird's-eye view.
  • In addition to displaying the virtual environment, the map display control also displays an object identifier of the current virtual object.
  • the object identifier is displayed in the center of the map displayed by the map display control, and when the position of the virtual object in the virtual environment changes, the map displayed by the map display control also changes accordingly.
  • In addition to the driving screen and the map display control, the user interface also displays driving controls used for controlling the virtual vehicle in the manual driving mode.
  • the type and number of driving controls corresponding to different virtual vehicles may be different.
  • For example, when the virtual object drives a virtual car, the driving controls displayed on the user interface may include a direction control, an acceleration control, and a brake control; when the virtual object drives a virtual motorcycle, the driving controls displayed on the user interface may include a direction control, an acceleration control, a brake control, a head-up control, and a head-down control.
  • the embodiments of the present application do not limit the types and distribution positions of the driving controls in the user interface.
  • the user interface 100 includes a direction control 102, an acceleration control 103 and a brake control 104.
  • Step 402 In response to the virtual vehicle being located in the autonomous driving area in the virtual environment, receiving a marking operation on the map display control, the marking operation refers to an operation of marking a location in the map display control.
  • the virtual vehicle is not capable of performing automatic driving in any area in the virtual environment, but is only capable of performing automatic driving in the automatic driving area.
  • an autonomous driving area is preset in the virtual environment, and when the virtual vehicle is located in the autonomous driving area, the user can set the destination of the automatic driving through the map display control.
  • In a possible implementation, the automatic driving area includes a preset road in the virtual environment; that is, the user needs to manually control the virtual vehicle to drive onto the preset road before setting the destination of automatic driving.
  • Optionally, the automatic driving area may also be another simple environment area in the virtual environment, that is, an area containing fewer environmental elements; this embodiment does not limit the specific type of the automatic driving area.
  • Optionally, the terminal detects in real time whether the virtual vehicle is located in the automatic driving area, and if it detects that the virtual vehicle is located in the automatic driving area, a prompt message is displayed on the user interface, prompting the user to set the destination of automatic driving through the map display control so as to enter the automatic driving mode.
  • Optionally, when receiving a viewing operation on the map display control, the terminal displays the enlarged map and further receives a marking operation on the map display control, where the marking operation may be a click operation on a certain region of the map; correspondingly, the click position corresponding to the click operation is the position of the marked location.
  • It should be noted that when the virtual vehicle is not located in the automatic driving area, the user can also mark the map display control, but the location indicated by the marking operation is not used to control the virtual vehicle to drive automatically; it only serves as a location mark indicating the relative position between the marked location and the current location of the virtual object.
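  • Mapping the click position in the map display control to a marked location in the virtual environment is essentially a linear transform between the control's pixel space and the world-coordinate range covered by the map. The sketch below is a hypothetical illustration under that assumption (the axis convention and all names are invented for the example), not the implementation described in this application.

```python
def map_click_to_world(click_px, control_size, world_bounds):
    """Convert a click inside the map display control to virtual-environment coordinates.

    click_px:     (u, v) pixel position of the click inside the control.
    control_size: (width, height) of the map display control in pixels.
    world_bounds: ((x_min, z_min), (x_max, z_max)) area of the virtual environment shown by the map.
    """
    (u, v), (w, h) = click_px, control_size
    (x_min, z_min), (x_max, z_max) = world_bounds
    x = x_min + (u / w) * (x_max - x_min)
    # Screen v grows downward while the map's z grows upward (assumed convention).
    z = z_max - (v / h) * (z_max - z_min)
    return (x, z)

# Example: a tap at pixel (512, 200) on a 1024x1024 map covering an 8000x8000 area.
marked_location = map_click_to_world((512, 200), (1024, 1024), ((0, 0), (8000, 8000)))
```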
  • In addition, when the virtual object controlled by the terminal is the driver of the virtual vehicle, the user can perform the marking operation; when the virtual object is a passenger of the virtual vehicle, the user cannot perform the marking operation (that is, the user does not have the authority to set automatic driving).
  • Step 403 In response to the marking operation, the virtual vehicle is switched to an automatic driving mode, and the virtual vehicle is controlled to automatically drive to the destination.
  • the terminal switches the virtual vehicle to the automatic driving mode according to the marking operation, and determines the destination of the automatic driving, thereby controlling the virtual vehicle to automatically drive to the destination.
  • the driving path of the virtual vehicle from the current location to the destination is automatically planned by the terminal.
  • Optionally, all virtual vehicles in the virtual environment support the automatic driving mode; or, only preset virtual vehicles in the virtual environment support the automatic driving mode, and the terminal switches such a virtual vehicle to the automatic driving mode in response to the marking operation. For example, the preset virtual vehicles may include virtual cars, virtual tanks, and virtual ships, but not virtual bicycles and virtual motorcycles.
  • the terminal displays mode prompt information to remind the user that the virtual vehicle is currently in the automatic driving mode.
  • Optionally, in the automatic driving mode, the user cannot manually control the virtual vehicle; or, the user can still manually control the virtual vehicle through the driving controls, and once the virtual vehicle is manually controlled, the virtual vehicle exits the automatic driving mode.
  • Optionally, if a new marking operation is received during automatic driving, the terminal updates the destination according to the marking operation and controls the virtual vehicle to automatically drive to the updated destination.
  • In summary, in the method provided by this embodiment, when the virtual vehicle drives into the automatic driving area in the virtual environment in the manual driving mode, if a marking operation on the map display control is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and is controlled to automatically drive to the destination. The user does not need to manually control the virtual vehicle, which simplifies the process of controlling the virtual vehicle and reduces the difficulty for the user of controlling the virtual vehicle to drive in the virtual environment.
  • In addition, when the user manually controls the virtual vehicle, the terminal needs to detect control operations at a high frequency and respond to them (for example, it needs to respond to touch operations received on the touch screen), which results in a large amount of data processing on the terminal and in turn increases the power consumption of the terminal. With the solution provided by the embodiments of this application, the terminal can control the virtual vehicle to automatically drive to the destination based on the marked location, and the user does not need to perform control operations during the automatic driving process, which reduces the frequency of control-operation detection and response on the terminal while the virtual vehicle is driving, reduces the amount of data processing on the terminal, and helps to reduce the power consumption of the terminal.
  • Moreover, in the automatic driving mode, the terminal only needs to send the destination to other terminals through the server, and the other terminals can reproduce the automatic driving of the virtual vehicle according to the destination, without the server having to forward real-time control data and location data to the other terminals, which reduces the data forwarding volume of the server and relieves the data forwarding pressure on the server.
  • Different from the automatic driving function of a real vehicle (which requires complex image recognition technologies such as vehicle recognition and lane recognition), in the embodiments of this application, in order to reduce the difficulty and computational complexity of implementing the automatic driving function, the virtual vehicle can perform automatic driving only in the automatic driving area (such as a preset road); that is, the automatic driving path of the virtual vehicle is located in the automatic driving area.
  • an illustrative embodiment is used to describe the process of realizing the automatic driving function.
  • FIG. 5 shows a flowchart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of the present application.
  • the method is used in the first terminal 120 or the second terminal 160 in the implementation environment shown in FIG. 3 or other terminals in the implementation environment as an example for description.
  • the method includes the following steps.
  • Step 501: Display a driving picture and a map display control.
  • the driving picture is a picture of a virtual object driving a virtual vehicle driving in a virtual environment, and the virtual vehicle is in a manual driving mode.
  • step 501 For the implementation of step 501, reference may be made to step 401 above, and details are not described herein again in this embodiment.
  • Step 502 In response to the virtual vehicle being located in the autonomous driving area in the virtual environment, receiving a marking operation on the map display control, the marking operation refers to an operation of marking a location on the map.
  • the automatic driving area in the virtual environment is also provided with a collision detection box, and the collision detection box is used to detect that other virtual objects in the virtual environment enter the automatic driving area.
  • Schematically, as shown in Fig. 6, each preset road corresponds to its own collision detection box 61 (the dotted area in the figure is the range of the collision detection box).
  • In a possible implementation, in response to the first collision detection box colliding with the second collision detection box, the terminal determines that the virtual vehicle is located in the automatic driving area, where the first collision detection box is the collision detection box corresponding to the virtual vehicle, and the second collision detection box is the collision detection box corresponding to the automatic driving area.
  • Schematically, as shown in Fig. 7, when the collision detection box corresponding to the virtual vehicle collides with the collision detection box corresponding to the automatic driving area, the terminal determines that the virtual vehicle is located in the automatic driving area.
  • In other possible implementations, the terminal may also determine whether the virtual vehicle is located in the automatic driving area according to the coordinates of the virtual vehicle in the virtual environment and the area coordinate range corresponding to the automatic driving area (when the coordinates are within the area coordinate range, it is determined that the virtual vehicle is located in the automatic driving area), which is not limited in this embodiment.
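  • Both checks mentioned above, the collision-box overlap test and the coordinate-range test, reduce to simple geometric comparisons. The following minimal sketch assumes axis-aligned boxes; the class and field names are illustrative assumptions rather than the data structures used by this application.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned collision detection box given by its minimum and maximum corners."""
    min_corner: tuple  # (x, y, z)
    max_corner: tuple  # (x, y, z)

def boxes_overlap(a: Box, b: Box) -> bool:
    """First check: the vehicle's collision box collides with the area's collision box."""
    return all(a.min_corner[i] <= b.max_corner[i] and b.min_corner[i] <= a.max_corner[i]
               for i in range(3))

def point_in_area(position, area: Box) -> bool:
    """Second check: the vehicle's coordinates fall within the area's coordinate range."""
    return all(area.min_corner[i] <= position[i] <= area.max_corner[i] for i in range(3))

vehicle_box = Box((10, 0, 5), (12, 2, 9))
road_box = Box((0, 0, 0), (200, 5, 10))
in_autodrive_area = boxes_overlap(vehicle_box, road_box) or point_in_area((11, 1, 7), road_box)
```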
  • the terminal receives the marking operation on the map display control.
  • For the process of receiving the marking operation, reference may be made to step 402 above, and details are not repeated in this embodiment.
  • Step 503 Determine the destination according to the marked location indicated by the marking operation, and the destination is located in the autonomous driving area.
  • After receiving the marking operation, the terminal determines a destination within the automatic driving area according to the marked location indicated by the marking operation.
  • Optionally, when the marked location is located in the automatic driving area, the terminal determines the marked location as the destination.
  • the terminal may determine whether the marked location is located in the automatic driving area according to the location coordinates of the marked location and the area coordinate range of the automatic driving area. This embodiment does not limit the specific determination method.
  • When the marked location is located outside the automatic driving area, the terminal determines the location in the automatic driving area closest to the marked location as the destination.
  • That is, when the marked location is not in the automatic driving area, the terminal automatically determines the location in the automatic driving area closest to the marked location as the destination, so that subsequent automatic driving can be carried out based on the destination.
  • Schematically, as shown in Fig. 8, the automatic driving area is a preset road in the virtual environment, and the terminal determines the location on the preset road closest to the marked location 81 as the destination 82.
  • In other possible implementations, when the marked location is located outside the automatic driving area, the terminal may also display marking prompt information, which is used to prompt the user to set the destination within the automatic driving area, until the marked location indicated by the marking operation is located in the automatic driving area.
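  • When the marked location falls outside the automatic driving area, the destination is the point of the area closest to the mark. Treating a preset road as a polyline through its waypoints, a minimal nearest-point projection can be sketched as follows; the road representation and 2D coordinates are assumptions made only for this example.

```python
def closest_point_on_road(mark, road_points):
    """Project a 2D marked location onto the nearest segment of a road polyline."""
    def closest_on_segment(p, a, b):
        ax, az = a
        bx, bz = b
        px, pz = p
        dx, dz = bx - ax, bz - az
        seg_len_sq = dx * dx + dz * dz
        t = 0.0 if seg_len_sq == 0 else max(0.0, min(1.0, ((px - ax) * dx + (pz - az) * dz) / seg_len_sq))
        return (ax + t * dx, az + t * dz)

    candidates = [closest_on_segment(mark, a, b) for a, b in zip(road_points, road_points[1:])]
    return min(candidates, key=lambda c: (c[0] - mark[0]) ** 2 + (c[1] - mark[1]) ** 2)

# Example: the marked location lies off the road; the destination is its projection onto the road.
road = [(0, 0), (50, 0), (50, 40), (120, 40)]
destination = closest_point_on_road((60, 10), road)   # -> (50, 10)
```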
  • Step 504 Determine an automatic driving path according to the current location and destination of the virtual vehicle, and the automatic driving path is located in the automatic driving area.
  • the terminal determines the automatic driving path in the automatic driving area.
  • the automatic driving path is the shortest path from the current location to the destination.
  • In a possible implementation, using path branch points as nodes, the terminal determines at least one candidate path between the current location and the destination through a depth-first search (DFS) algorithm (each node is traversed at most once), and then, according to the length of each candidate path, determines the shortest candidate path as the automatic driving path, where the path branch points are preset branch points in the automatic driving area.
  • the terminal may also determine the candidate path through other graph algorithms, which is not limited in this embodiment.
  • In other possible implementations, after determining at least one candidate route through the graph algorithm, the terminal displays each candidate route on the map and determines the automatic driving route according to the user's selection operation, which is not limited in this embodiment.
  • Schematically, as shown in Fig. 9, the terminal determines an automatic driving path 91.
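  • The path search described above can be sketched as a depth-first enumeration of simple paths over the graph of path branch points, after which the shortest candidate is kept. The adjacency map, coordinates, and names below are assumptions made for illustration, not the actual data structures of this application.

```python
import math

def dfs_candidate_paths(graph, start, goal):
    """Enumerate simple paths (each node traversed at most once) from start to goal."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            paths.append(path)
            continue
        for neighbor in graph[node]:
            if neighbor not in path:  # traverse each node only once per candidate path
                stack.append((neighbor, path + [neighbor]))
    return paths

def path_length(path, positions):
    return sum(math.dist(positions[a], positions[b]) for a, b in zip(path, path[1:]))

# Branch points of the automatic driving area (coordinates are illustrative).
positions = {"K": (0, 0), "G": (40, 0), "D": (40, 30), "E": (80, 30), "F": (120, 30)}
graph = {"K": ["G"], "G": ["K", "D"], "D": ["G", "E"], "E": ["D", "F"], "F": ["E"]}
candidates = dfs_candidate_paths(graph, "K", "F")
auto_path = min(candidates, key=lambda p: path_length(p, positions))  # shortest candidate path
```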
  • Step 505: Switch the virtual vehicle to the automatic driving mode, and control the virtual vehicle to drive to the destination according to the automatic driving path.
  • In a possible implementation, waypoints are preset in the automatic driving area, and accordingly the terminal controls the virtual vehicle to automatically drive to the destination according to the waypoints on the automatic driving path.
  • this step includes the following sub-steps.
  • Schematically, as shown in Fig. 9, a number of waypoints 92 are set on the preset road (the automatic driving area) in the virtual environment, and the waypoints on the automatic driving path 91 include K, G, D, E, and F.
  • the order of waypoints refers to the order of the waypoints passed by from the current location to the destination on the automatic driving path.
  • The order of the waypoints is K→G→D→E→F.
  • the terminal controlling the virtual vehicle to travel to the destination according to the order of the waypoints includes the following steps.
  • Step 1: The terminal determines a first driving direction according to the current location and the first waypoint on the automatic driving path, thereby controlling the virtual vehicle to drive to the first waypoint in the first driving direction.
  • Schematically, since no waypoint is set at the current location of the virtual vehicle, the terminal first controls the virtual vehicle to drive to waypoint K (the first waypoint).
  • Optionally, if a waypoint is set at the current location of the virtual vehicle, the terminal directly executes step 2.
  • Step 2: The terminal controls the virtual vehicle to drive from the n-th waypoint to the (n+1)-th waypoint in a second driving direction, where the second driving direction points from the n-th waypoint to the (n+1)-th waypoint, and n is an integer greater than or equal to 1 and less than or equal to k-1.
  • The path between adjacent waypoints in the automatic driving area is a straight line (or an approximately straight line) and contains no obstacles. Therefore, when the virtual vehicle travels to the n-th waypoint, the terminal determines the second driving direction according to the n-th waypoint and the (n+1)-th waypoint, thereby controlling the virtual vehicle to travel to the (n+1)-th waypoint in the second driving direction. By repeating this step, the virtual vehicle travels to the k-th waypoint (that is, the last waypoint on the automatic driving path).
  • the terminal controls the virtual vehicle to pass through waypoints K, G, D, E, and F in sequence.
  • Step 3: The terminal determines a third driving direction according to the k-th waypoint and the destination, so as to control the virtual vehicle to drive to the destination in the third driving direction.
  • Schematically, the terminal controls the virtual vehicle to automatically drive to the destination in the direction in which waypoint F points to the destination.
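  • Steps 1 to 3 above amount to repeatedly pointing the vehicle at the next target (the first waypoint, then each following waypoint, and finally the destination) and advancing along that driving direction. The following compressed sketch simulates that behavior in one call; in a game the loop body would run once per frame, and all names and numbers are illustrative assumptions.

```python
import math

def drive_along_waypoints(position, waypoints, destination, speed, dt, arrive_radius=1.0):
    """Advance the vehicle toward each waypoint in order, then toward the destination."""
    targets = list(waypoints) + [destination]  # e.g. K, G, D, E, F, then the destination
    for target in targets:
        while math.dist(position, target) > arrive_radius:
            dx, dz = target[0] - position[0], target[1] - position[1]
            dist = math.hypot(dx, dz)
            step = min(speed * dt, dist)
            # Driving direction: unit vector from the current position to the next target.
            position = (position[0] + dx / dist * step, position[1] + dz / dist * step)
    return position

final_pos = drive_along_waypoints(
    position=(0, -5),
    waypoints=[(0, 0), (40, 0), (40, 30), (80, 30), (120, 30)],
    destination=(130, 30),
    speed=12.0,
    dt=0.016,
)
```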
  • Step 506 In response to the virtual vehicle driving to the destination, the virtual vehicle is switched to the manual driving mode, and the virtual vehicle is controlled to stop driving.
  • After controlling the virtual vehicle to drive to the destination through the above steps, the terminal automatically switches the virtual vehicle to the manual driving mode and controls the virtual vehicle to stop at the destination.
  • Optionally, after the virtual vehicle is switched to the manual driving mode, the terminal can automatically display the marked location on the map display control, so that the user can manually control the virtual vehicle to drive to the marked location according to the relative position relationship between the marked location and the destination.
  • the terminal automatically controls the virtual vehicle to turn to the direction of the destination.
  • In this embodiment, the terminal determines, within the automatic driving area, the destination closest to the marked location and then determines the automatic driving path based on the destination and the current location, which avoids driving abnormalities caused by the virtual vehicle automatically driving into a non-automatic-driving area.
  • In addition, the terminal can determine the driving direction of the virtual vehicle according to the waypoints on the automatic driving path and then control the virtual vehicle to drive automatically according to the driving direction; that is, the terminal only needs to process and calculate a small amount of data to realize automatic driving, which reduces the difficulty and the amount of calculation involved in realizing automatic driving.
  • the collision detection box is used to determine whether the virtual vehicle is located in the automatic driving area, which helps to simplify the process of determining the location of the virtual vehicle.
  • In a possible implementation, in the manual driving mode, the driving controls are displayed in the user interface, and the driving controls are in a clickable state.
  • In the automatic driving mode, the terminal sets the driving controls in the user interface to a non-clickable state, or cancels the display of the driving controls.
  • In response to the virtual vehicle driving to the destination, the terminal sets the driving controls to a clickable state, or restores the display of the driving controls, so that the user can continue to manually control the virtual vehicle to travel.
  • In a possible implementation, in the manual driving mode, the virtual object driving the virtual vehicle cannot use virtual props; for example, the virtual object cannot use virtual supply bottles, cannot use virtual attack props to attack other virtual objects in the virtual environment, and cannot use virtual throwing props, and so on.
  • Correspondingly, in the manual driving mode, the terminal does not display the prop use control.
  • In the automatic driving mode, the terminal displays the prop use control, so that the user can use virtual props by triggering the prop use control.
  • Optionally, the prop use control is a use control corresponding to a virtual attack prop, such as the shooting control of a virtual rifle; or a use control corresponding to a virtual supply prop, such as the use control of a virtual bandage; or a use control corresponding to a virtual throwing prop, such as the throwing control of a virtual grenade. This embodiment does not limit the type of the prop use control.
  • When receiving a trigger operation on the prop use control, the terminal controls the virtual object to use the virtual prop. It should be noted that when the virtual vehicle switches back to the manual driving mode, the terminal cancels the display of the prop use control.
  • Schematically, as shown in Fig. 10, in the automatic driving mode, the terminal cancels the display of the driving control 1004 in the user interface 1000 and displays the aiming control 1001 and the firing control 1002 in the user interface 1000.
  • Since the driving controls in the user interface are set to a non-clickable state, or the display of the driving controls is canceled, the user cannot manually control the virtual vehicle during the automatic driving process.
  • In a possible implementation, in the automatic driving mode, the user interface displays a driving mode switching control; when a trigger operation on the driving mode switching control is received, the terminal switches the virtual vehicle to the manual driving mode, and sets the driving controls to a clickable state or restores the display of the driving controls.
  • Schematically, as shown in Fig. 10, a driving mode switching control 1003 is displayed in the user interface 1000.
  • When a trigger operation on the driving mode switching control 1003 is received, the terminal controls the virtual vehicle to exit the automatic driving mode, displays the driving control 1004 in the user interface 1000 again, and cancels the display of the attack controls at the same time.
  • In this embodiment, in the automatic driving mode, the terminal sets the driving controls to a non-clickable state or cancels the display of the driving controls, to avoid exiting the automatic driving mode due to the user accidentally touching a driving control; at the same time, the terminal displays the driving mode switching control on the user interface, so that the user can exit the automatic driving mode by triggering the control.
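  • The control-state changes described in this section (disabling or hiding the driving controls, showing the prop use controls and the driving mode switching control in the automatic driving mode, and restoring everything in the manual driving mode) can be grouped into a single mode-switch handler. The sketch below is purely illustrative; the control names and the UI representation are assumptions.

```python
def apply_driving_mode(ui, mode):
    """Toggle UI control state when the virtual vehicle switches driving mode.

    `ui` is assumed to map control names to dicts with 'visible' and 'clickable' flags.
    """
    auto = (mode == "auto")
    for name in ("direction", "accelerate", "brake"):  # driving controls
        ui[name]["clickable"] = not auto   # non-clickable in the automatic driving mode
        ui[name]["visible"] = not auto     # or: cancel their display entirely
    ui["mode_switch"]["visible"] = auto    # lets the user exit the automatic driving mode
    for name in ("aim", "fire"):           # prop use controls shown while auto-driving
        ui[name]["visible"] = auto
    return ui

ui_state = {name: {"visible": True, "clickable": True}
            for name in ("direction", "accelerate", "brake", "mode_switch", "aim", "fire")}
ui_state = apply_driving_mode(ui_state, "auto")
```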
  • Step 1101: Manually control the virtual vehicle.
  • Step 1102: Determine whether the virtual vehicle has entered the automatic driving area. If it has entered the automatic driving area, perform step 1103; if it has not entered the automatic driving area, return to step 1101.
  • Step 1103: Display a prompt message indicating that automatic driving is available.
  • Step 1104: Determine whether a marking operation on the map is received. If it is received, perform step 1105; if it is not received, return to step 1103.
  • Step 1105: Display the destination corresponding to the marking operation on the map.
  • Step 1106: Determine whether a destination determination operation is received. If it is received, perform step 1107; if it is not received, return to step 1105.
  • Step 1107: Enter the automatic driving mode.
  • Step 1108: Determine whether the automatic driving path has been determined. If it has been determined, perform step 1109; if it has not been determined, return to step 1107.
  • Step 1109: Control the virtual vehicle to travel according to the waypoints on the automatic driving path.
  • Step 1110: Determine whether the destination has been reached. If it has been reached, perform step 1111; if it has not been reached, return to step 1109.
  • Step 1111: Control the virtual vehicle to stop driving.
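  • Read together, steps 1101 to 1111 form a small control loop: manual driving until the vehicle enters the automatic driving area, prompting, waiting for a mark and its confirmation, then automatic driving along the planned waypoints until the destination is reached. The condensed sketch below only mirrors that ordering; `vehicle`, `ui`, and `map_control` are hypothetical helper objects invented for the example.

```python
def autopilot_flow(vehicle, ui, map_control):
    """Condensed control flow mirroring steps 1101-1111 (helper objects are hypothetical)."""
    while not vehicle.in_autodrive_area():                # steps 1101-1102: manual driving
        vehicle.apply_manual_input(ui.read_driving_input())
    mark = None
    while mark is None:                                   # steps 1103-1104: prompt and wait for a mark
        ui.show_prompt("Automatic driving is available")
        mark = map_control.poll_mark()
    confirmed = False
    while not confirmed:                                  # steps 1105-1106: show and confirm the destination
        map_control.show_destination(mark)
        confirmed = ui.wait_destination_confirmation()
    vehicle.enter_auto_mode()                             # step 1107
    path = vehicle.plan_auto_path(mark)                   # step 1108: determine the automatic driving path
    while not vehicle.reached(path.destination):          # steps 1109-1110: follow the waypoints until arrival
        vehicle.drive_toward(path.next_waypoint())
    vehicle.stop()                                        # step 1111
```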
  • Fig. 12 is a structural block diagram of a device for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application, and the device includes:
  • the display module 1201 is configured to display a driving picture and a map display control.
  • the driving picture is a picture of a virtual object driving a virtual vehicle driving in a virtual environment, and the virtual vehicle is in a manual driving mode;
  • the receiving module 1202 is configured to receive a marking operation on the map display control in response to the virtual vehicle being located in the autonomous driving area in the virtual environment, where the marking operation refers to an operation of marking a location in the map display control;
  • the control module 1203 is configured to switch the virtual vehicle to an automatic driving mode in response to the marking operation, and control the virtual vehicle to automatically drive to the destination.
  • Optionally, the control module 1203 is configured to:
  • determine the destination according to the marked location indicated by the marking operation, the destination being located in the autonomous driving area; determine the automatic driving path according to the current location of the virtual vehicle and the destination, the automatic driving path being located in the autonomous driving area; and switch the virtual vehicle to the automatic driving mode, and control the virtual vehicle to drive to the destination according to the automatic driving path.
  • Optionally, when controlling the virtual vehicle to drive to the destination according to the automatic driving path, the control module 1203 is configured to:
  • control the virtual vehicle to travel to the destination according to the waypoint order of at least two waypoints on the automatic driving path.
  • the autonomous driving path includes k waypoints, and k is an integer greater than or equal to 2;
  • Optionally, when controlling the virtual vehicle to travel to the destination according to the waypoint order of at least two of the waypoints, the control module 1203 is configured to:
  • control the virtual vehicle to drive to the first waypoint in a first driving direction, the first driving direction pointing from the current location to the first waypoint; control the virtual vehicle to drive from the n-th waypoint to the (n+1)-th waypoint in a second driving direction, the second driving direction pointing from the n-th waypoint to the (n+1)-th waypoint, where n is an integer greater than or equal to 1 and less than or equal to k-1; and
  • control the virtual vehicle to drive from the k-th waypoint to the destination in a third driving direction, the third driving direction pointing from the k-th waypoint to the destination.
  • Optionally, when determining the destination according to the marked location indicated by the marking operation, the control module 1203 is configured to:
  • in response to the marked location indicated by the marking operation being located outside the autonomous driving area,
  • determine the location in the autonomous driving area closest to the marked location as the destination, or display marking prompt information, where the marking prompt information is used to prompt the user to set the destination within the autonomous driving area.
  • control module 1203 when determining the automatic driving path according to the current location of the virtual vehicle and the destination, is configured to:
  • using path branch points in the virtual environment as nodes, determine at least one candidate path between the current location and the destination through a depth-first search algorithm, the path branch points being preset branch points in the autonomous driving area; and
  • the shortest candidate path is determined as the automatic driving path.
  • Optionally, the receiving module 1202 is configured to:
  • receive the marking operation on the map display control in response to the first collision detection box colliding with the second collision detection box, where the first collision detection box is the collision detection box corresponding to the virtual vehicle, and the second collision detection box is the collision detection box corresponding to the autonomous driving area.
  • the device further includes:
  • the first switching module is configured to switch the virtual vehicle to the manual driving mode in response to the virtual vehicle driving to the destination, and control the virtual vehicle to stop driving.
  • driving controls are displayed in the manual driving mode, and the driving controls are in a clickable state;
  • the device also includes:
  • the setting module is used to set the driving control to a non-clickable state in the automatic driving mode, or cancel the display of the driving control;
  • in response to the virtual vehicle traveling to the destination, set the driving controls to a clickable state, or restore the display of the driving controls.
  • the device further includes:
  • the second switching module is configured to display a driving mode switching control in the automatic driving mode; and
  • in response to a trigger operation on the driving mode switching control, switch the virtual vehicle to the manual driving mode, and set the driving controls to a clickable state, or restore the display of the driving controls.
  • no prop use control is displayed in the manual driving mode, and the prop use control is used to control the virtual object to use virtual props;
  • the device also includes:
  • the prop control display module is used to display the prop use control in the automatic driving mode
  • the prop use module is configured to control the virtual object to use the virtual prop in response to a trigger operation on the prop use control.
  • the autonomous driving area includes a preset road in the virtual environment.
  • In summary, with the solution provided by this embodiment, when the virtual vehicle drives into the automatic driving area in the virtual environment in the manual driving mode, if a marking operation on the map display control is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and is controlled to automatically drive to the destination. The user does not need to manually control the virtual vehicle, which simplifies the process of controlling the virtual vehicle and reduces the difficulty for the user of controlling the virtual vehicle to drive in the virtual environment.
  • when the marked location is outside the autonomous driving area, the terminal determines from the autonomous driving area the destination closest to the marked location, and then determines the automatic driving path based on the destination and the current location, so as to avoid driving abnormalities caused by the virtual vehicle automatically driving into a non-autonomous-driving area.
  • the driving direction of the virtual vehicle is determined according to the waypoints on the automatic driving path, and the virtual vehicle is then controlled to travel automatically according to that driving direction, which realizes automatic driving while reducing the difficulty and the amount of calculation required to implement it.
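  • For illustration, steering toward the waypoints can be reduced to a simple direction computation (the reach radius and coordinates are assumed values):

    import math

    def driving_direction(vehicle_pos, waypoints, reach_radius=2.0):
        """Return a unit direction vector toward the next unreached waypoint."""
        while waypoints and math.dist(vehicle_pos, waypoints[0]) < reach_radius:
            waypoints.pop(0)                   # waypoint reached, advance to the next one
        if not waypoints:
            return (0.0, 0.0)                  # destination reached, stop driving
        tx, tz = waypoints[0]
        dx, dz = tx - vehicle_pos[0], tz - vehicle_pos[1]
        length = math.hypot(dx, dz)
        return (dx / length, dz / length)

    waypoints = [(10.0, 0.0), (10.0, 20.0)]
    print(driving_direction((0.0, 0.0), waypoints))   # (1.0, 0.0): drive along +x first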
  • the collision detection box is used to determine whether the virtual vehicle is located in the automatic driving area, which helps to simplify the process of determining the location of the virtual vehicle.
  • in the automatic driving mode, the terminal sets the driving control to a non-clickable state, or cancels the display of the driving control, to avoid exiting the automatic driving mode due to the user accidentally touching the driving control; at the same time, the terminal displays the driving mode switching control on the user interface so that the user can exit the automatic driving mode by triggering that control.
  • FIG. 13 shows a structural block diagram of a terminal 1300 according to an exemplary embodiment of the present application.
  • the terminal 1300 may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player.
  • the terminal 1300 may also be called user equipment, portable terminal and other names.
  • the terminal 1300 includes a processor 1301 and a memory 1302.
  • the processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
  • the processor 1301 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1301 may be integrated with a GPU (Graphics Processing Unit), and the GPU is used to render and draw the content that needs to be displayed on the display screen.
  • the processor 1301 may further include an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
  • the memory 1302 may include one or more computer-readable storage media, which may be tangible and non-transitory.
  • the memory 1302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1302 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1301 to implement the method provided in the embodiment of the present application.
  • the terminal 1300 may optionally further include: a peripheral device interface 1303 and at least one peripheral device.
  • the peripheral device includes: at least one of a radio frequency circuit 1304, a touch display screen 1305, a camera 1306, an audio circuit 1307, a positioning component 1308, and a power supply 1309.
  • the peripheral device interface 1303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1301 and the memory 1302.
  • in some embodiments, the processor 1301, the memory 1302, and the peripheral device interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1304 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1304 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
  • the radio frequency circuit 1304 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks.
  • the radio frequency circuit 1304 may also include a circuit related to NFC (Near Field Communication), which is not limited in this application.
  • the touch screen 1305 is used to display UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the touch display screen 1305 also has the ability to collect touch signals on or above the surface of the touch display screen 1305.
  • the touch signal may be input to the processor 1301 as a control signal for processing.
  • the touch screen 1305 is used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • the touch display screen 1305 may be a flexible display screen arranged on a curved surface or a folding surface of the terminal 1300; the touch display screen 1305 may even be set to a non-rectangular irregular shape, that is, a special-shaped screen.
  • the touch display screen 1305 can be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
  • the camera assembly 1306 is used to collect images or videos.
  • the camera assembly 1306 includes a front camera and a rear camera.
  • the front camera is used to implement video calls or selfies
  • the rear camera is used to implement photos or videos.
  • the camera assembly 1306 may also include a flash.
  • the flash may be a single color temperature flash or a dual color temperature flash. A dual color temperature flash refers to a combination of a warm light flash and a cold light flash, and can be used for light compensation at different color temperatures.
  • the audio circuit 1307 is used to provide an audio interface between the user and the terminal 1300.
  • the audio circuit 1307 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 1301 for processing, or input to the radio frequency circuit 1304 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1301 or the radio frequency circuit 1304 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1307 may also include a headphone jack.
  • the positioning component 1308 is used to locate the current geographic location of the terminal 1300 to implement navigation or LBS (Location Based Service).
  • the positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
  • the power supply 1309 is used to supply power to various components in the terminal 1300.
  • the power source 1309 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • a wired rechargeable battery is a battery charged through a wired line
  • a wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 1300 further includes one or more sensors 1310.
  • the one or more sensors 1310 include, but are not limited to: an acceleration sensor 1311, a gyroscope sensor 1312, a pressure sensor 1313, a fingerprint sensor 1314, an optical sensor 1315, and a proximity sensor 1316.
  • the acceleration sensor 1311 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 1300.
  • the acceleration sensor 1311 can be used to detect the components of gravitational acceleration on three coordinate axes.
  • the processor 1301 may control the touch screen 1305 to display the user interface in a horizontal view or a vertical view according to the gravitational acceleration signal collected by the acceleration sensor 1311.
  • the acceleration sensor 1311 can also be used for the collection of game or user motion data.
  • the gyroscope sensor 1312 can detect the body direction and rotation angle of the terminal 1300, and the gyroscope sensor 1312 can cooperate with the acceleration sensor 1311 to collect the user's 3D actions on the terminal 1300.
  • the processor 1301 can implement the following functions according to the data collected by the gyroscope sensor 1312: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1313 may be disposed on the side frame of the terminal 1300 and/or the lower layer of the touch screen 1305.
  • the pressure sensor 1313 can detect the user's holding signal of the terminal 1300, and perform left and right hand recognition or shortcut operations according to the holding signal.
  • when the pressure sensor 1313 is arranged on the lower layer of the touch display screen 1305, the operability controls on the UI can be controlled according to the user's pressure operation on the touch display screen 1305.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 1314 is used to collect the user's fingerprint to identify the user's identity according to the collected fingerprint.
  • when the collected fingerprint identifies the user as a trusted user, the processor 1301 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • the fingerprint sensor 1314 may be provided on the front, back or side of the terminal 1300.
  • the fingerprint sensor 1314 can be integrated with the physical button or the manufacturer logo.
  • the optical sensor 1315 is used to collect the ambient light intensity.
  • the processor 1301 may control the display brightness of the touch screen 1305 according to the intensity of the ambient light collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1305 is decreased.
  • the processor 1301 may also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
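  • A rough sketch of the brightness adjustment described above (the lux thresholds and linear ramp are assumptions; actual devices apply their own tuning curves):

    def display_brightness(ambient_lux, low=50, high=500):
        """Map ambient light intensity to a display brightness between 0 and 1."""
        if ambient_lux <= low:
            return 0.2                 # dim the screen in dark environments
        if ambient_lux >= high:
            return 1.0                 # full brightness in bright environments
        return 0.2 + 0.8 * (ambient_lux - low) / (high - low)

    print(display_brightness(275))     # 0.6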
  • the proximity sensor 1316, also called a distance sensor, is usually arranged on the front of the terminal 1300.
  • the proximity sensor 1316 is used to collect the distance between the user and the front of the terminal 1300.
  • when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually decreases, the processor 1301 controls the touch display screen 1305 to switch from the bright-screen state to the rest-screen state; when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually increases, the processor 1301 controls the touch display screen 1305 to switch from the rest-screen state to the bright-screen state.
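  • The screen-state control can be sketched as a simple comparison of successive distance readings (kept threshold-free for brevity; a real driver would debounce the readings):

    def update_screen_state(prev_distance_cm, new_distance_cm, screen_on):
        """Turn the screen off as the user moves closer, back on as the user moves away."""
        if new_distance_cm < prev_distance_cm:
            return False               # distance decreasing -> rest (screen-off) state
        if new_distance_cm > prev_distance_cm:
            return True                # distance increasing -> bright (screen-on) state
        return screen_on               # unchanged

    print(update_screen_state(30.0, 5.0, True))    # False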
  • the structure shown in FIG. 13 does not constitute a limitation on the terminal 1300, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • the embodiments of the present application also provide a computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, where the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the method for driving a vehicle in a virtual environment described in any of the foregoing embodiments.
  • This application also provides a computer program product or computer program.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for driving a vehicle in a virtual environment provided in the above aspect.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium mentioned can be a read-only memory, a magnetic disk or an optical disk, etc.

Abstract

Provided are a method and apparatus for driving a traffic tool in a virtual environment, and a terminal and a storage medium. The method comprises: displaying a traveling picture and a map presentation control, wherein the traveling picture is a picture showing a virtual object driving a virtual traffic tool to travel in a virtual environment, and the virtual traffic tool is in a manual driving mode; in response to an automatic driving area, where the virtual traffic tool is located, in the virtual environment, receiving a marking operation on the map presentation control, wherein the marking operation refers to an operation of marking a location in the map presentation control; and in response to the marking operation, switching the virtual traffic tool to an automatic driving mode, and controlling the virtual traffic tool to automatically travel to a destination. In the present application, a terminal can control a virtual traffic tool to automatically travel to a destination, and a user does not need to manually control the virtual traffic tool, thereby simplifying the process of controlling the virtual traffic tool to travel, and reducing the operation difficulty of the user when controlling the virtual traffic tool to travel.

Description

在虚拟环境中驾驶载具的方法、装置、终端及存储介质Method, device, terminal and storage medium for driving vehicle in virtual environment
This application claims priority to Chinese Patent Application No. 202010080028.6, entitled "Method, device, terminal and storage medium for driving a vehicle in a virtual environment" and filed on February 4, 2020, which is incorporated herein by reference in its entirety.
技术领域Technical field
本申请涉及人机交互领域,特别涉及一种在虚拟环境中驾驶载具的方法、装置、终端及存储介质。This application relates to the field of human-computer interaction, and in particular to a method, device, terminal and storage medium for driving a vehicle in a virtual environment.
背景技术Background technique
第一人称射击类游戏(First-Person Shooting game,FPS)是一种基于三维虚拟环境的应用程序,用户可以操控虚拟环境中的虚拟对象进行行走、奔跑、攀爬、射击等动作,并且多个用户可以在线组队在同一个虚拟环境中协同完成某项任务。First-Person Shooting game (FPS) is an application based on a three-dimensional virtual environment. Users can manipulate virtual objects in the virtual environment to walk, run, climb, shoot, etc., and multiple users You can team up online to complete a task in the same virtual environment.
当需要控制虚拟对象从当前所处的地点前往虚拟环境中的另一地点,且两地之间的距离较远时,用户可以控制虚拟对象驾驶虚拟环境中设置的虚拟载具(比如汽车、飞机、摩托车等等),从而通过虚拟载具将虚拟对象送达目的地。其中,用户需要通过驾驶控件控制虚拟载具行驶,驾驶控件包括转向控件、加速控件、减速控件、刹车控件、喇叭控件、换挡控件等等。When it is necessary to control a virtual object from the current location to another location in the virtual environment, and the distance between the two places is relatively long, the user can control the virtual object to drive the virtual vehicle set in the virtual environment (such as car, airplane) , Motorcycles, etc.), so as to deliver the virtual object to the destination through the virtual vehicle. Among them, the user needs to control the driving of the virtual vehicle through driving controls. The driving controls include steering control, acceleration control, deceleration control, brake control, horn control, gear shift control, and so on.
由于驾驶控件较多,因此用户(尤其是首次使用虚拟载具的用户)控制虚拟对象驾驶载具的操作难度较高。Since there are many driving controls, it is difficult for users (especially users who use the virtual vehicle for the first time) to control the virtual object to drive the vehicle.
发明内容Summary of the invention
本申请实施例提供了一种在虚拟环境中驾驶载具的方法、装置、终端及存储介质,可以降低用户控制虚拟对象驾驶载具的操作难度。所述技术方案如下:The embodiments of the present application provide a method, a device, a terminal, and a storage medium for driving a vehicle in a virtual environment, which can reduce the operation difficulty for a user to control a virtual object driving a vehicle. The technical solution is as follows:
一方面,本申请实施例提供了一种在虚拟环境中驾驶载具的方法,所述方法应用于终端,所述方法包括:On the one hand, an embodiment of the present application provides a method for driving a vehicle in a virtual environment. The method is applied to a terminal, and the method includes:
显示行驶画面和地图展示控件,所述行驶画面是虚拟对象驾驶虚拟载具在虚拟环境中行驶的画面,且所述虚拟载具处于手动驾驶模式;Displaying a driving picture and a map display control, the driving picture is a picture of a virtual object driving a virtual vehicle driving in a virtual environment, and the virtual vehicle is in a manual driving mode;
响应于所述虚拟载具位于所述虚拟环境中的自动驾驶区域,接收对所述地图展示控件的标记操作,所述标记操作指在所述地图展示控件中标记出地点的操作;In response to the virtual vehicle being located in the autonomous driving area in the virtual environment, receiving a marking operation on the map display control, where the marking operation refers to an operation of marking a location in the map display control;
响应于所述标记操作,将所述虚拟载具切换为自动驾驶模式,并控制所述虚拟载具自动行驶至目的地。In response to the marking operation, the virtual vehicle is switched to an automatic driving mode, and the virtual vehicle is controlled to automatically drive to the destination.
另一方面,本申请实施例提供了一种在虚拟环境中驾驶载具的装置,所述装置包括:On the other hand, an embodiment of the present application provides a device for driving a vehicle in a virtual environment, and the device includes:
显示模块,用于显示行驶画面和地图展示控件,所述行驶画面是虚拟对象驾驶虚拟载具在虚拟环境中行驶的画面,且所述虚拟载具处于手动驾驶模式;A display module for displaying a driving picture and a map display control, the driving picture is a picture of a virtual object driving a virtual vehicle driving in a virtual environment, and the virtual vehicle is in a manual driving mode;
接收模块,用于响应于所述虚拟载具位于所述虚拟环境中的自动驾驶区域,接收对所述地图展示控件的标记操作,所述标记操作指在所述地图展示控件中标记出地点的操作;The receiving module is configured to receive a marking operation on the map display control in response to the virtual vehicle being located in the autonomous driving area in the virtual environment, where the marking operation refers to marking a location in the map display control operate;
控制模块,用于响应于所述标记操作,将所述虚拟载具切换为自动驾驶模式,并控制所述虚拟载具自动行驶至目的地。The control module is configured to switch the virtual vehicle to an automatic driving mode in response to the marking operation, and control the virtual vehicle to automatically drive to the destination.
另一方面,本申请实施例提供了一种终端,所述终端包括:处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如上述方面所述的在虚拟环境中驾驶载具的方法。On the other hand, an embodiment of the present application provides a terminal. The terminal includes a processor and a memory. The memory stores at least one instruction, at least one program, code set, or instruction set, and the at least one instruction, The at least one program, the code set or the instruction set is loaded and executed by the processor to implement the method for driving a vehicle in a virtual environment as described in the above aspect.
另一方面,提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如上述方面所述的在虚拟环境中驾驶载具的方法。In another aspect, a computer-readable storage medium is provided, and the computer-readable storage medium stores at least one instruction, at least one program, code set, or instruction set, the at least one instruction, the at least one program, The code set or instruction set is loaded and executed by the processor to implement the method for driving a vehicle in a virtual environment as described in the above aspect.
另一方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述方面提供的在虚拟环境中驾驶载具的方法。In another aspect, a computer program product or computer program is provided. The computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the method for driving a vehicle in a virtual environment provided in the above aspect.
采用本申请实施例提供的方法,当虚拟载具在手动驾驶模式下行驶至虚拟环境中的自动驾驶区域时,若接收到对地图展示控件的标记操作,则根据该标记操作将虚拟载具切换为自动驾驶模式,并控制虚拟载具自动行驶至目的 地,无需用户手动控制虚拟载具,从而简化了控制虚拟载具行驶的流程,降低了用户控制虚拟载具在虚拟环境中行驶的操作难度。Using the method provided by the embodiments of the present application, when the virtual vehicle is driven to the automatic driving area in the virtual environment in the manual driving mode, if a marking operation on the map display control is received, the virtual vehicle is switched according to the marking operation It is an automatic driving mode and controls the virtual vehicle to automatically drive to the destination without the user manually controlling the virtual vehicle. This simplifies the process of controlling the virtual vehicle and reduces the user's difficulty in controlling the virtual vehicle to drive in the virtual environment. .
附图说明Description of the drawings
图1示出了相关技术中手动控制虚拟载具行驶过程的界面示意图;Fig. 1 shows a schematic diagram of an interface for manually controlling the driving process of a virtual vehicle in the related art;
图2示出了本申请示例性实施例提供的在虚拟环境中驾驶载具过程的界面示意图;Fig. 2 shows a schematic interface diagram of a process of driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application;
图3示出了本申请一个示例性实施例提供的实施环境的示意图;Fig. 3 shows a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
图4示出了本申请一个示例性实施例提供的在虚拟环境中驾驶载具的方法的流程图;Fig. 4 shows a flowchart of a method for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application;
图5示出了本申请另一个示例性实施例提供的在虚拟环境中驾驶载具的方法的流程图;Fig. 5 shows a flowchart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of the present application;
图6是一个示例性实施例提供的自动驾驶区域对应碰撞检测盒的示意图;Fig. 6 is a schematic diagram of a collision detection box corresponding to an autonomous driving area provided by an exemplary embodiment;
图7是一个示例性实施例提供的虚拟载具与自动驾驶区域各自对应碰撞检测盒发生碰撞时的示意图;FIG. 7 is a schematic diagram of a collision detection box corresponding to a virtual vehicle and an autonomous driving area provided by an exemplary embodiment;
图8是根据标记地点确定目的地过程的实施示意图;Figure 8 is a schematic diagram of the implementation of the process of determining the destination according to the marked location;
图9是根据自动驾驶路径上的路点控制虚拟载具自动行驶的实施示意图;Fig. 9 is a schematic diagram of the implementation of controlling the automatic driving of the virtual vehicle according to the waypoints on the automatic driving path;
图10是自动驾驶模式和手动驾驶模式下用户界面的界面示意图;FIG. 10 is a schematic diagram of the interface of the user interface in the automatic driving mode and the manual driving mode;
图11示出了本申请另一个示例性实施例提供的在虚拟环境中驾驶载具的方法的流程图;Fig. 11 shows a flowchart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of the present application;
图12是本申请一个示例性实施例提供的在虚拟环境中驾驶载具的装置的结构框图;Fig. 12 is a structural block diagram of an apparatus for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application;
图13示出了本申请一个示例性实施例提供的终端的结构框图。Fig. 13 shows a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
具体实施方式Detailed ways
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。In order to make the purpose, technical solutions, and advantages of the present application clearer, the implementation manners of the present application will be described in further detail below in conjunction with the accompanying drawings.
首先,对本申请实施例中涉及的名词进行介绍:First, introduce the terms involved in the embodiments of this application:
虚拟环境:是应用程序在终端上运行时显示(或提供)的虚拟环境。该虚拟环境可以是对真实世界的仿真环境,也可以是半仿真半虚构的环境,还可以是纯虚构的环境。虚拟环境可以是二维虚拟环境、2.5维虚拟环境和三维虚拟 环境中的任意一种,本申请对此不加以限定。下述实施例以虚拟环境是三维虚拟环境来举例说明。Virtual environment: It is the virtual environment displayed (or provided) when the application is running on the terminal. The virtual environment may be a simulation environment of the real world, a semi-simulation and semi-fictional environment, or a purely fictitious environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. In the following embodiments, the virtual environment is a three-dimensional virtual environment as an example.
虚拟对象:是指虚拟环境中的可活动对象。该可活动对象可以是虚拟人物、虚拟动物、动漫人物等,比如:在三维虚拟环境中显示的人物、动物、植物、油桶、墙壁、石块等。可选地,虚拟对象是基于动画骨骼技术创建的三维立体模型。每个虚拟对象在三维虚拟环境中具有自身的形状和体积,占据三维虚拟环境中的一部分空间。Virtual object: refers to the movable object in the virtual environment. The movable objects may be virtual characters, virtual animals, cartoon characters, etc., such as: characters, animals, plants, oil barrels, walls, stones, etc. displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional model created based on animation skeletal technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment, and occupies a part of the space in the three-dimensional virtual environment.
虚拟载具:指虚拟环境中可以由虚拟对象驾驶并行使的载具,其可以是虚拟汽车、虚拟摩托车、虚拟飞机、虚拟自行车、虚拟坦克、虚拟船只等等。其中,虚拟载具可以随机设置在虚拟环境中,且每个虚拟载具在三维虚拟环境中具备自身的形状和体积占据三维虚拟环境中的一部分空间,并能够与三维虚拟环境中的其他虚拟物体(比如房屋、树木)发生碰撞。Virtual vehicle: refers to a vehicle that can be driven and exercised by virtual objects in a virtual environment. It can be a virtual car, a virtual motorcycle, a virtual plane, a virtual bicycle, a virtual tank, a virtual boat, and so on. Among them, the virtual vehicles can be randomly set in the virtual environment, and each virtual vehicle has its own shape and volume in the three-dimensional virtual environment, occupies a part of the space in the three-dimensional virtual environment, and can interact with other virtual objects in the three-dimensional virtual environment. (Such as houses, trees) collide.
第一人称射击游戏:是指用户能够以第一人称视角进行的射击游戏,游戏中的虚拟环境的画面是以第一虚拟对象的视角对虚拟环境进行观察的画面。在游戏中,至少两个虚拟对象在虚拟环境中进行单局对战模式,虚拟对象通过躲避其他虚拟对象发起的伤害和虚拟环境中存在的危险(比如,毒气圈、沼泽地等)来达到在虚拟环境中存活的目的,当虚拟对象在虚拟环境中的生命值为零时,虚拟对象在虚拟环境中的生命结束,最后存活在虚拟环境中的虚拟对象是获胜方。可选地,该对战以第一个客户端加入对战的时刻作为开始时刻,以最后一个客户端退出对战的时刻作为结束时刻,每个客户端可以控制虚拟环境中的一个或多个虚拟对象。可选地,该对战的竞技模式可以包括单人对战模式、双人小组对战模式或者多人大组对战模式,本申请实施例对对战模式不加以限定。First-person shooting game: refers to a shooting game that users can play from a first-person perspective. The screen of the virtual environment in the game is a screen that observes the virtual environment from the perspective of the first virtual object. In the game, at least two virtual objects play a single-game battle mode in the virtual environment. The virtual objects can avoid the damage initiated by other virtual objects and the dangers in the virtual environment (for example, gas circle, swamp, etc.) to achieve the virtual environment. The purpose of survival in the environment, when the life value of the virtual object in the virtual environment is zero, the life of the virtual object in the virtual environment ends, and the virtual object that finally survives in the virtual environment is the winner. Optionally, the battle starts with the moment when the first client joins the battle, and the moment when the last client exits the battle as the end time. Each client can control one or more virtual objects in the virtual environment. Optionally, the competitive mode of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode, and the embodiment of the present application does not limit the battle mode.
用户界面(User Interface,UI)控件:是指在应用程序的用户界面上能够看见的任何可视控件或元素,比如,图片、输入框、文本框、按钮、标签等控件,其中一些UI控件响应用户的操作,比如,用户触发匕首道具对应UI控件,控制虚拟对象将当前使用的枪支切换为匕首;比如,在驾驶载具时,用户界面显示有驾驶控件,用户可以通过触发驾驶控件,控制虚拟对象驾驶虚拟载具行驶。User Interface (UI) control: Refers to any visual control or element that can be seen on the user interface of the application, such as pictures, input boxes, text boxes, buttons, labels and other controls. Some of the UI controls respond The user’s operation, for example, the user triggers the dagger props corresponding to the UI control, and controls the virtual object to switch the currently used gun to the dagger; for example, when driving a vehicle, the user interface displays the driving control, and the user can control the virtual by triggering the driving control The subject is driving a virtual vehicle.
本申请中提供的方法可以应用于虚拟现实应用程序、三维地图程序、军事 仿真程序、第一人称射击游戏、多人在线战术竞技游戏(Multiplayer Online Battle Arena Games,MOBA)等,下述实施例是以在游戏中的应用来举例说明。The method provided in this application can be applied to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting games, multiplayer online battle Arena Games (MOBA), etc. The following embodiments are based on Take an example in the application of the game.
基于虚拟环境的游戏往往由一个或多个游戏世界的地图构成,游戏中的虚拟环境模拟现实世界的场景,用户可以操控游戏中的虚拟对象在虚拟环境中进行行走、跑步、跳跃、射击、格斗、驾驶、切换使用虚拟道具、使用虚拟道具伤害其他虚拟对象等动作,交互性较强,并且多个用户可以在线组队进行竞技游戏。Games based on virtual environments are often composed of one or more maps of the game world. The virtual environment in the game simulates the scene of the real world. Users can manipulate virtual objects in the game to walk, run, jump, shoot, and fight in the virtual environment. , Driving, switching to use virtual props, using virtual props to damage other virtual objects and other actions, which are highly interactive, and multiple users can team up for competitive games online.
如图1所示,其示出了相关技术中控制虚拟载具行驶过程的界面示意图。当用户控制虚拟对象驾驶虚拟载具(图1中的虚拟载具为虚拟汽车)过程中,用户界面100显示有行驶画面,且用户界面100上显示有地图展示控件101、驾驶控件(包括方向控件102、加速控件103、刹车控件104)以及载具油量标识105。用户可以通过地图展示控件101查看虚拟对象当前所处位置以及周围环境,可以通过方向控件102控制虚拟载具前进、后退和转向,可以通过加速控件103控制虚拟载具加速,可以通过刹车控件104控制虚拟载具快速停止,可以通过载具油量标识105知悉虚拟载具的剩余油量。As shown in Fig. 1, it shows a schematic diagram of an interface for controlling the driving process of a virtual vehicle in the related art. When the user controls the virtual object to drive the virtual vehicle (the virtual vehicle in FIG. 1 is a virtual car), the user interface 100 displays a driving screen, and the user interface 100 displays a map display control 101 and driving controls (including direction controls) 102. Acceleration control 103, brake control 104), and vehicle fuel quantity indicator 105. The user can view the current location of the virtual object and the surrounding environment through the map display control 101, the direction control 102 can control the forward, backward and turning of the virtual vehicle, the acceleration control 103 can control the acceleration of the virtual vehicle, and the brake control 104 can control The virtual vehicle stops quickly, and the remaining fuel amount of the virtual vehicle can be known through the vehicle fuel level indicator 105.
采用上述方法手动控制虚拟载具行驶时,用户需要根据虚拟载具当前所处环境的环境情况,对不同的驾驶控件进行操作,且需要手动选择行驶线路,对于新手用户而言,操作难度较高,若用户操作不当或者选择的行驶线路有误,行驶至目的地需要花费大量时间。When using the above method to manually control the virtual vehicle to drive, the user needs to operate different driving controls according to the environment of the virtual vehicle's current environment, and needs to manually select the driving route. For novice users, the operation is more difficult If the user does not operate properly or chooses the wrong route, it will take a lot of time to drive to the destination.
本申请实施例提供了一种在虚拟环境中驾驶载具的方法,如图2所示,其示出了本申请示例性实施例提供的在虚拟环境中驾驶载具过程的界面示意图。An embodiment of the present application provides a method for driving a vehicle in a virtual environment, as shown in FIG. 2, which shows a schematic interface diagram of a process of driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application.
在一种可能的实施方式中,当用户通过驾驶控件(包括如图1中所示的驾驶控件)控制虚拟载具在虚拟环境中行驶过程中,若虚拟载具处于自动驾驶区域,终端则在用户界面100中显示自动驾驶提示信息106,提示用户可以将虚拟载具切换至自动驾驶模式。进一步的,当接收到对地图展示控件101的触发操作时,用户界面100中显示放大后的地图展示控件101,并接收用户在地图展示控件101中标记的目的地107。完成目的地标记后,终端将虚拟载具切换为自动驾驶模式,并在自动驾驶模式下,控制虚拟载具自动行驶至目的地,无需用户手动触控驾驶控件。并且,切换至自动驾驶模式后,用户界面108还显示有驾驶模式切换控件108,用户可以通过点击驾驶模式切换控件108重新将 虚拟载具切换为手动驾驶模式,进而通过驾驶控件手动控制虚拟载具行驶。In a possible implementation, when the user controls the virtual vehicle to drive in the virtual environment through the driving controls (including the driving control as shown in FIG. 1), if the virtual vehicle is in the automatic driving area, the terminal is The automatic driving prompt message 106 is displayed in the user interface 100, prompting the user to switch the virtual vehicle to the automatic driving mode. Further, when a trigger operation on the map display control 101 is received, the enlarged map display control 101 is displayed in the user interface 100, and the destination 107 marked in the map display control 101 by the user is received. After the destination is marked, the terminal switches the virtual vehicle to the automatic driving mode, and in the automatic driving mode, controls the virtual vehicle to automatically drive to the destination without the user manually touching the driving controls. Moreover, after switching to the automatic driving mode, the user interface 108 also displays a driving mode switching control 108. The user can switch the virtual vehicle to manual driving mode again by clicking the driving mode switching control 108, and then manually control the virtual vehicle through the driving control Driving.
相较于相关技术中,需要用户在手动操作驾驶控件以控制虚拟载具,并需要在行驶过程中自主选择行驶路线,采用本申请实施例提供的方法,用户只需要控制虚拟载具行驶至自动驾驶区域,并通过地图展示控件设置自动驾驶目的地,终端即可自动确定行驶路线,并控制虚拟载具行驶,无需用户手动操作,简化了虚拟载具的控制流程,降低了虚拟载具的操作难度,有助于缩短虚拟载具到达目的所需的时间。Compared with the related technology, the user needs to manually operate the driving controls to control the virtual vehicle, and needs to autonomously select the driving route during the driving process. With the method provided by the embodiment of the present application, the user only needs to control the virtual vehicle to travel automatically. Driving area, and setting the destination of automatic driving through the map display control, the terminal can automatically determine the driving route and control the virtual vehicle to travel without manual operation by the user, which simplifies the control process of the virtual vehicle and reduces the operation of the virtual vehicle Difficulty, which helps to shorten the time required for the virtual vehicle to reach its goal.
请参考图3,其示出了本申请一个示例性实施例提供的实施环境的示意图。该实施环境中包括:第一终端120、服务器140和第二终端160。Please refer to FIG. 3, which shows a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment includes: a first terminal 120, a server 140, and a second terminal 160.
第一终端120安装和运行有支持虚拟环境的应用程序。该应用程序可以是虚拟现实应用程序、三维地图程序、军事仿真程序、FPS游戏、MOBA游戏、多人枪战类生存游戏中的任意一种。第一终端120是第一用户使用的终端,第一用户使用第一终端120控制位于虚拟环境中的第一虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、射击、投掷、切换虚拟道具、使用虚拟道具伤害其他虚拟对象中的至少一种。示意性的,第一虚拟对象是第一虚拟人物,比如仿真人物对象或动漫人物对象。The first terminal 120 installs and runs an application program supporting the virtual environment. The application program can be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer gun battle survival games. The first terminal 120 is a terminal used by the first user. The first user uses the first terminal 120 to control the first virtual object in the virtual environment to perform activities, including but not limited to: adjusting body posture, crawling, walking, running, At least one of riding, jumping, driving, shooting, throwing, switching virtual props, and using virtual props to damage other virtual objects. Illustratively, the first virtual object is a first virtual character, such as a simulated character object or an animation character object.
第一终端120通过无线网络或有线网络与服务器140相连。The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
服务器140包括一台服务器、多台服务器、云计算平台和虚拟化中心中的至少一种。示意性的,服务器140包括处理器144和存储器142,存储器142包括显示模块1421、接收模块1422和控制模块1423。服务器140用于为支持三维虚拟环境的应用程序提供后台服务。可选地,服务器140承担主要计算工作,第一终端120和第二终端160承担次要计算工作;或者,服务器140承担次要计算工作,第一终端120和第二终端160承担主要计算工作;或者,服务器140、第一终端120和第二终端160三者之间采用分布式计算架构进行协同计算。The server 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, and the memory 142 includes a display module 1421, a receiving module 1422, and a control module 1423. The server 140 is used to provide background services for applications supporting the three-dimensional virtual environment. Optionally, the server 140 is responsible for the main calculation work, and the first terminal 120 and the second terminal 160 are responsible for the secondary calculation work; or, the server 140 is responsible for the secondary calculation work, and the first terminal 120 and the second terminal 160 are responsible for the main calculation work; Or, the server 140, the first terminal 120, and the second terminal 160 adopt a distributed computing architecture to perform collaborative computing.
第二终端160安装和运行有支持虚拟环境的应用程序。该应用程序可以是虚拟现实应用程序、三维地图程序、军事仿真程序、FPS游戏、MOBA游戏、多人枪战类生存游戏中的任意一种。第二终端160是第二用户使用的终端,第二用户使用第二终端160控制位于虚拟环境中的第二虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、射 击、投掷、切换虚拟道具、使用虚拟道具伤害其他虚拟对象中的至少一种。示意性的,第二虚拟对象是第二虚拟人物,比如仿真人物对象或动漫人物对象。The second terminal 160 installs and runs an application program supporting the virtual environment. The application program can be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer gun battle survival games. The second terminal 160 is a terminal used by the second user. The second user uses the second terminal 160 to control the second virtual object in the virtual environment to perform activities, including but not limited to: adjusting body posture, crawling, walking, running, At least one of riding, jumping, driving, shooting, throwing, switching virtual props, and using virtual props to damage other virtual objects. Illustratively, the second virtual object is a second virtual character, such as a simulated character object or an animation character object.
可选地,第一虚拟人物和第二虚拟人物处于同一虚拟环境中。可选地,第一虚拟人物和第二虚拟人物可以属于同一个队伍、同一个组织、具有好友关系或具有临时性的通讯权限。Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same team, the same organization, have a friend relationship, or have temporary communication permissions.
可选地,第一终端120和第二终端160上安装的应用程序是相同的,或两个终端上安装的应用程序是不同控制系统平台的同一类型应用程序。第一终端120可以泛指多个终端中的一个,第二终端160可以泛指多个终端中的一个,本实施例仅以第一终端120和第二终端160来举例说明。第一终端120和第二终端160的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、电子书阅读器、数码播放器、膝上型便携计算机和台式计算机中的至少一种。以下实施例以终端包括智能手机来举例说明。Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different control system platforms. The first terminal 120 may generally refer to one of multiple terminals, and the second terminal 160 may generally refer to one of multiple terminals. This embodiment only uses the first terminal 120 and the second terminal 160 as examples. The device types of the first terminal 120 and the second terminal 160 are the same or different, and the device types include at least one of a smart phone, a tablet computer, an e-book reader, a digital player, a laptop computer, and a desktop computer. In the following embodiment, the terminal includes a smart phone as an example.
本领域技术人员可以知晓,上述终端的数量可以更多或更少。比如上述终端可以仅为一个,或者上述终端为几十个或几百个,或者更多数量。本申请实施例对终端的数量和设备类型不加以限定。Those skilled in the art may know that the number of the aforementioned terminals may be more or less. For example, there may be only one terminal, or there may be dozens or hundreds of terminals, or more. The embodiments of the present application do not limit the number of terminals and device types.
请参考图4,其示出了本申请一个示例性实施例提供的在虚拟环境中驾驶载具的方法的流程图。本实施例以该方法用于图3所示实施环境中的第一终端120或第二终端160或该实施环境中的其它终端为例进行说明,该方法包括如下步骤。Please refer to FIG. 4, which shows a flowchart of a method for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application. In this embodiment, the method is used in the first terminal 120 or the second terminal 160 in the implementation environment shown in FIG. 3 or other terminals in the implementation environment as an example for description. The method includes the following steps.
步骤401,显示行驶画面和地图展示控件,行驶画面是虚拟对象驾驶虚拟载具在虚拟环境中行驶的画面,且虚拟载具处于手动驾驶模式。Step 401: Display a driving picture and map display controls. The driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode.
在一种可能的实施方式中,行驶画面和地图展示控件显示在用户界面中,且地图展示控件叠加显示在行驶画面上层。In a possible implementation manner, the driving screen and the map display control are displayed in the user interface, and the map display control is superimposed and displayed on the upper layer of the driving screen.
其中,该用户界面是支持虚拟环境的应用程序的界面,该用户界面上包括虚拟环境画面和各种功能对应的控件。本申请实施例中,该虚拟环境画面即为行驶画面。Wherein, the user interface is an interface of an application program supporting a virtual environment, and the user interface includes a virtual environment screen and controls corresponding to various functions. In the embodiment of the present application, the virtual environment picture is the driving picture.
可选地,虚拟环境画面是以虚拟对象的视角对虚拟环境进行观察的画面。视角是指以虚拟对象的第一人称视角或者第三人称视角在虚拟环境中进行观察时的观察角度。可选地,本申请的实施例中,视角是在虚拟环境中通过摄像机模型对虚拟对象进行观察时的角度。Optionally, the virtual environment screen is a screen for observing the virtual environment from the perspective of the virtual object. The angle of view refers to the viewing angle when the virtual object is observed in the virtual environment from the first person perspective or the third person perspective. Optionally, in the embodiment of the present application, the angle of view is the angle when the virtual object is observed through the camera model in the virtual environment.
可选地,摄像机模型在虚拟环境中对虚拟对象进行自动跟随,即,当虚拟对象在虚拟环境中的位置发生改变时,摄像机模型跟随虚拟对象在虚拟环境中的位置同时发生改变,且该摄像机模型在虚拟环境中始终处于虚拟对象的预设距离范围内。可选地,在自动跟随过程中,摄像头模型和虚拟对象的相对位置不发生变化。Optionally, the camera model automatically follows the virtual object in the virtual environment, that is, when the position of the virtual object in the virtual environment changes, the camera model follows the position of the virtual object in the virtual environment and changes at the same time, and the camera The model is always within the preset distance range of the virtual object in the virtual environment. Optionally, during the automatic following process, the relative position of the camera model and the virtual object does not change.
摄像机模型是指在虚拟环境中位于虚拟对象周围的三维模型,当采用第一人称视角时,该摄像机模型位于虚拟对象的头部附近或者位于虚拟对象的头部;当采用第三人称视角时,该摄像机模型可以位于虚拟对象的后方并与虚拟对象进行绑定,也可以位于与虚拟对象相距预设距离的任意位置,通过该摄像机模型可以从不同角度对位于虚拟环境中的虚拟对象进行观察,可选地,该第三人称视角为第一人称的过肩视角时,摄像机模型位于虚拟对象(比如虚拟人物的头肩部)的后方。可选地,除第一人称视角和第三人称视角外,视角还包括其他视角,比如俯视视角;当采用俯视视角时,该摄像机模型可以位于虚拟对象头部的上空,俯视视角是以从空中俯视的角度进行观察虚拟环境的视角。可选地,该摄像机模型在虚拟环境中不会进行实际显示,即,在用户界面显示的虚拟环境中不显示该摄像机模型。The camera model refers to the three-dimensional model located around the virtual object in the virtual environment. When the first-person perspective is adopted, the camera model is located near the head of the virtual object or the head of the virtual object; when the third-person perspective is adopted, the camera The model can be located behind the virtual object and bound with the virtual object, or can be located at any position with a preset distance from the virtual object. The camera model can be used to observe the virtual object in the virtual environment from different angles, optional Specifically, when the third-person perspective is the over-the-shoulder perspective of the first person, the camera model is located behind the virtual object (such as the head and shoulders of the virtual character). Optionally, in addition to the first-person perspective and the third-person perspective, the perspective includes other perspectives, such as a top-view perspective; when a top-down perspective is adopted, the camera model can be located above the head of the virtual object, and the top-view perspective is viewed from the air Angle of view to observe the virtual environment. Optionally, the camera model is not actually displayed in the virtual environment, that is, the camera model is not displayed in the virtual environment displayed on the user interface.
以该摄像机模型位于与虚拟对象相距预设距离的任意位置为例进行说明,可选地,一个虚拟对象对应一个摄像机模型,该摄像机模型可以以虚拟对象为旋转中心进行旋转,如:以虚拟对象的任意一点为旋转中心对摄像机模型进行旋转,摄像机模型在旋转过程中的不仅在角度上有转动,还在位移上有偏移,旋转时摄像机模型与该旋转中心之间的距离保持不变,即,将摄像机模型在以该旋转中心作为球心的球体表面进行旋转,其中,虚拟对象的任意一点可以是虚拟对象的头部、躯干、或者虚拟对象周围的任意一点,本申请实施例对此不加以限定。可选地,摄像机模型在对虚拟对象进行观察时,该摄像机模型的视角的中心指向为该摄像机模型所在球面的点指向球心的方向。Take the camera model at any position at a preset distance from the virtual object as an example for description. Optionally, a virtual object corresponds to a camera model, and the camera model can be rotated with the virtual object as the center of rotation, such as: virtual object Any point of is the center of rotation to rotate the camera model. During the rotation, the camera model not only rotates in angle, but also shifts in displacement. When rotating, the distance between the camera model and the center of rotation remains unchanged. That is, the camera model is rotated on the surface of the sphere with the center of rotation as the center of the sphere, where any point of the virtual object can be the head, torso, or any point around the virtual object. Not limited. Optionally, when the camera model observes the virtual object, the center of the angle of view of the camera model points to the direction where the point on the spherical surface where the camera model is located points to the center of the sphere.
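As an illustrative sketch of the spherical camera placement described above (the rotation parameterization and the numbers are assumptions; the embodiments only require that the camera model stays at a fixed distance from the rotation center and looks toward it), the camera position can be derived from two rotation angles:

    import math

    def camera_position(center, radius, yaw_deg, pitch_deg):
        """Third-person camera position on a sphere around the observed point;
        the camera keeps looking at `center`, so its view axis points to the sphere center."""
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        cx, cy, cz = center
        x = cx + radius * math.cos(pitch) * math.cos(yaw)
        y = cy + radius * math.sin(pitch)
        z = cz + radius * math.cos(pitch) * math.sin(yaw)
        return (x, y, z)

    print(camera_position((0.0, 1.7, 0.0), 5.0, 180.0, 20.0))   # behind and above the character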
示意性的,如图2所示,该行驶画面为采用第三人称视角在虚拟环境中进行观察时的画面。当然,在其他可能的实施方式中,行驶画面也可以是采用第一人称视角在虚拟环境中进行观察时的画面,本实施例对此并不进行限定。Schematically, as shown in FIG. 2, the driving picture is a picture when observing in a virtual environment with a third-person perspective. Of course, in other possible implementation manners, the driving picture may also be a picture when observed in a virtual environment using a first-person perspective, which is not limited in this embodiment.
可选地,行驶画面中还显示由虚拟环境中的其他元素,包括山川、平地、河流、湖泊、海洋、沙漠、天空、植物、建筑中的至少一种元素。Optionally, other elements in the virtual environment are also displayed in the driving picture, including at least one element of mountains, flatlands, rivers, lakes, oceans, deserts, sky, plants, and buildings.
地图展示控件是用于对虚拟环境中全部或部分区域的地图情况进行展示 的控件。可选的,地图展示控件中展示的地图画面是以俯视角度对虚拟环境进行观察时的画面。The map display control is a control used to display the map of all or part of the area in the virtual environment. Optionally, the map screen displayed in the map display control is the screen when observing the virtual environment from a bird's-eye view.
地图展示控件中除了对虚拟环境进行展示外,还显示有当前虚拟对象的对象标识。可选的,该对象标识显示在地图展示控件所展示地图的中央,且当虚拟对象在虚拟环境中的位置发生变化时,地图展示控件所展示的地图也相应发生变化。In addition to displaying the virtual environment, the map display control also displays the object identifier of the current virtual object. Optionally, the object identifier is displayed in the center of the map displayed by the map display control, and when the position of the virtual object in the virtual environment changes, the map displayed by the map display control also changes accordingly.
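For illustration only (the map size and scale are assumed values), keeping the object identifier centered means every world point is drawn relative to the virtual object's current position:

    def world_to_minimap(world_point, player_pos, map_size_px=200, meters_per_px=2.0):
        """Pixel position of a world point on a minimap that keeps the player centered."""
        dx = (world_point[0] - player_pos[0]) / meters_per_px
        dz = (world_point[1] - player_pos[1]) / meters_per_px
        return (map_size_px / 2 + dx, map_size_px / 2 + dz)

    print(world_to_minimap((120.0, 80.0), (100.0, 100.0)))   # (110.0, 90.0)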
可选的,用户界面中除了显示行驶画面和地图展示控件外,还显示有手动驾驶模式下用于控制虚拟载具的驾驶控件。其中,不同虚拟载具对应的驾驶控件的类型和数量可能不同,比如,当虚拟对象驾驶虚拟汽车时,用户界面显示的驾驶控件可以包括方向控件、加速控件和刹车控件;当虚拟对象驾驶虚拟摩托车时,用户界面显示的驾驶控件可以包括方向控件、加速控件、刹车控件、抬车头控件和压车头控件。本申请实施例并不对用户界面中驾驶控件的类型以及分布位置进行限定。Optionally, in addition to displaying the driving screen and map display controls, the user interface also displays driving controls for controlling the virtual vehicle in the manual driving mode. Among them, the type and number of driving controls corresponding to different virtual vehicles may be different. For example, when a virtual object drives a virtual car, the driving controls displayed on the user interface may include direction controls, acceleration controls, and brake controls; when the virtual object drives a virtual motorcycle When driving, the driving controls displayed on the user interface may include a direction control, an acceleration control, a brake control, a head-up control, and a head-down control. The embodiments of the present application do not limit the types and distribution positions of the driving controls in the user interface.
示意性的,如图2所示,用户界面100中包括方向控件102、加速控件103和刹车控件104。Illustratively, as shown in FIG. 2, the user interface 100 includes a direction control 102, an acceleration control 103 and a brake control 104.
步骤402,响应于虚拟载具位于虚拟环境中的自动驾驶区域,接收对地图展示控件的标记操作,标记操作指在地图展示控件中标记出地点的操作。Step 402: In response to the virtual vehicle being located in the autonomous driving area in the virtual environment, receiving a marking operation on the map display control, the marking operation refers to an operation of marking a location in the map display control.
本申请实施例中,虚拟载具并非在虚拟环境中的任何区域都可以进行自动驾驶,而是仅能够在自动驾驶区域进行自动驾驶。在一种可能的实施方式中,虚拟环境中预先设置有自动驾驶区域,当虚拟载具位于自动驾驶区域时,用户才能够通过地图展示控件设置自动驾驶的目的地。In the embodiment of the present application, the virtual vehicle is not capable of performing automatic driving in any area in the virtual environment, but is only capable of performing automatic driving in the automatic driving area. In a possible implementation manner, an autonomous driving area is preset in the virtual environment, and when the virtual vehicle is located in the autonomous driving area, the user can set the destination of the automatic driving through the map display control.
可选的,在自动驾驶区域包括虚拟环境中的预设道路,即用户需要手动控制虚拟载具行驶至预设道路后,才能设置自动驾驶的目的地。当然,处于预设道路外,虚拟环境中其它环境简单的区域(即包含环境元素较少的区域)也可以被设置为自动驾驶区域,本实施例并不对自动驾驶区域的具体类型进行限定。Optionally, the automatic driving area includes a preset road in the virtual environment, that is, the user needs to manually control the virtual vehicle to drive to the preset road before setting the destination of the automatic driving. Of course, outside the preset road, other simple environment areas in the virtual environment (that is, areas containing fewer environmental elements) can also be set as automatic driving areas, and this embodiment does not limit the specific types of automatic driving areas.
可选的,终端实时检测虚拟载具是否位于自动驾驶区域,若检测到虚拟载具位于自动驾驶区域,则在用户界面显示提示信息,提示用户可以通过地图展示控件设置自动驾驶的目的地,进而进入自动驾驶模式。Optionally, the terminal detects in real time whether the virtual vehicle is located in the autonomous driving area, and if it detects that the virtual vehicle is located in the autonomous driving area, a prompt message is displayed on the user interface, prompting the user to set the destination of automatic driving through the map display control, and then Enter autopilot mode.
在一种可能的实施方式中,当接收到对地图展示控件的查看操作时,终端显示放大后的地图,并进一步接收对地图展示控件的标记操作,其中,该标记操作可以是对地图上某一区域的点击操作,相应的,点击操作对应的点击位置 即为标记出的地点的位置。In a possible implementation manner, when receiving a viewing operation on the map display control, the terminal displays the enlarged map, and further receives a marking operation on the map display control, where the marking operation may be a certain operation on the map. A click operation of a region, correspondingly, the click position corresponding to the click operation is the position of the marked location.
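As a further illustration (again with an assumed map scale; the actual mapping depends on how the enlarged map is projected), the click position of the marking operation can be mapped back to a location in the virtual environment:

    def minimap_to_world(click_px, player_pos, map_size_px=200, meters_per_px=2.0):
        """A tap on the (enlarged) map display control becomes the marked location."""
        dx = (click_px[0] - map_size_px / 2) * meters_per_px
        dz = (click_px[1] - map_size_px / 2) * meters_per_px
        return (player_pos[0] + dx, player_pos[1] + dz)

    print(minimap_to_world((150.0, 60.0), (100.0, 100.0)))   # (200.0, 20.0)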
当然,当虚拟载具位于自动驾驶区域外,或,虚拟对象未驾驶载具时,用户也可以对地图展示控件进行标记操作,但是该标记操作指示的地点并非用于控制虚拟载具进行自动驾驶,而是仅具有地点标记功能,以指示标记地点与虚拟对象当前所处位置的相对方位。Of course, when the virtual vehicle is located outside the autonomous driving area, or when the virtual object is not driving the vehicle, the user can also mark the map display controls, but the location indicated by the mark operation is not used to control the virtual vehicle for automatic driving , But only has a location marking function to indicate the relative position of the marked location and the current location of the virtual object.
需要说明的是,当终端控制的虚拟对象为虚拟载具的驾驶者时,用户才能够执行标记操作,相应的,若虚拟独享为虚拟载具的乘坐者,用户将无法执行标记操作(即不具备设置自动驾驶的权限)。It should be noted that when the virtual object controlled by the terminal is the driver of the virtual vehicle, the user can perform the marking operation. Correspondingly, if the virtual exclusive is the occupant of the virtual vehicle, the user will not be able to perform the marking operation (ie Does not have the authority to set up automatic driving).
步骤403,响应于标记操作,将虚拟载具切换为自动驾驶模式,并控制虚拟载具自动行驶至目的地。Step 403: In response to the marking operation, the virtual vehicle is switched to an automatic driving mode, and the virtual vehicle is controlled to automatically drive to the destination.
进一步的,终端根据标记操作将虚拟载具切换为自动驾驶模式,并确定出自动驾驶的目的地,从而控制虚拟载具自动行驶至目的地。其中,虚拟载具由当前地点行驶至目的地的行驶路径由终端自动规划。Further, the terminal switches the virtual vehicle to the automatic driving mode according to the marking operation, and determines the destination of the automatic driving, thereby controlling the virtual vehicle to automatically drive to the destination. Among them, the driving path of the virtual vehicle from the current location to the destination is automatically planned by the terminal.
在一种可能的实施方式中,虚拟环境中的所有虚拟载具均支持自动驾驶模式。In a possible implementation manner, all virtual vehicles in the virtual environment support the automatic driving mode.
在另一种可能的实施方式中,虚拟环境中的预设虚拟载具支持自动驾驶模式。相应的,当虚拟载具为预设虚拟载具时,终端响应于标记操作,将虚拟载具切换为自动驾驶模式。其中,该预设虚拟载具可以包括虚拟汽车、虚拟坦克和虚拟船只,但不包括虚拟自行车和虚拟摩托车。In another possible implementation manner, the preset virtual vehicle in the virtual environment supports an automatic driving mode. Correspondingly, when the virtual vehicle is the preset virtual vehicle, the terminal switches the virtual vehicle to the automatic driving mode in response to the marking operation. Among them, the preset virtual vehicle may include virtual cars, virtual tanks, and virtual ships, but does not include virtual bicycles and virtual motorcycles.
可选的,自动驾驶模式下,终端显示模式提示信息,提示用户虚拟载具当前处于自动驾驶模式。Optionally, in the automatic driving mode, the terminal displays mode prompt information to remind the user that the virtual vehicle is currently in the automatic driving mode.
在一种可能的实施方式中,自动驾驶模式下,用户无法手动控制虚拟载具;或者,用户仍旧可以通过驾驶控件重新手动控制虚拟载具,且手动控制虚拟载具后,虚拟载具将退出自动驾驶模式。In a possible implementation, in the automatic driving mode, the user cannot manually control the virtual vehicle; or, the user can still manually control the virtual vehicle through the driving controls, and after the virtual vehicle is manually controlled, the virtual vehicle will exit Autopilot mode.
需要说明的是,若在自动驾驶模式下再次接收到对地图的标记操作,终端即根据该标记操作更新目的地,并控制虚拟载具自动行驶至更新后的目的地。It should be noted that if the marking operation on the map is received again in the automatic driving mode, the terminal will update the destination according to the marking operation and control the virtual vehicle to automatically drive to the updated destination.
In summary, in the embodiments of this application, when the virtual vehicle drives into the autonomous driving area of the virtual environment in the manual driving mode, and a marking operation on the map display control is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and is controlled to drive automatically to the destination. The user does not need to control the virtual vehicle manually, which simplifies the process of controlling the driving of the virtual vehicle and reduces the difficulty of controlling the virtual vehicle in the virtual environment.
Moreover, when the virtual vehicle is controlled manually, the user needs to perform frequent control operations, and accordingly the terminal needs to detect and respond to those control operations at a high frequency (for example, detecting and responding to touch operations received on the touch screen), which results in a large amount of data processing on the terminal during driving and thus increases terminal power consumption. With the solution provided in the embodiments of this application, the terminal can drive the vehicle automatically to the destination based on the mark, and no user control operations are required during automatic driving, which reduces the frequency at which the terminal detects and responds to control operations while the virtual vehicle is being driven, reduces the amount of data processing on the terminal, and helps reduce terminal power consumption.
In addition, the terminal only needs to send the destination to other terminals through the server, and the other terminals can then reconstruct the virtual vehicle in automatic driving from that destination. There is no need to forward real-time control data and position data to other terminals through the server, which reduces the amount of data forwarded by the server and relieves the data forwarding pressure on the server.
Unlike the automatic driving function of real-world vehicles (which requires complex image recognition technologies such as vehicle recognition and lane recognition), in the embodiments of this application, in order to reduce the difficulty and the amount of computation required for the virtual vehicle to implement the automatic driving function, the virtual vehicle can drive automatically only within the autonomous driving area (for example, a preset road); that is, the automatic driving path of the virtual vehicle lies within the autonomous driving area. The process of implementing the automatic driving function is described below with an illustrative embodiment.
Please refer to FIG. 5, which shows a flowchart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of this application. This embodiment is described by taking as an example the method being applied to the first terminal 120 or the second terminal 160 in the implementation environment shown in FIG. 3, or to another terminal in that implementation environment. The method includes the following steps.
Step 501: Display a driving picture and a map display control, where the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode.
For the implementation of step 501, reference may be made to step 401 above; details are not repeated here.
Step 502: In response to the virtual vehicle being located in the autonomous driving area of the virtual environment, receive a marking operation on the map display control, where the marking operation is an operation of marking a location on the map.
Regarding the manner of determining whether the virtual vehicle is located in the autonomous driving area, in a possible implementation manner, in addition to the collision detection boxes set for virtual objects in the virtual environment (such as virtual vehicles, virtual houses, virtual roadblocks, virtual trees, and so on), in the embodiments of this application the autonomous driving area in the virtual environment is also provided with a collision detection box, which is used to detect other virtual objects entering the autonomous driving area.
Schematically, as shown in FIG. 6, when the autonomous driving area is a preset road in the virtual environment, each segment of the preset road corresponds to its own collision detection box 61 (the dotted area in the figure is the range of the collision detection box).
Optionally, in response to a collision between a first collision detection box and a second collision detection box, the terminal determines that the virtual vehicle is located in the autonomous driving area, where the first collision detection box is the collision detection box corresponding to the virtual vehicle, and the second collision detection box is the collision detection box corresponding to the autonomous driving area.
Schematically, as shown in FIG. 7, when the first collision detection box 71 corresponding to the virtual car collides with the second collision detection box 72 corresponding to the virtual road, the terminal determines that the virtual car is located in the autonomous driving area.
Of course, in addition to the above manner of determining whether the virtual vehicle is located in the autonomous driving area, in other possible implementation manners the terminal may also determine whether the virtual vehicle is located in the autonomous driving area according to the coordinates of the position of the virtual vehicle in the virtual environment and the coordinate range of the autonomous driving area (when the coordinates fall within the area's coordinate range, the virtual vehicle is determined to be in the autonomous driving area). This is not limited in this embodiment.
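The two membership checks described above (collision detection boxes and coordinate ranges) can be sketched as follows. The axis-aligned box representation and all names are assumptions made only for this illustration:

```python
# Illustrative sketch; a real engine's collision boxes would not be plain dataclasses.
from dataclasses import dataclass

@dataclass
class Box:
    min_x: float
    min_z: float
    max_x: float
    max_z: float

    def overlaps(self, other: "Box") -> bool:
        return (self.min_x <= other.max_x and self.max_x >= other.min_x and
                self.min_z <= other.max_z and self.max_z >= other.min_z)

    def contains(self, x: float, z: float) -> bool:
        return self.min_x <= x <= self.max_x and self.min_z <= z <= self.max_z

def in_area_by_collision(vehicle_box, area_boxes) -> bool:
    # Collision-detection-box style check (cf. FIG. 6 and FIG. 7).
    return any(b.overlaps(vehicle_box) for b in area_boxes)

def in_area_by_coordinates(vehicle_pos, area_boxes) -> bool:
    # Coordinate-range style check described as the alternative above.
    x, z = vehicle_pos
    return any(b.contains(x, z) for b in area_boxes)
```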
Further, when the virtual vehicle is located in the autonomous driving area, the terminal receives the marking operation on the map display control. For the process of receiving the marking operation, reference may be made to step 402 above; details are not repeated here.
Step 503: Determine the destination according to the marked location indicated by the marking operation, where the destination is located in the autonomous driving area.
In this embodiment, since the virtual vehicle can drive automatically only within the autonomous driving area, in order to prevent the virtual vehicle from driving to an area outside the autonomous driving area according to the marked location indicated by the marking operation and thereby driving abnormally (for example, colliding with an obstacle in the virtual environment), in a possible implementation manner the terminal determines, according to the marked location indicated by the marking operation, a destination that lies within the autonomous driving area.
Optionally, when the marked location indicated by the marking operation is located in the autonomous driving area, the terminal determines the marked location as the destination. The terminal may determine whether the marked location is located in the autonomous driving area according to the coordinates of the marked location and the coordinate range of the autonomous driving area; this embodiment does not limit the specific determination manner.
Optionally, when the marked location indicated by the marking operation is located outside the autonomous driving area, the terminal determines the location in the autonomous driving area closest to the marked location as the destination.
In order to reduce the user's learning cost, when the marked location is outside the autonomous driving area, the terminal automatically determines the location in the autonomous driving area closest to the marked location as the destination, so that subsequent automatic driving can be carried out based on that destination.
Schematically, as shown in FIG. 8, when the autonomous driving area is a preset road in the virtual environment, if the marked location 81 marked on the map by the user lies outside the preset road, the terminal determines the location on the preset road closest to the marked location 81 as the destination 82.
In addition to automatically determining the destination from the marked location, in other possible implementation manners, when the marked location indicated by the marking operation is located outside the autonomous driving area, the terminal may instead display mark prompt information, which prompts the user to set the destination within the autonomous driving area, until the marked location indicated by the marking operation lies within the autonomous driving area.
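A minimal sketch of the "closest location in the area" rule, assuming the preset road is represented by a set of sample points (for example its waypoints) and that an `inside_area` predicate is available; all names are hypothetical:

```python
# Illustrative sketch; `inside_area` and `road_points` are assumptions.
import math

def choose_destination(mark, inside_area, road_points):
    """Return the mark itself if it lies in the autonomous driving area,
    otherwise the sampled road location closest to the mark."""
    if inside_area(mark):
        return mark  # the marked location itself becomes the destination
    return min(road_points,
               key=lambda p: math.hypot(p[0] - mark[0], p[1] - mark[1]))
```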
Step 504: Determine an automatic driving path according to the current location of the virtual vehicle and the destination, where the automatic driving path is located in the autonomous driving area.
Further, according to the current location of the virtual vehicle and the determined destination, the terminal determines an automatic driving path within the autonomous driving area.
Since there may be more than one path starting from the current location and ending at the destination (for example, when the autonomous driving area is a preset road, different forks may be taken when driving from the current location to the destination), in order to shorten the driving time of the virtual vehicle the automatic driving path is, optionally, the shortest path from the current location to the destination.
Regarding the manner of determining the shortest path, in a possible implementation manner the terminal uses a depth-first search algorithm, with path branch points as nodes, to determine at least one candidate path (each node being traversed only once), and then determines the shortest candidate path as the automatic driving path according to the lengths of the candidate paths, where the path branch points are branch points preset in the autonomous driving area. Of course, the terminal may also determine the candidate paths through other graph algorithms, which is not limited in this embodiment.
In other possible implementation manners, after determining at least one candidate path through a graph algorithm, the terminal displays the candidate paths on the map and determines the automatic driving path according to the user's selection operation; this is not limited in this embodiment.
Schematically, as shown in FIG. 9, the terminal determines an automatic driving path 91.
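The depth-first enumeration of candidate paths described above can be sketched as follows, with branch points as graph nodes. The adjacency-map representation and the function names are assumptions introduced for this illustration:

```python
# Illustrative sketch; `graph` maps a branch point to {neighbor: edge_length}.
def candidate_paths(graph, start, goal):
    """Depth-first enumeration of simple paths (each node visited at most once)."""
    paths = []

    def dfs(node, visited, path, length):
        if node == goal:
            paths.append((length, list(path)))
            return
        for neighbor, edge_len in graph.get(node, {}).items():
            if neighbor not in visited:
                visited.add(neighbor)
                path.append(neighbor)
                dfs(neighbor, visited, path, length + edge_len)
                path.pop()
                visited.remove(neighbor)

    dfs(start, {start}, [start], 0.0)
    return paths

def shortest_autopilot_path(graph, start, goal):
    paths = candidate_paths(graph, start, goal)
    return min(paths, key=lambda t: t[0])[1] if paths else None
```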
Step 505: Switch the virtual vehicle to the automatic driving mode, and control the virtual vehicle to drive to the destination along the automatic driving path.
In order to reduce the difficulty and the amount of computation involved in implementing automatic driving, in a possible implementation manner waypoints are preset in the autonomous driving area; correspondingly, the terminal controls the virtual vehicle to drive automatically to the destination according to the waypoints on the automatic driving path.
Optionally, this step includes the following sub-steps.
1. Determine at least two waypoints on the automatic driving path, where the waypoints are preset in the autonomous driving area.
Schematically, as shown in FIG. 9, a number of waypoints 92 are set on the preset road (autonomous driving area) in the virtual environment, and the waypoints on the automatic driving path 91 include K, G, D, E, and F.
2. Control the virtual vehicle to drive to the destination according to the waypoint order of the at least two waypoints.
The waypoint order refers to the order in which the waypoints on the automatic driving path are passed when travelling from the current location to the destination; in FIG. 9 the waypoint order is K→G→D→E→F.
In a possible implementation manner, when the automatic driving path includes k waypoints, the terminal controlling the virtual vehicle to drive to the destination according to the waypoint order includes the following steps.
1. Control the virtual vehicle to drive from the current location to the 1st waypoint in a first driving direction, where the first driving direction points from the current location to the 1st waypoint.
Optionally, if no waypoint is set at the current location of the virtual vehicle, the terminal determines the first driving direction from the current starting point and the 1st waypoint on the automatic driving path, and controls the virtual vehicle to drive to the 1st waypoint in the first driving direction.
Schematically, as shown in FIG. 9, since no waypoint is set at the current location of the virtual vehicle, the terminal first controls the virtual vehicle to drive to waypoint K (the 1st waypoint).
Of course, if a waypoint is set at the current location of the virtual vehicle, the terminal directly performs step 2.
2. Control the virtual vehicle to drive from the nth waypoint to the (n+1)th waypoint in a second driving direction, where the second driving direction points from the nth waypoint to the (n+1)th waypoint, and n is an integer greater than or equal to 1 and less than or equal to k-1.
In a possible implementation manner, the path between adjacent waypoints in the autonomous driving area is a straight line (or approximately a straight line) and contains no obstacles. Therefore, when the virtual vehicle drives to the nth waypoint, the terminal determines the second driving direction from the nth waypoint and the (n+1)th waypoint, and controls the virtual vehicle to drive to the (n+1)th waypoint in the second driving direction. By repeating this step, the virtual vehicle drives to the kth waypoint (that is, the last waypoint on the automatic driving path).
Schematically, as shown in FIG. 9, the terminal controls the virtual vehicle to pass through waypoints K, G, D, E, and F in sequence.
3. Control the virtual vehicle to drive from the kth waypoint to the destination in a third driving direction, where the third driving direction points from the kth waypoint to the destination.
Optionally, if no waypoint is set at the destination, the terminal determines the third driving direction from the kth waypoint and the destination, and controls the virtual vehicle to drive to the destination in the third driving direction.
Schematically, as shown in FIG. 9, since the destination is located between waypoints F and I (where no waypoint is set), the terminal controls the virtual vehicle to drive automatically to the destination in the direction pointing from waypoint F to the destination.
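A compact sketch of the waypoint-following procedure in sub-steps 1 to 3, assuming a simple 2D position representation and a hypothetical `drive_towards` primitive (turn toward a point and advance until it is reached), neither of which comes from the original disclosure:

```python
# Illustrative sketch; `drive_towards` and the attribute names are assumptions.
def follow_autopilot_path(vehicle, waypoints, destination, drive_towards):
    # Sub-step 1: current location -> 1st waypoint.
    # Sub-step 2: nth waypoint -> (n+1)th waypoint, n = 1 .. k-1.
    # Sub-step 3: kth waypoint -> destination.
    targets = list(waypoints) + [destination]
    for target in targets:
        direction = (target[0] - vehicle.position[0],
                     target[1] - vehicle.position[1])
        drive_towards(vehicle, target, direction)  # advance until `target` is reached
    vehicle.stop()
```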
Step 506: In response to the virtual vehicle driving to the destination, switch the virtual vehicle to the manual driving mode, and control the virtual vehicle to stop driving.
In a possible implementation manner, after the virtual vehicle has been controlled to drive to the destination through the above steps, the terminal automatically switches the virtual vehicle to the manual driving mode and controls the virtual vehicle to stop at the destination.
Optionally, since the marked location set by the user may not exactly coincide with the destination, after switching the virtual vehicle to the manual driving mode the terminal may automatically display the marked location on the map display control, so that the user can manually control the virtual vehicle to drive to the marked location according to the relative position relationship between the marked location and the destination.
Optionally, when the marked location is different from the destination, the terminal automatically controls the virtual vehicle to turn toward the direction in which the marked location lies.
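A hedged sketch of the arrival handling in step 506; `face_towards` and the other names are assumptions, and the orientation step reflects the optional behaviour described just above:

```python
# Illustrative sketch; all identifiers are assumptions.
def on_autopilot_arrived(terminal, vehicle, marked_location, destination):
    vehicle.mode = "manual"
    vehicle.stop()
    if marked_location != destination:
        # Keep the user's original mark visible so the player can finish the
        # last stretch manually, and orient the vehicle toward it.
        terminal.map.set_marker(marked_location)
        vehicle.face_towards(marked_location)
```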
In this embodiment, when the marked location manually set by the user lies outside the autonomous driving area, the terminal determines, within the autonomous driving area, the destination closest to the marked location, and then determines the automatic driving path according to that destination and the current location, thereby avoiding abnormal driving caused by the virtual vehicle automatically driving into a non-autonomous-driving area.
In addition, in this embodiment, by setting waypoints in the autonomous driving area, after the automatic driving path is determined the terminal can determine the driving direction of the virtual vehicle from the waypoints on the automatic driving path and control the virtual vehicle to drive automatically according to that driving direction. In other words, the terminal only needs to process and compute a small amount of data to implement automatic driving, which reduces the difficulty and the amount of computation involved.
At the same time, in this embodiment, by providing a collision detection box for the autonomous driving area and using that collision detection box to determine whether the virtual vehicle is located in the autonomous driving area, the process of determining the position of the virtual vehicle is simplified.
In a possible implementation manner, in the manual driving mode driving controls are displayed in the user interface and are in a clickable state. In order to prevent the user from accidentally touching the driving controls during automatic driving and thereby exiting the automatic driving mode, in the automatic driving mode the terminal sets the driving controls in the user interface to a non-clickable state, or cancels the display of the driving controls.
Correspondingly, when the virtual vehicle drives to the destination, the terminal sets the driving controls back to a clickable state, or restores the display of the driving controls, so that the user can continue to control the driving of the virtual vehicle manually.
Optionally, in the manual driving mode, in order to simulate a real driving scene, the virtual object cannot use virtual props; for example, the virtual object cannot use virtual supply bottles, cannot use virtual attack props to attack other virtual objects in the virtual environment, and cannot use virtual throwing props. Correspondingly, the terminal does not display prop use controls.
In the embodiments of this application, however, since the virtual vehicle can drive automatically in the virtual environment in the automatic driving mode without manual control by the user, the terminal displays prop use controls so that the user can use virtual props during automatic driving by triggering the prop use controls.
Optionally, the prop use control is a use control corresponding to a virtual attack prop, such as the shooting control of a virtual rifle; or a use control corresponding to a virtual supply prop, such as the use control of a virtual bandage; or a use control corresponding to a virtual throwing prop, such as the throwing control of a virtual grenade. The embodiments of this application do not limit the type of the prop use control. Correspondingly, when a trigger operation on the prop use control is received, the terminal controls the virtual object to use the virtual prop. It should be noted that when the virtual vehicle switches back to the manual driving mode, the terminal cancels the display of the prop use controls.
Schematically, as shown in FIG. 10, in the automatic driving mode the terminal cancels the display of the driving control 1004 in the user interface 1000, and displays the aiming control 1001 and the firing control 1002 in the user interface 1000.
If, in the automatic driving mode, the driving controls in the user interface are set to a non-clickable state or their display is cancelled, the user cannot manually control the virtual vehicle during automatic driving. In practice, when attacked by other virtual objects in the virtual environment, the user often needs to change the driving route to avoid the attack. Therefore, in a possible implementation manner, in the automatic driving mode the user interface displays a driving mode switching control; when a trigger operation on the driving mode switching control is received, the terminal switches the virtual vehicle to the manual driving mode and sets the driving controls to a clickable state, or restores the display of the driving controls.
Schematically, as shown in FIG. 10, in the automatic driving mode a driving mode switching control 1003 is displayed in the user interface 1000. When a click operation on the driving mode switching control 1003 is received, the terminal controls the virtual vehicle to exit the automatic driving mode and displays the driving control 1004 in the user interface 1000 again (while cancelling the display of the attack controls).
In this embodiment, in the automatic driving mode the terminal sets the driving controls to a non-clickable state, or cancels their display, to avoid exiting the automatic driving mode because the user accidentally touches the driving controls; at the same time, the terminal displays a driving mode switching control in the user interface so that the user can exit the automatic driving mode by triggering that control.
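A sketch of the user-interface state changes when entering and leaving the automatic driving mode. The control names loosely mirror FIG. 10, but the `ui` API itself is an assumption made for this illustration:

```python
# Illustrative sketch; the ui API and control names are assumptions.
def on_enter_autopilot(ui):
    ui.hide("driving_control")        # or: ui.set_clickable("driving_control", False)
    ui.show("aim_control")
    ui.show("fire_control")
    ui.show("mode_switch_control")    # lets the user return to manual driving

def on_exit_autopilot(ui):
    ui.show("driving_control")        # or: ui.set_clickable("driving_control", True)
    ui.hide("aim_control")
    ui.hide("fire_control")
```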
With reference to the above embodiments, in an illustrative example, the process of controlling the automatic driving of the virtual vehicle is shown in FIG. 11.
Step 1101: Manually control the virtual vehicle.
Step 1102: Determine whether the autonomous driving area has been entered. If so, perform step 1103; otherwise, return to step 1101.
Step 1103: Display prompt information indicating that automatic driving is available.
Step 1104: Determine whether a marking operation on the map has been received. If so, perform step 1105; otherwise, return to step 1103.
Step 1105: Display the destination corresponding to the marking operation on the map.
Step 1106: Determine whether a destination confirmation operation has been received. If so, perform step 1107; otherwise, return to step 1105.
Step 1107: Enter the automatic driving mode.
Step 1108: Determine whether the automatic driving path has been determined. If so, perform step 1109; otherwise, return to step 1107.
Step 1109: Control the driving of the virtual vehicle according to the waypoints on the automatic driving path.
Step 1110: Determine whether the destination has been reached. If so, perform step 1111; otherwise, return to step 1109.
Step 1111: Control the virtual vehicle to stop driving.
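The flow of FIG. 11 can be loosely paraphrased as a single per-frame update routine; the sketch below is only an informal aggregation of steps 1101 to 1111, and every helper it calls is a hypothetical name rather than part of the disclosure:

```python
# Illustrative paraphrase of steps 1101-1111; all helpers are assumptions.
def update_vehicle_control(terminal, vehicle, ui):
    """Called once per frame."""
    if vehicle.mode == "manual":
        terminal.apply_manual_input(vehicle)                        # step 1101
        if vehicle.in_autopilot_area():                             # step 1102
            ui.show_hint("Automatic driving available")             # step 1103
            mark = terminal.poll_map_mark()                         # step 1104
            if mark:
                destination = terminal.clamp_to_autopilot_area(mark)
                ui.show_destination(destination)                    # step 1105
                if terminal.destination_confirmed():                # step 1106
                    vehicle.mode = "autopilot"                      # step 1107
                    vehicle.path = terminal.plan_path(vehicle.position, destination)
    else:
        if vehicle.path:                                            # step 1108
            terminal.drive_along_waypoints(vehicle, vehicle.path)   # step 1109
        if vehicle.at_destination():                                # step 1110
            vehicle.stop()                                          # step 1111
            vehicle.mode = "manual"
```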
FIG. 12 is a structural block diagram of an apparatus for driving a vehicle in a virtual environment provided by an exemplary embodiment of this application. The apparatus includes:
a display module 1201, configured to display a driving picture and a map display control, where the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
a receiving module 1202, configured to receive, in response to the virtual vehicle being located in the autonomous driving area of the virtual environment, a marking operation on the map display control, where the marking operation is an operation of marking a location in the map display control; and
a control module 1203, configured to switch, in response to the marking operation, the virtual vehicle to an automatic driving mode, and control the virtual vehicle to drive automatically to a destination.
Optionally, the control module 1203 is configured to:
determine the destination according to the marked location indicated by the marking operation, where the destination is located in the autonomous driving area;
determine an automatic driving path according to the current location of the virtual vehicle and the destination, where the automatic driving path is located in the autonomous driving area; and
switch the virtual vehicle to the automatic driving mode, and control the virtual vehicle to drive to the destination along the automatic driving path.
Optionally, when controlling the virtual vehicle to drive to the destination along the automatic driving path, the control module 1203 is configured to:
determine at least two waypoints on the automatic driving path, where the waypoints are preset in the autonomous driving area; and
control the virtual vehicle to drive to the destination according to the waypoint order of the at least two waypoints.
Optionally, the automatic driving path includes k waypoints, where k is an integer greater than or equal to 2;
when controlling the virtual vehicle to drive to the destination according to the waypoint order of the at least two waypoints, the control module 1203 is configured to:
control the virtual vehicle to drive from the current location to the 1st waypoint in a first driving direction, where the first driving direction points from the current location to the 1st waypoint;
control the virtual vehicle to drive from the nth waypoint to the (n+1)th waypoint in a second driving direction, where the second driving direction points from the nth waypoint to the (n+1)th waypoint, and n is an integer greater than or equal to 1 and less than or equal to k-1; and
control the virtual vehicle to drive from the kth waypoint to the destination in a third driving direction, where the third driving direction points from the kth waypoint to the destination.
Optionally, when determining the destination according to the marked location indicated by the marking operation, the control module 1203 is configured to:
determine, in response to the marked location indicated by the marking operation being located in the autonomous driving area, the marked location as the destination; and
determine, in response to the marked location indicated by the marking operation being located outside the autonomous driving area, the location in the autonomous driving area closest to the marked location as the destination, or display mark prompt information, where the mark prompt information prompts the user to set the destination within the autonomous driving area.
Optionally, when determining the automatic driving path according to the current location of the virtual vehicle and the destination, the control module 1203 is configured to:
determine, with path branch points in the virtual environment as nodes, at least one candidate path between the current location and the destination through a depth-first search algorithm, where the path branch points are branch points preset in the autonomous driving area; and
determine the shortest candidate path as the automatic driving path.
Optionally, the receiving module 1202 is configured to:
determine, in response to a collision between a first collision detection box and a second collision detection box, that the virtual vehicle is located in the autonomous driving area, where the first collision detection box is the collision detection box corresponding to the virtual vehicle, and the second collision detection box is the collision detection box corresponding to the autonomous driving area.
Optionally, the apparatus further includes:
a first switching module, configured to switch, in response to the virtual vehicle driving to the destination, the virtual vehicle to the manual driving mode, and control the virtual vehicle to stop driving.
Optionally, driving controls are displayed in the manual driving mode, and the driving controls are in a clickable state;
the apparatus further includes:
a setting module, configured to set the driving controls to a non-clickable state in the automatic driving mode, or cancel the display of the driving controls; and
set, in response to the virtual vehicle driving to the destination, the driving controls to a clickable state, or restore the display of the driving controls.
Optionally, the apparatus further includes:
a second switching module, configured to display a driving mode switching control in the automatic driving mode; and
switch, in response to a trigger operation on the driving mode switching control, the virtual vehicle to the manual driving mode, and set the driving controls to a clickable state, or restore the display of the driving controls.
Optionally, no prop use control is displayed in the manual driving mode, where the prop use control is used to control the virtual object to use a virtual prop;
the apparatus further includes:
a prop control display module, configured to display the prop use control in the automatic driving mode; and
a prop use module, configured to control, in response to a trigger operation on the prop use control, the virtual object to use the virtual prop.
Optionally, the autonomous driving area includes a preset road in the virtual environment.
In summary, in the embodiments of this application, when the virtual vehicle drives into the autonomous driving area of the virtual environment in the manual driving mode, and a marking operation on the map display control is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and is controlled to drive automatically to the destination. The user does not need to control the virtual vehicle manually, which simplifies the process of controlling the driving of the virtual vehicle and reduces the difficulty of controlling the virtual vehicle in the virtual environment.
In this embodiment, when the marked location manually set by the user lies outside the autonomous driving area, the terminal determines, within the autonomous driving area, the destination closest to the marked location, and then determines the automatic driving path according to that destination and the current location, thereby avoiding abnormal driving caused by the virtual vehicle automatically driving into a non-autonomous-driving area.
In addition, in this embodiment, by setting waypoints in the autonomous driving area, after the automatic driving path is determined the driving direction of the virtual vehicle is determined from the waypoints on the automatic driving path, and the virtual vehicle is controlled to drive automatically according to that driving direction, which implements automatic driving while reducing the difficulty and the amount of computation involved.
At the same time, in this embodiment, by providing a collision detection box for the autonomous driving area and using that collision detection box to determine whether the virtual vehicle is located in the autonomous driving area, the process of determining the position of the virtual vehicle is simplified.
In this embodiment, in the automatic driving mode the terminal sets the driving controls to a non-clickable state, or cancels their display, to avoid exiting the automatic driving mode because the user accidentally touches the driving controls; at the same time, the terminal displays a driving mode switching control in the user interface so that the user can exit the automatic driving mode by triggering that control.
Please refer to FIG. 13, which shows a structural block diagram of a terminal 1300 provided by an exemplary embodiment of this application. The terminal 1300 may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player. The terminal 1300 may also be referred to by other names such as user equipment or portable terminal.
Generally, the terminal 1300 includes a processor 1301 and a memory 1302.
The processor 1301 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 1301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that needs to be displayed on the display screen. In some embodiments, the processor 1301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1302 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1302 may further include a high-speed random access memory and a non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1302 is configured to store at least one instruction, and the at least one instruction is configured to be executed by the processor 1301 to implement the methods provided in the embodiments of this application.
In some embodiments, the terminal 1300 may optionally further include a peripheral device interface 1303 and at least one peripheral device. Specifically, the peripheral device includes at least one of a radio frequency circuit 1304, a touch display screen 1305, a camera assembly 1306, an audio circuit 1307, a positioning component 1308, and a power supply 1309.
The peripheral device interface 1303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1301 and the memory 1302. In some embodiments, the processor 1301, the memory 1302, and the peripheral device interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1304 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 1304 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 1304 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1304 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to, the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may further include circuits related to NFC (Near Field Communication), which is not limited in this application.
The touch display screen 1305 is configured to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. The touch display screen 1305 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1301 as a control signal for processing. The touch display screen 1305 is used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display screen 1305, disposed on the front panel of the terminal 1300; in other embodiments, there may be at least two touch display screens 1305, respectively disposed on different surfaces of the terminal 1300 or in a folded design; in still other embodiments, the touch display screen 1305 may be a flexible display screen, disposed on a curved surface or a folded surface of the terminal 1300. The touch display screen 1305 may even be set to a non-rectangular irregular shape, that is, a shaped screen. The touch display screen 1305 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is configured to capture images or video. Optionally, the camera assembly 1306 includes a front camera and a rear camera. Generally, the front camera is used for video calls or selfies, and the rear camera is used for taking photos or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to implement a background blur function, and the main camera and the wide-angle camera are fused to implement panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 1306 may further include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1307 is configured to provide an audio interface between the user and the terminal 1300. The audio circuit 1307 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert the sound waves into electrical signals, which are input to the processor 1301 for processing or input to the radio frequency circuit 1304 for voice communication. For the purposes of stereo collection or noise reduction, there may be multiple microphones, respectively disposed at different parts of the terminal 1300. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is configured to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1307 may further include a headphone jack.
The positioning component 1308 is configured to locate the current geographic position of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1309 is configured to supply power to the components in the terminal 1300. The power supply 1309 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When the power supply 1309 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 1300 further includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to, an acceleration sensor 1311, a gyroscope sensor 1312, a pressure sensor 1313, a fingerprint sensor 1314, an optical sensor 1315, and a proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 1300. For example, the acceleration sensor 1311 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1301 may control the touch display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used to collect game or user motion data.
The gyroscope sensor 1312 can detect the body orientation and rotation angle of the terminal 1300, and can cooperate with the acceleration sensor 1311 to collect the user's 3D actions on the terminal 1300. Based on the data collected by the gyroscope sensor 1312, the processor 1301 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1313 may be disposed on a side frame of the terminal 1300 and/or a lower layer of the touch display screen 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, it can detect the user's holding signal on the terminal 1300 and perform left-hand/right-hand recognition or quick operations according to the holding signal. When the pressure sensor 1313 is disposed on the lower layer of the touch display screen 1305, operable controls on the UI can be controlled according to the user's pressure operation on the touch display screen 1305. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1314 is configured to collect the user's fingerprint so as to identify the user's identity according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or a manufacturer logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or the manufacturer logo.
The optical sensor 1315 is configured to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the touch display screen 1305 according to the ambient light intensity collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1305 is decreased. In another embodiment, the processor 1301 may also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
The proximity sensor 1316, also referred to as a distance sensor, is usually disposed on the front of the terminal 1300. The proximity sensor 1316 is configured to collect the distance between the user and the front of the terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually decreases, the processor 1301 controls the touch display screen 1305 to switch from the screen-on state to the screen-off state; when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually increases, the processor 1301 controls the touch display screen 1305 to switch from the screen-off state to the screen-on state.
Those skilled in the art can understand that the structure shown in FIG. 13 does not constitute a limitation on the terminal 1300, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
An embodiment of the present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for driving a vehicle in a virtual environment described in any of the foregoing embodiments.
The present application further provides a computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method for driving a vehicle in a virtual environment provided in the foregoing aspects.
A person of ordinary skill in the art can understand that all or part of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely optional embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (15)

  1. A method for driving a vehicle in a virtual environment, characterized in that the method is applied to a terminal and comprises:
    displaying a driving picture and a map display control, the driving picture being a picture of a virtual object driving a virtual vehicle in the virtual environment, and the virtual vehicle being in a manual driving mode;
    in response to the virtual vehicle being located in an autonomous driving area in the virtual environment, receiving a marking operation on the map display control, the marking operation being an operation of marking a location in the map display control;
    in response to the marking operation, switching the virtual vehicle to an automatic driving mode, and controlling the virtual vehicle to automatically drive to a destination.
  2. The method according to claim 1, characterized in that the switching the virtual vehicle to an automatic driving mode in response to the marking operation and controlling the virtual vehicle to automatically drive to a destination comprises:
    determining the destination according to a marked location indicated by the marking operation, the destination being located in the autonomous driving area;
    determining an automatic driving path according to a current location of the virtual vehicle and the destination, the automatic driving path being located in the autonomous driving area;
    switching the virtual vehicle to the automatic driving mode, and controlling the virtual vehicle to drive to the destination along the automatic driving path.
  3. The method according to claim 2, characterized in that the controlling the virtual vehicle to drive to the destination along the automatic driving path comprises:
    determining at least two waypoints on the automatic driving path, the waypoints being preset in the autonomous driving area;
    controlling the virtual vehicle to drive to the destination according to a waypoint order of the at least two waypoints.
  4. The method according to claim 3, characterized in that the automatic driving path comprises k waypoints, k being an integer greater than or equal to 2;
    the controlling the virtual vehicle to drive to the destination according to the waypoint order of the at least two waypoints comprises:
    controlling the virtual vehicle to drive from the current location to the 1st waypoint in a first driving direction, the first driving direction pointing from the current location to the 1st waypoint;
    controlling the virtual vehicle to drive from the n-th waypoint to the (n+1)-th waypoint in a second driving direction, the second driving direction pointing from the n-th waypoint to the (n+1)-th waypoint, n being an integer greater than or equal to 1 and less than or equal to k-1;
    controlling the virtual vehicle to drive from the k-th waypoint to the destination in a third driving direction, the third driving direction pointing from the k-th waypoint to the destination.
  5. The method according to claim 2, characterized in that the determining the destination according to the marked location indicated by the marking operation comprises:
    in response to the marked location indicated by the marking operation being located in the autonomous driving area, determining the marked location as the destination;
    in response to the marked location indicated by the marking operation being located outside the autonomous driving area, determining a location in the autonomous driving area closest to the marked location as the destination, or displaying marking prompt information, the marking prompt information being used to prompt that the destination be set within the autonomous driving area.
  6. The method according to claim 2, characterized in that the determining an automatic driving path according to the current location of the virtual vehicle and the destination comprises:
    determining at least one candidate path between the current location and the destination through a depth-first search algorithm with path branch points in the virtual environment as nodes, the path branch points being branch points preset in the autonomous driving area;
    determining the shortest candidate path as the automatic driving path.
  7. The method according to any one of claims 1 to 6, characterized in that the response to the virtual vehicle being located in the autonomous driving area in the virtual environment comprises:
    in response to a collision between a first collision detection box and a second collision detection box, determining that the virtual vehicle is located in the autonomous driving area, the first collision detection box being a collision detection box corresponding to the virtual vehicle, and the second collision detection box being a collision detection box corresponding to the autonomous driving area.
  8. The method according to any one of claims 1 to 6, characterized in that after the switching the virtual vehicle to an automatic driving mode in response to the marking operation and controlling the virtual vehicle to automatically drive to a destination, the method further comprises:
    in response to the virtual vehicle driving to the destination, switching the virtual vehicle to the manual driving mode, and controlling the virtual vehicle to stop driving.
  9. The method according to any one of claims 1 to 6, characterized in that a driving control is displayed in the manual driving mode, and the driving control is in a clickable state;
    the method further comprises:
    in the automatic driving mode, setting the driving control to a non-clickable state, or canceling display of the driving control;
    in response to the virtual vehicle driving to the destination, setting the driving control to the clickable state, or resuming display of the driving control.
  10. The method according to claim 9, characterized in that the method further comprises:
    in the automatic driving mode, displaying a driving mode switching control;
    in response to a trigger operation on the driving mode switching control, switching the virtual vehicle to the manual driving mode, and setting the driving control to the clickable state, or resuming display of the driving control.
  11. The method according to any one of claims 1 to 6, characterized in that a prop use control is not displayed in the manual driving mode, the prop use control being used to control the virtual object to use a virtual prop;
    after the switching the virtual vehicle to an automatic driving mode in response to the marking operation, the method further comprises:
    in the automatic driving mode, displaying the prop use control;
    in response to a trigger operation on the prop use control, controlling the virtual object to use the virtual prop.
  12. The method according to any one of claims 1 to 6, characterized in that the autonomous driving area comprises a preset road in the virtual environment.
  13. An apparatus for driving a vehicle in a virtual environment, characterized in that the apparatus comprises:
    a display module, configured to display a driving picture and a map display control, the driving picture being a picture of a virtual object driving a virtual vehicle in the virtual environment, and the virtual vehicle being in a manual driving mode;
    a receiving module, configured to receive, in response to the virtual vehicle being located in an autonomous driving area in the virtual environment, a marking operation on the map display control, the marking operation being an operation of marking a location in the map display control;
    a control module, configured to switch the virtual vehicle to an automatic driving mode in response to the marking operation, and control the virtual vehicle to automatically drive to a destination.
  14. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the method for driving a vehicle in a virtual environment according to any one of claims 1 to 12.
  15. A computer-readable storage medium, characterized in that the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for driving a vehicle in a virtual environment according to any one of claims 1 to 12.
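Purely for illustration, the following Python sketch outlines the behaviour recited in claims 3 to 8: candidate paths between the current location and the destination are enumerated by a depth-first search over preset branch points, the shortest candidate is taken as the automatic driving path, the vehicle then heads toward each waypoint in order, and membership in the autonomous driving area reduces to an overlap test between the two collision boxes of claim 7. The graph layout, the distance metric, and the vehicle interface (position, advance(), stop()) are assumptions made for the sketch and are not defined by this application.

    import math

    def dfs_candidate_paths(graph, start, goal, path=None):
        """Enumerate simple paths from start to goal; graph maps each branch point to its neighbours."""
        path = (path or []) + [start]
        if start == goal:
            return [path]
        found = []
        for neighbour in graph.get(start, []):
            if neighbour not in path:                 # do not revisit branch points
                found += dfs_candidate_paths(graph, neighbour, goal, path)
        return found

    def path_length(points, positions):
        return sum(math.dist(positions[a], positions[b]) for a, b in zip(points, points[1:]))

    def pick_driving_path(graph, positions, current, destination):
        """Claim 6: take the shortest of the depth-first-search candidates as the automatic driving path."""
        candidates = dfs_candidate_paths(graph, current, destination)
        return min(candidates, key=lambda p: path_length(p, positions)) if candidates else None

    def drive_along(vehicle, waypoints, positions, arrive_radius=1.0):
        """Claims 3 and 4: head toward each waypoint in order, ending at the destination."""
        for waypoint in waypoints:
            target = positions[waypoint]
            while math.dist(vehicle.position, target) > arrive_radius:
                vehicle.advance(toward=target)        # hypothetical per-tick movement call
        vehicle.stop()                                # claim 8: stop on arrival and return to manual mode

    def boxes_overlap(a_min, a_max, b_min, b_max):
        """Claim 7, axis-aligned form: the vehicle is in the area when the two collision boxes overlap."""
        return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

With a positions table mapping each branch point to coordinates, pick_driving_path followed by drive_along mirrors the flow of claim 2: determine the destination, determine a path that stays inside the autonomous driving area, then drive along it waypoint by waypoint.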
PCT/CN2020/128377 2020-02-04 2020-11-12 Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium WO2021155694A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020227008387A KR20220046651A (en) 2020-02-04 2020-11-12 Method and apparatus, and terminal and storage medium for driving a means of transportation in a virtual environment
JP2022520700A JP7374313B2 (en) 2020-02-04 2020-11-12 Methods, devices, terminals and programs for driving vehicles in virtual environments

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010080028.6A CN111228804B (en) 2020-02-04 2020-02-04 Method, device, terminal and storage medium for driving vehicle in virtual environment
CN202010080028.6 2020-02-04

Publications (1)

Publication Number Publication Date
WO2021155694A1 true WO2021155694A1 (en) 2021-08-12

Family

ID=70878167

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128377 WO2021155694A1 (en) 2020-02-04 2020-11-12 Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium

Country Status (4)

Country Link
JP (1) JP7374313B2 (en)
KR (1) KR20220046651A (en)
CN (1) CN111228804B (en)
WO (1) WO2021155694A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111228804B (en) * 2020-02-04 2021-05-14 腾讯科技(深圳)有限公司 Method, device, terminal and storage medium for driving vehicle in virtual environment
CN111760275A (en) * 2020-07-08 2020-10-13 网易(杭州)网络有限公司 Game control method and device and electronic equipment
CN112156474B (en) * 2020-09-25 2023-01-24 努比亚技术有限公司 Carrier control method, carrier control equipment and computer readable storage medium
CN112386912B (en) * 2021-01-21 2021-04-23 博智安全科技股份有限公司 Ground reconnaissance and visibility adjudication method, terminal device and computer-readable storage medium
CN113041619B (en) * 2021-04-26 2023-03-14 腾讯科技(深圳)有限公司 Control method, device, equipment and medium for virtual vehicle
US11760368B2 (en) * 2021-10-19 2023-09-19 Cyngn, Inc. System and method of same-loop adaptive simulation for autonomous driving
CN114011073B (en) * 2021-11-05 2023-07-14 腾讯科技(深圳)有限公司 Method, apparatus, device and computer readable storage medium for controlling carrier
CN115105077A (en) * 2022-06-22 2022-09-27 中国人民解放军空军特色医学中心 System for evaluating individual characteristics of flight personnel

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8456475B2 (en) * 2003-06-30 2013-06-04 Microsoft Corporation Motion line switching in a virtual environment
CN101241507B (en) * 2008-01-17 2011-09-14 腾讯科技(深圳)有限公司 Map road-seeking method and system
JP5036573B2 (en) 2008-01-22 2012-09-26 日本信号株式会社 Network generation device for total minimum cost route search, generation method, and route search device using this network
CN104931037B (en) * 2014-03-18 2018-12-25 厦门高德软件有限公司 A kind of navigation hint information generating method and device
JP6311429B2 (en) 2014-04-18 2018-04-18 株式会社デンソー Automatic operation plan display device and program for automatic operation plan display device
WO2018079764A1 (en) * 2016-10-31 2018-05-03 学 秋田 Portable terminal device, network game system, and race game processing method
CN106730841B (en) * 2017-01-17 2020-10-27 网易(杭州)网络有限公司 Path finding method and device
JP6962076B2 (en) 2017-09-01 2021-11-05 株式会社デンソー Vehicle driving control device and its control method
CN110559662B (en) * 2019-09-12 2021-01-26 腾讯科技(深圳)有限公司 Visual angle switching method, device, terminal and medium in virtual environment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070168288A1 (en) * 2006-01-13 2007-07-19 Trails.Com, Inc. Method and system for dynamic digital rights bundling
US20160059729A1 (en) * 2010-02-18 2016-03-03 Sony Corporation Information processing apparatus, motor-driven movable body, and discharge control method
CN108245888A (en) * 2018-02-09 2018-07-06 腾讯科技(深圳)有限公司 Virtual object control method, device and computer equipment
CN109011575A (en) * 2018-07-04 2018-12-18 苏州玩友时代科技股份有限公司 A kind of automatic method for searching, device and equipment
CN110681156A (en) * 2019-10-10 2020-01-14 腾讯科技(深圳)有限公司 Virtual role control method, device, equipment and storage medium in virtual world
CN111228804A (en) * 2020-02-04 2020-06-05 腾讯科技(深圳)有限公司 Method, device, terminal and storage medium for driving vehicle in virtual environment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023071739A1 (en) * 2021-10-26 2023-05-04 腾讯科技(深圳)有限公司 Coordinate axis display method and apparatus used in virtual environment, terminal and medium
CN115346362A (en) * 2022-06-10 2022-11-15 斑马网络技术有限公司 Driving data processing method and device, electronic equipment and storage medium
CN115346362B (en) * 2022-06-10 2024-04-09 斑马网络技术有限公司 Driving data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
JP2022551112A (en) 2022-12-07
CN111228804B (en) 2021-05-14
JP7374313B2 (en) 2023-11-06
KR20220046651A (en) 2022-04-14
CN111228804A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
WO2021155694A1 (en) Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium
CN110115838B (en) Method, device, equipment and storage medium for generating mark information in virtual environment
CN111265869B (en) Virtual object detection method, device, terminal and storage medium
CN111035918B (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN110665230B (en) Virtual role control method, device, equipment and medium in virtual world
CN111414080B (en) Method, device and equipment for displaying position of virtual object and storage medium
KR20210053990A (en) Virtual vehicle drifting method and device in a virtual world, and storage medium
CN112121422B (en) Interface display method, device, equipment and storage medium
CN108786110B (en) Method, device and storage medium for displaying sighting telescope in virtual environment
WO2021147468A1 (en) Method and apparatus for virtual character control in virtual environment, and device and medium
CN111338534A (en) Virtual object game method, device, equipment and medium
CN110681156B (en) Virtual role control method, device, equipment and storage medium in virtual world
CN112604305B (en) Virtual object control method, device, terminal and storage medium
TWI802978B (en) Method and apparatus for adjusting position of widget in application, device, and storage medium
WO2020156252A1 (en) Method, device, and apparatus for constructing building in virtual environment, and storage medium
CN113577765B (en) User interface display method, device, equipment and storage medium
CN111589127A (en) Control method, device and equipment of virtual role and storage medium
JP2024509064A (en) Location mark display method, device, equipment and computer program
CN113590070A (en) Navigation interface display method, navigation interface display device, terminal and storage medium
US20220184506A1 (en) Method and apparatus for driving vehicle in virtual environment, terminal, and storage medium
CN110812841B (en) Method, device, equipment and medium for judging virtual surface in virtual world
CN113289336A (en) Method, apparatus, device and medium for tagging items in a virtual environment
CN112755517A (en) Virtual object control method, device, terminal and storage medium
CN112915541A (en) Jumping point searching method, device, equipment and storage medium
CN112274936A (en) Method, device, equipment and storage medium for supplementing sub-props of virtual props

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20917555; Country of ref document: EP; Kind code of ref document: A1)
ENP  Entry into the national phase (Ref document number: 20227008387; Country of ref document: KR; Kind code of ref document: A)
ENP  Entry into the national phase (Ref document number: 2022520700; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 20917555; Country of ref document: EP; Kind code of ref document: A1)