CN111228804A - Method, device, terminal and storage medium for driving vehicle in virtual environment - Google Patents
Method, device, terminal and storage medium for driving a vehicle in a virtual environment
- Publication number
- CN111228804A (application number CN202010080028.6A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- virtual vehicle
- driving
- vehicle
- destination
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/803—Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- A63F13/537—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
- A63F13/5378—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8017—Driving on land or water; Flying
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Human Computer Interaction (AREA)
- Radar, Positioning & Navigation (AREA)
- Navigation (AREA)
- Traffic Control Systems (AREA)
Abstract
The application discloses a method, a device, a terminal and a storage medium for driving a vehicle in a virtual environment, and relates to the field of computers. The method includes: displaying a user interface, where the user interface includes a driving picture and a map, the driving picture is a picture of a virtual object driving a virtual vehicle in the virtual environment, and the virtual vehicle is in a manual driving mode; receiving a marking operation on the map in response to the virtual vehicle being located in an automatic driving area in the virtual environment, the marking operation being an operation of marking a place on the map; and in response to the marking operation, switching the virtual vehicle to an automatic driving mode and controlling the virtual vehicle to automatically travel to the destination. In the embodiments of the application, the terminal can control the virtual vehicle to travel automatically to the destination without the user manually controlling the virtual vehicle, which simplifies the process of controlling the virtual vehicle and reduces the difficulty of operating it.
Description
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method, an apparatus, a terminal, and a storage medium for driving a vehicle in a virtual environment.
Background
A first-person shooter (FPS) game is an application based on a three-dimensional virtual environment. A user can control a virtual object in the virtual environment to perform actions such as walking, running, climbing, and shooting, and multiple users can team up online to cooperatively complete a task in the same virtual environment.
When the virtual object needs to be controlled to go from its current location to another location in the virtual environment and the distance between the two locations is long, the user can control the virtual object to drive a virtual vehicle (such as a car, an airplane, or a motorcycle) placed in the virtual environment, so that the virtual vehicle carries the virtual object to the destination. The user needs to control the virtual vehicle through driving controls, which include a steering control, an acceleration control, a deceleration control, a brake control, a horn control, a gear-shifting control, and the like.
Because there are many driving controls, it is difficult for the user (especially one using a virtual vehicle for the first time) to control the virtual object to drive the vehicle.
Disclosure of Invention
The embodiments of the application provide a method, a device, a terminal and a storage medium for driving a vehicle in a virtual environment, which can reduce the difficulty of controlling a virtual object to drive a vehicle. The technical solutions are as follows:
In one aspect, an embodiment of the present application provides a method for driving a vehicle in a virtual environment, the method including:
displaying a user interface, where the user interface includes a driving picture and a map, the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
receiving a marking operation on the map in response to the virtual vehicle being located in an autopilot area in the virtual environment, the marking operation being an operation of marking a place in the map;
and in response to the marking operation, switching the virtual vehicle to an automatic driving mode and controlling the virtual vehicle to automatically travel to a destination.
In another aspect, an embodiment of the present application provides an apparatus for driving a vehicle in a virtual environment, where the apparatus includes:
a display module, configured to display a user interface, where the user interface includes a driving picture and a map, the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
a receiving module, configured to receive a marking operation on the map in response to the virtual vehicle being located in an autopilot area in the virtual environment, where the marking operation refers to an operation of marking a place in the map;
and a control module, configured to switch the virtual vehicle to an automatic driving mode in response to the marking operation, and control the virtual vehicle to automatically travel to a destination.
On the other hand, an embodiment of the present application provides a terminal, where the terminal includes: a processor and a memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of driving a vehicle in a virtual environment as described in the above aspect.
In another aspect, a computer-readable storage medium is provided having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the method of driving a vehicle in a virtual environment as described in the above aspect.
In another aspect, a computer program product is provided, which, when run on a computer, causes the computer to perform the method of driving a vehicle in a virtual environment as described in the above aspect.
The technical solutions provided in the embodiments of the application bring at least the following beneficial effects:
When the virtual vehicle travels in the manual driving mode into an automatic driving area in the virtual environment, if a marking operation on the map is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and controlled to travel automatically to the destination. The user does not need to control the virtual vehicle manually, which simplifies the driving flow of the virtual vehicle and reduces the difficulty of controlling it.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic interface diagram of manually controlling the driving of a virtual vehicle in the related art;
FIG. 2 is a schematic interface diagram illustrating a process of driving a vehicle in a virtual environment according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 4 illustrates a flow chart of a method for driving a vehicle in a virtual environment provided by an exemplary embodiment of the present application;
FIG. 5 illustrates a flow chart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of the collision detection box corresponding to an automatic driving area, provided by an exemplary embodiment;
FIG. 7 is a schematic diagram of the collision detection boxes corresponding to a virtual vehicle and to an automatic driving area, provided by an exemplary embodiment;
FIG. 8 is a schematic diagram of an implementation of a process for determining a destination based on a marked location;
FIG. 9 is a schematic diagram illustrating an implementation of controlling the virtual vehicle to automatically travel according to a waypoint on the autopilot path;
FIG. 10 is an interface schematic of a user interface in an automatic driving mode and a manual driving mode;
FIG. 11 illustrates a flow chart of a method for driving a vehicle in a virtual environment provided by another exemplary embodiment of the present application;
fig. 12 is a block diagram illustrating an apparatus for driving a vehicle in a virtual environment according to an exemplary embodiment of the present disclosure;
fig. 13 shows a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are described:
Virtual environment: the virtual environment displayed (or provided) when an application runs on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated, semi-fictional environment, or a purely fictional environment. It may be any of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, or a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with a three-dimensional virtual environment.
Virtual object: a movable object in the virtual environment. The movable object may be a virtual character, a virtual animal, a cartoon character, or the like, such as a character, an animal, a plant, an oil drum, a wall, or a stone displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space there.
Virtual vehicle: a vehicle in the virtual environment that a virtual object can drive and ride, which may be a virtual car, a virtual motorcycle, a virtual airplane, a virtual bicycle, a virtual tank, a virtual ship, or the like. Virtual vehicles may be placed randomly in the virtual environment; each has its own shape and volume in the three-dimensional virtual environment, occupies part of the space there, and can collide with other virtual objects in the three-dimensional virtual environment (such as houses and trees).
First-person shooter game: a shooting game that the user plays from a first-person perspective; the picture of the virtual environment in the game is a picture observed from the perspective of a first virtual object. In the game, at least two virtual objects fight a single round in the virtual environment. A virtual object survives by avoiding damage initiated by other virtual objects and dangers present in the virtual environment (such as the poison circle or swamps); when the health of a virtual object in the virtual environment drops to zero, its life in the round ends, and the virtual object that survives last is the winner. Optionally, the round starts when the first client joins the battle and ends when the last client exits it, and each client may control one or more virtual objects in the virtual environment. Optionally, the competitive modes of the battle may include a solo mode, a duo mode, or a squad mode, which is not limited in the embodiments of the present application.
User Interface (UI) controls: any visual control or element that can be seen on the user interface of the application, such as pictures, input boxes, text boxes, buttons, and labels. Some UI controls respond to user operations; for example, when the user triggers the UI control corresponding to a dagger prop, the virtual object is controlled to switch its currently used firearm to the dagger. As another example, when a vehicle is driven, driving controls are displayed on the user interface, and the user can control the virtual object to drive the virtual vehicle by triggering the driving controls.
The method provided in the present application may be applied to a virtual reality application, a three-dimensional map program, a military simulation program, a first-person shooter game, a multiplayer online battle arena (MOBA) game, and the like. The following embodiments take applications in games as an example.
A game based on a virtual environment often consists of one or more maps of the game world. The virtual environment in the game simulates scenes of the real world, and the user can control a virtual object in the game to walk, run, jump, shoot, fight, drive, switch virtual props, and use virtual props to damage other virtual objects in the virtual environment. The interactivity is strong, and multiple users can team up online for a competitive match.
Fig. 1 is a schematic interface diagram of controlling the driving of a virtual vehicle in the related art. When the user controls a virtual object to drive a virtual vehicle (a virtual car in fig. 1), a driving picture is displayed on the user interface 100, together with a map 101, driving controls (including a direction control 102, an acceleration control 103, and a brake control 104), and a vehicle fuel indicator 105. Through the map 101 the user can check the current position and surroundings of the virtual object; through the direction control 102 the user can make the virtual vehicle move forward, reverse, and steer; through the acceleration control 103 the user can accelerate the virtual vehicle; through the brake control 104 the user can stop it quickly; and through the fuel indicator 105 the user can learn the remaining fuel of the virtual vehicle.
When the virtual vehicle is driven manually in this way, the user needs to operate different driving controls according to the environment the virtual vehicle is currently in, and needs to select the driving route manually.
An embodiment of the present application provides a method for driving a vehicle in a virtual environment. Fig. 2 shows a schematic interface diagram of the process of driving a vehicle in a virtual environment according to an exemplary embodiment of the present application.
In one possible implementation, when the user controls the virtual vehicle to travel in the virtual environment through the driving controls (including those shown in fig. 1), if the virtual vehicle is in the automatic driving area, the terminal displays automatic driving prompt information 106 in the user interface 100, prompting the user to switch the virtual vehicle to the automatic driving mode. Further, when a trigger operation on the map 101 is received, the enlarged map 101 is displayed in the user interface 100, and the destination 107 marked on the map 101 by the user is received. After the destination is marked, the terminal switches the virtual vehicle to the automatic driving mode and controls the virtual vehicle to drive automatically to the destination, without the user manually touching a driving control. After the switch to the automatic driving mode, a driving mode switching control 108 is also displayed in the user interface 100, and the user can switch the virtual vehicle back to the manual driving mode by clicking the driving mode switching control 108, thereby controlling the driving of the virtual vehicle manually through the driving controls.
Compared with the related art, in which the user must manually operate the driving controls and choose the driving route throughout the trip, with the method provided by the embodiments of the application the user only needs to drive the virtual vehicle into the automatic driving area and set the automatic driving destination through the map; the terminal then determines the driving route automatically and controls the driving of the virtual vehicle, without further manual operation. This simplifies the control flow of the virtual vehicle, reduces the difficulty of operating it, and helps shorten the time needed for the virtual vehicle to reach the destination.
Referring to fig. 3, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application is shown. The implementation environment comprises: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 is installed and operated with an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, an MOBA game and a multi-player gun battle type survival game. The first terminal 120 is a terminal used by a first user who uses the first terminal 120 to control a first virtual object located in a virtual environment to perform activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, switching virtual props, using virtual props to injure other virtual objects. Illustratively, the first virtual object is a first virtual character, such as a simulated character object or an animated character object.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, the memory 142 including a display module 1421, a receiving module 1422, and a control module 1423. The server 140 is used to provide background services for applications that support a three-dimensional virtual environment. Alternatively, the server 140 undertakes primary computational work and the first and second terminals 120, 160 undertake secondary computational work; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
The second terminal 160 is installed and operated with an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, an MOBA game and a multi-player gun battle type survival game. The second terminal 160 is a terminal used by a second user who uses the second terminal 160 to control a second virtual object located in the virtual environment to perform activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, switching virtual props, using virtual props to injure other virtual objects. Illustratively, the second virtual object is a second virtual character, such as a simulated character object or an animated character object.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Alternatively, the first avatar and the second avatar may belong to the same team, the same organization, have a friend relationship, or have temporary communication rights.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application of different control system platforms. The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to one of a plurality of terminals, and this embodiment is only illustrated by the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include: at least one of a smartphone, a tablet, an e-book reader, a digital player, a laptop portable computer, and a desktop computer. The following embodiments are illustrated with the terminal comprising a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Referring to fig. 4, a flowchart of a method for driving a vehicle in a virtual environment according to an exemplary embodiment of the present application is shown. The embodiment is described by taking the method as an example for the first terminal 120 or the second terminal 160 in the implementation environment shown in fig. 3 or other terminals in the implementation environment, and the method includes the following steps.
Step 401, displaying a user interface, where the user interface includes a driving picture and a map, the driving picture is a picture of a virtual object driving a virtual vehicle in the virtual environment, and the virtual vehicle is in a manual driving mode.

The user interface is an interface of an application supporting a virtual environment and includes a virtual environment picture and controls corresponding to various functions. In the embodiments of the present application, the virtual environment picture is the driving picture.
Optionally, the virtual environment picture is a picture of the virtual environment observed from the perspective of the virtual object. The perspective is the observation angle used when observing the virtual environment from the first-person or third-person perspective of the virtual object. Optionally, in the embodiments of the present application, the perspective is the angle at which the virtual object is observed through a camera model in the virtual environment.

Optionally, the camera model automatically follows the virtual object in the virtual environment; that is, when the position of the virtual object in the virtual environment changes, the camera model moves with it and always stays within a preset distance of the virtual object. Optionally, the relative positions of the camera model and the virtual object do not change during this automatic following.

The camera model is a three-dimensional model located around the virtual object in the virtual environment. When the first-person perspective is used, the camera model is located near or at the head of the virtual object. When the third-person perspective is used, the camera model may be located behind the virtual object and bound to it, or at any position a preset distance away from the virtual object; through the camera model, the virtual object in the virtual environment can be observed from different angles. Optionally, when the third-person perspective is the first-person over-the-shoulder perspective, the camera model is located behind the virtual object (for example, at the head and shoulders of the virtual character). Besides the first-person and third-person perspectives, other perspectives are possible, such as a top-down perspective, in which the camera model may be located above the head of the virtual object and observes the virtual environment from the air. Optionally, the camera model is not actually displayed in the virtual environment; that is, it does not appear in the virtual environment displayed on the user interface.

Taking the camera model located at any position a preset distance away from the virtual object as an example: optionally, one virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center, for example around any point of the virtual object. During rotation, the camera model not only turns but also shifts in position, while the distance between the camera model and the rotation center stays constant; that is, the camera model moves over the surface of a sphere centered on the rotation center. Any point of the virtual object may serve as the rotation center, such as the head, the torso, or any point around the virtual object, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the virtual object, its viewing direction points from the point on the spherical surface where the camera model is located toward the center of the sphere.
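The spherical orbit described above reduces to a little spherical-coordinate arithmetic. The sketch below is only an illustration of that geometry (the function name and the yaw/pitch parameterization are assumptions, not taken from the patent): the camera stays at a fixed radius from the rotation center, and its view direction points back at the center.

```python
import math

def camera_position(center, radius, yaw_deg, pitch_deg):
    """Place a third-person camera on a sphere of constant radius around
    the rotation center, e.g. a point on the virtual object."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    cx, cy, cz = center
    # Spherical coordinates: rotating changes yaw/pitch but never the
    # distance to the center, so the camera slides over the sphere.
    x = cx + radius * math.cos(pitch) * math.sin(yaw)
    y = cy + radius * math.sin(pitch)
    z = cz + radius * math.cos(pitch) * math.cos(yaw)
    # The view angle's center points from the camera toward the sphere's center.
    look_dir = (cx - x, cy - y, cz - z)
    return (x, y, z), look_dir
```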
Schematically, as shown in fig. 2, the driving picture is a picture of the virtual environment observed from the third-person perspective. Of course, in other possible embodiments the driving picture may be a picture observed from the first-person perspective, which is not limited in this embodiment.

Optionally, other elements of the virtual environment are also displayed in the driving picture, including at least one of a mountain, flat ground, a river, a lake, an ocean, a desert, the sky, a plant, and a building.
Optionally, in addition to the driving picture, the user interface includes driving controls for controlling the virtual vehicle in the manual driving mode. The types and numbers of driving controls may differ for different virtual vehicles. For example, when the virtual object drives a virtual car, the driving controls displayed on the user interface may include a direction control, an acceleration control, and a brake control; when the virtual object drives a virtual motorcycle, the driving controls may include a direction control, an acceleration control, a brake control, a head-up control, and a head-down control. The embodiments of the present application do not limit the types and positions of the driving controls in the user interface.
Illustratively, as shown in FIG. 2, a direction control 102, an acceleration control 103, and a brake control 104 are included in the user interface 100.
Step 402, receiving a marking operation on the map in response to the virtual vehicle being located in an automatic driving area in the virtual environment, where the marking operation is an operation of marking a place on the map.

In the embodiments of the present application, the virtual vehicle cannot drive automatically in arbitrary areas of the virtual environment, but only in the automatic driving area. In one possible implementation, an automatic driving area is preset in the virtual environment, and when the virtual vehicle is located in the automatic driving area, the user can set the automatic driving destination through the map.
Optionally, the automatic driving area includes preset roads in the virtual environment; that is, the user needs to manually drive the virtual vehicle onto a preset road before the automatic driving destination can be set. Of course, other simple environment areas (areas containing few environment elements) in the virtual environment outside the preset roads may also be set as automatic driving areas; the specific type of the automatic driving area is not limited in this embodiment.
Optionally, the terminal detects in real time whether the virtual vehicle is located in the automatic driving area, and if the virtual vehicle is detected to be in the automatic driving area, prompt information is displayed on the user interface prompting the user to set an automatic driving destination through the map, thereby entering the automatic driving mode.
In one possible implementation, when a viewing operation on the map is received, the terminal displays the enlarged map on the user interface and then receives the marking operation on the map. The marking operation may be a click on a certain area of the map; correspondingly, the click position is the position of the marked place.

Of course, the user may also perform the marking operation on the map while the virtual vehicle is outside the automatic driving area, but the place indicated by that marking operation is not used for controlling automatic driving.

It should be noted that the marking operation is available when the virtual object controlled by the terminal is the driver of the virtual vehicle; accordingly, if the virtual object is only a passenger of the virtual vehicle, the user cannot perform the marking operation (i.e., has no authority to set automatic driving).
Step 403, in response to the marking operation, switching the virtual vehicle to the automatic driving mode and controlling the virtual vehicle to automatically travel to the destination.
Further, the terminal switches the virtual vehicle to the automatic driving mode according to the marking operation and determines the automatic driving destination, thereby controlling the virtual vehicle to drive there automatically. The driving path of the virtual vehicle from the current position to the destination is planned automatically by the terminal.
In one possible implementation, all virtual vehicles in the virtual environment support an autonomous driving mode.
In another possible implementation, only preset virtual vehicles in the virtual environment support the automatic driving mode. Correspondingly, the terminal switches the virtual vehicle to the automatic driving mode in response to the marking operation only when the virtual vehicle is a preset virtual vehicle. The preset virtual vehicles may include virtual cars, virtual tanks, and virtual ships, but not virtual bicycles or virtual motorcycles.
Optionally, in the automatic driving mode, the user interface displays mode prompt information to prompt the user that the virtual vehicle is currently in the automatic driving mode.
In one possible implementation, in the automatic driving mode the user cannot manually control the virtual vehicle; alternatively, the user can still take manual control through the driving controls, and once the virtual vehicle is controlled manually, it exits the automatic driving mode.
When the marking operation on the map is received again in the automatic driving mode, the terminal updates the destination according to the marking operation and controls the virtual vehicle to automatically travel to the updated destination.
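Taken together, the last few paragraphs describe a small mode-switching state machine. The sketch below is a minimal illustration under stated assumptions: the vehicle-type whitelist, the attribute names, and the event handlers are invented for the example and are not the patent's API.

```python
# Assumed whitelist based on the example above: cars, tanks and ships
# support automatic driving; bicycles and motorcycles do not.
AUTODRIVE_CAPABLE = {"car", "tank", "ship"}

class DrivingModeController:
    def __init__(self, vehicle_kind: str):
        self.vehicle_kind = vehicle_kind
        self.mode = "manual"
        self.destination = None

    def on_mark(self, marked_place, in_autodrive_area: bool, is_driver: bool):
        # A mark only takes effect for the driver of a supported vehicle
        # while the vehicle is inside the automatic driving area.
        if not is_driver or not in_autodrive_area:
            return
        if self.vehicle_kind not in AUTODRIVE_CAPABLE:
            return
        self.mode = "autopilot"            # enter (or stay in) autopilot
        self.destination = marked_place    # re-marking updates the destination

    def on_manual_input(self):
        # In the variant where manual input is still allowed, any touch on a
        # driving control exits the automatic driving mode.
        self.mode = "manual"
```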
In summary, in the embodiments of the present application, when the virtual vehicle traveling in the manual driving mode enters the automatic driving area in the virtual environment, if a marking operation on the map is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and controlled to travel automatically to the destination, without the user manually controlling the virtual vehicle. This simplifies the flow of controlling the virtual vehicle and reduces the difficulty of controlling it.
Unlike the automatic driving function of a real vehicle (which requires complex image recognition technologies such as vehicle recognition and lane recognition), in the embodiments of the present application, to reduce the difficulty and the amount of computation of implementing the automatic driving function, the virtual vehicle can drive automatically only in an automatic driving area (such as a preset road); that is, the automatic driving path of the virtual vehicle lies within the automatic driving area. The following exemplary embodiment describes the process of implementing the automatic driving function.
Referring to fig. 5, a flowchart of a method for driving a vehicle in a virtual environment according to another exemplary embodiment of the present application is shown. The embodiment is described by taking the method as an example for the first terminal 120 or the second terminal 160 in the implementation environment shown in fig. 3 or other terminals in the implementation environment, and the method includes the following steps.
Step 501, displaying a user interface. The implementation of step 501 may refer to step 401 and is not repeated here.
Step 502, receiving a marking operation on the map in response to the virtual vehicle being located in the automatic driving area in the virtual environment.

Regarding how to determine whether the virtual vehicle is located in the automatic driving area: in one possible implementation, in addition to the collision detection boxes set for virtual objects in the virtual environment (such as virtual vehicles, virtual houses, virtual roadblocks, and virtual trees), in this embodiment the automatic driving area in the virtual environment is also provided with a collision detection box, which is used to detect other virtual objects entering the automatic driving area.
Illustratively, as shown in fig. 6, when the automatic driving area is a preset road in the virtual environment, each preset road corresponds to a respective collision detection box 61 (the dotted line area in the figure is the range of the collision detection box).
Optionally, in response to a collision between the first collision detection box and the second collision detection box, the terminal determines that the virtual vehicle is located in the autopilot zone, where the first collision detection box is a collision detection box corresponding to the virtual vehicle, and the second collision detection box is a collision detection box corresponding to the autopilot zone.
Illustratively, as shown in fig. 7, when the first collision detection box 71 corresponding to the virtual car collides with the second collision detection box 72 corresponding to the virtual road, the terminal determines that the virtual car is located in the automatic driving area.
Of course, besides the above manner of determining whether the virtual vehicle is located in the automatic driving area, in other possible embodiments the terminal may also make the determination from the coordinates of the virtual vehicle's position in the virtual environment and the coordinate range corresponding to the automatic driving area (when the coordinates fall within the range, the virtual vehicle is determined to be located in the automatic driving area), which is not limited in this embodiment.
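Both determination strategies above are inexpensive. The sketch below shows the collision-box variant as an axis-aligned overlap test; the box layout and all names are assumptions for illustration, and the coordinate-range variant is essentially the same test applied to the vehicle's position instead of its box.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned collision detection box (illustrative layout)."""
    min_x: float
    max_x: float
    min_y: float
    max_y: float
    min_z: float
    max_z: float

def boxes_collide(a: Box, b: Box) -> bool:
    # Two boxes overlap iff their extents overlap on every axis.
    return (a.min_x <= b.max_x and a.max_x >= b.min_x and
            a.min_y <= b.max_y and a.max_y >= b.min_y and
            a.min_z <= b.max_z and a.max_z >= b.min_z)

def in_autodrive_area(vehicle_box: Box, road_boxes: list[Box]) -> bool:
    """Each preset road carries its own detection box (fig. 6); the vehicle
    is in the automatic driving area once any road box overlaps its box."""
    return any(boxes_collide(vehicle_box, road) for road in road_boxes)
```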
Further, when the virtual vehicle is located in the automatic driving area, the terminal receives a marking operation on the map; for the process of receiving the marking operation, refer to step 402, which is not repeated here.
Step 503, determining the destination according to the marked place indicated by the marking operation, where the destination is located in the automatic driving area.

In this embodiment, the virtual vehicle can drive automatically only in the automatic driving area. To prevent the virtual vehicle from following a marked place into an area outside the automatic driving area, which would cause abnormal driving (for example, colliding with an obstacle in the virtual environment), in one possible implementation the terminal determines a destination inside the automatic driving area according to the marked place indicated by the marking operation.
Optionally, when the marked place indicated by the marking operation is located in the automatic driving area, the terminal determines the marked place as the destination. The terminal may judge whether the marked place is located in the automatic driving area from the coordinates of the marked place and the coordinate range of the automatic driving area; the specific judging manner is not limited in this embodiment.
Optionally, when the marked place indicated by the marking operation is located outside the automatic driving area, the terminal determines the place inside the automatic driving area closest to the marked place as the destination.

To reduce the user's learning cost, this closest place is determined by the terminal automatically, and automatic driving then proceeds based on that destination.
Illustratively, as shown in fig. 8, when the automatic driving area is a preset road in the virtual environment, if a marked point 81 marked on the map by the user is located outside the preset road, the terminal determines a point on the preset road closest to the marked point 81 as a destination 82.
Besides determining the destination automatically from the marked place, in other possible embodiments, when the marked place indicated by the marking operation is located outside the automatic driving area, the terminal may display marking prompt information, which prompts the user to set the marked place inside the automatic driving area, until the marked place indicated by the marking operation is located inside the automatic driving area.
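As a concrete illustration of the two determination rules above, the following sketch clamps the marked place to the automatic driving area. The membership test and the sampled road points are assumptions; a real client would use whatever area representation the engine keeps.

```python
import math

def resolve_destination(marked, in_area, road_points):
    """Return the automatic driving destination for a marked map place.

    `in_area(p)` is an assumed membership test (e.g. the coordinate-range
    check mentioned earlier); `road_points` is an assumed sampling of the
    preset road."""
    if in_area(marked):
        return marked  # the marked place itself serves as the destination
    # Otherwise take the area point closest to the mark (fig. 8).
    return min(road_points, key=lambda p: math.dist(p, marked))
```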
Step 504, determining an automatic driving path according to the current location of the virtual vehicle and the destination, where the automatic driving path is located in the automatic driving area.

Further, the terminal determines an automatic driving path inside the automatic driving area according to the current location of the virtual vehicle and the determined destination.
Since there may be more than one route starting at the current location and ending at the destination (for example, when the automatic driving area is a preset road, different branches may be taken on the way from the current location to the destination), in order to shorten the travel time of the virtual vehicle, the automatic driving path is optionally the shortest path from the current location to the destination.

Regarding how the shortest path is determined: in one possible implementation, the terminal takes the branch points of the paths as nodes and determines at least one candidate path through a depth-first search algorithm (each node is visited at most once), and then determines the shortest candidate path as the automatic driving path according to the lengths of the candidate paths. Of course, the terminal may also determine the candidate paths through other graph algorithms, which is not limited in this embodiment.

In other possible embodiments, after determining at least one candidate path through a graph algorithm, the terminal displays the candidate paths on the map and determines the automatic driving path according to the user's selection operation, which is not limited in this embodiment.
Illustratively, as shown in fig. 9, the terminal determines an automated driving route 91.
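The depth-first enumeration described above fits in a few lines. The sketch below is illustrative rather than the patent's code: the road network is assumed to be a mapping from each branch-point node to its neighbors and segment lengths, and the node labels echo the waypoints of fig. 9.

```python
def candidate_paths(graph, start, goal, path=()):
    """Depth-first enumeration of all simple paths from start to goal,
    visiting each node at most once, as in the embodiment above."""
    path = path + (start,)
    if start == goal:
        yield path
        return
    for neighbor in graph[start]:
        if neighbor not in path:
            yield from candidate_paths(graph, neighbor, goal, path)

def shortest_autodrive_path(graph, start, goal):
    # Pick the candidate with the smallest total segment length.
    def length(p):
        return sum(graph[a][b] for a, b in zip(p, p[1:]))
    return min(candidate_paths(graph, start, goal), key=length, default=None)

# Example: a tiny assumed road network whose nodes echo fig. 9's labels.
roads = {"K": {"G": 2}, "G": {"K": 2, "D": 3}, "D": {"G": 3, "E": 1},
         "E": {"D": 1, "F": 2}, "F": {"E": 2}}
assert shortest_autodrive_path(roads, "K", "F") == ("K", "G", "D", "E", "F")
```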
Step 505, switching the virtual vehicle to the automatic driving mode, and controlling the virtual vehicle to travel to the destination along the automatic driving path.
In order to reduce the difficulty and the amount of computation of automatic driving, in one possible implementation waypoints are preset in the automatic driving area, and correspondingly the terminal controls the virtual vehicle to drive automatically to the destination according to the waypoints on the automatic driving path.
Optionally, this step includes the following substeps.
Firstly, at least two waypoints on an automatic driving path are determined, and the waypoints are preset in an automatic driving area.
Schematically, as shown in fig. 9, a plurality of waypoints 92 are provided on a preset road (an automatic driving area) in the virtual environment, and the waypoints on the automatic driving path 91 include K, G, D, E, and F.
And secondly, controlling the virtual vehicle to travel to the destination according to the sequence of the waypoints of the at least two waypoints.
The waypoint sequence is the order of the waypoints passed from the current location to the destination along the automatic driving path; in fig. 9 the sequence is K → G → D → E → F.
In one possible implementation, when the automatic driving path includes k waypoints, the terminal controls the virtual vehicle to travel to the destination in waypoint order through the following steps.
First, the virtual vehicle is controlled to travel from the current location to the 1st waypoint along a first traveling direction, the first traveling direction pointing from the current location to the 1st waypoint.
Optionally, if no waypoint is set at the current location of the virtual vehicle, the terminal determines the first driving direction according to the current starting point and the 1 st waypoint on the automatic driving path, so as to control the virtual vehicle to drive to the 1 st waypoint according to the first driving direction.
Schematically, as shown in fig. 9, since no waypoint is set at the current location of the virtual vehicle, the terminal first controls the virtual vehicle to travel to waypoint K (the 1st waypoint).
Of course, if the current location of the virtual vehicle is provided with the waypoint, the terminal directly executes the step two.
And secondly, controlling the virtual vehicle to drive from the nth road point to the (n + 1) th road point according to a second driving direction, wherein the second driving direction points to the (n + 1) th road point from the nth road point, and n is an integer which is greater than or equal to 1 and less than or equal to k-1.
In one possible implementation, the path between adjacent waypoints in the automatic driving area is a straight line (or nearly straight) and contains no obstacles. Therefore, when the virtual vehicle reaches the nth waypoint, the terminal determines the second traveling direction from the nth and (n+1)th waypoints, thereby controlling the virtual vehicle to travel along the second traveling direction to the (n+1)th waypoint. By repeating this step, the virtual vehicle travels to the kth waypoint (i.e., the last waypoint on the automatic driving path).
Illustratively, as shown in fig. 9, the terminal controls the virtual vehicle to pass through waypoints K, G, D, E, and F in sequence.
And thirdly, controlling the virtual vehicle to travel from the kth road point to the destination according to a third traveling direction, wherein the third traveling direction points to the destination from the kth road point.
Optionally, if no waypoint is set at the destination, the terminal determines a third traveling direction according to the kth waypoint and the destination, so as to control the virtual vehicle to travel to the destination according to the third traveling direction.
Schematically, as shown in fig. 9, since the destination lies between waypoints F and I (where no waypoint is set), the terminal controls the virtual vehicle to travel automatically to the destination along the direction from waypoint F to the destination.
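The three traveling-direction rules above amount to walking a point list and normalizing successive difference vectors. A minimal sketch under assumed two-dimensional coordinates follows (per-frame movement and arrival checks are omitted):

```python
import math

def unit_direction(src, dst):
    """Unit vector pointing from src to dst — a 'traveling direction'."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)

def traveling_directions(current, waypoints, destination):
    """Yield the successive directions: current location -> waypoint 1,
    waypoint n -> waypoint n+1 (n = 1..k-1), and waypoint k -> destination.
    If the current location or the destination coincides with a waypoint,
    the corresponding zero-length leg simply yields a degenerate direction."""
    points = [current, *waypoints, destination]
    for src, dst in zip(points, points[1:]):
        yield unit_direction(src, dst)
```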
In one possible embodiment, after controlling the virtual vehicle to travel to the destination through the above steps, the terminal automatically switches the virtual vehicle to the manual driving mode and controls the virtual vehicle to stop at the destination.
Optionally, since the marked place set by the user may not coincide exactly with the destination, after the terminal switches the virtual vehicle to the manual driving mode, the marked place may be displayed automatically on the map, so that the user can manually drive the virtual vehicle to the marked place according to the relative position of the marked place and the destination.
Optionally, when the marked place differs from the destination, the terminal automatically steers the virtual vehicle toward the direction of the marked place.
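The arrival behavior of the last three paragraphs can be condensed into one handler. Everything on `terminal` and `vehicle` below is an assumed name, and the final step follows the reading above that the vehicle is turned toward the marked place:

```python
import math

def on_destination_reached(terminal, vehicle, marked_place, destination):
    """Arrival handling sketched from the description above; the helper
    names are assumptions, not the patent's API."""
    vehicle.mode = "manual"    # automatically switch back to manual driving
    vehicle.speed = 0.0        # stop at the destination
    if marked_place != destination:
        terminal.map.show_marker(marked_place)  # keep the marked place visible
        # Face the marked place so the user can drive the rest of the way.
        dx = marked_place[0] - destination[0]
        dy = marked_place[1] - destination[1]
        vehicle.heading = math.atan2(dy, dx)
```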
In this embodiment, when the marked place manually set by the user lies outside the automatic driving area, the terminal determines the destination closest to the marked place from within the automatic driving area and then determines the automatic driving path according to the destination and the current location, which prevents the abnormal driving caused by the virtual vehicle automatically traveling into a non-automatic-driving area.
In addition, in this embodiment, waypoints are set in the automatic driving area; after the automatic driving path is determined, the traveling direction of the virtual vehicle is determined according to the waypoints on the path, and the virtual vehicle is then controlled to drive automatically along that direction, which reduces the difficulty and the amount of computation of implementing automatic driving.
Meanwhile, in this embodiment, a collision detection box is set for the automatic driving area, so that the collision detection box can be used to determine whether the virtual vehicle is located in the automatic driving area, which helps simplify determining the position of the virtual vehicle.
In one possible implementation, in the manual driving mode the driving controls are displayed in the user interface in a clickable state. To prevent the user from exiting the automatic driving mode by touching a driving control accidentally during automatic driving, in the automatic driving mode the terminal sets the driving controls in the user interface to a non-clickable state, or stops displaying them.
Correspondingly, when the virtual vehicle runs to the destination, the terminal sets the driving control to be in a clickable state, or resumes displaying the driving control, so that the user continues to manually control the virtual vehicle to run.
Optionally, in the automatic driving mode, in order to enable the user to use the virtual item to attack other virtual objects in the virtual environment, the terminal displays an attack control on the user interface, so that the user can attack by triggering the attack control.
Illustratively, as shown in fig. 10, the terminal in the autopilot mode cancels the display of the driving control 1004 in the user interface 1000 and displays the aiming control 1001 and the firing control 1002 in the user interface 1000.
If the driving controls are set to a non-clickable state or hidden in the automatic driving mode, the user cannot manually control the virtual vehicle during automatic driving. In practice, however, when attacked by other virtual objects in the virtual environment, the user often needs to change the driving route to evade the attack. Therefore, in one possible implementation, a driving mode switching control is displayed on the user interface in the automatic driving mode; when a trigger operation on the driving mode switching control is received, the terminal switches the virtual vehicle to the manual driving mode and sets the driving controls to a clickable state, or displays them again.
Illustratively, as shown in fig. 10, in the automatic driving mode, a driving mode switching control 1003 is displayed in the user interface 1000, and when a click operation on the driving mode switching control 1003 is received, the terminal controls the virtual vehicle to exit the automatic driving mode, and displays the driving control 1004 in the user interface 1000 again (simultaneously, cancels the display of the attack control).
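The control-state changes described above reduce to a per-mode toggle. A sketch with invented UI object and attribute names (the patent does not specify the client's UI API); the control set mirrors fig. 10:

```python
def apply_driving_mode(ui, mode, hide_driving_controls=True):
    """Toggle control states per driving mode (all names are assumptions)."""
    if mode == "autopilot":
        if hide_driving_controls:
            ui.driving_controls.visible = False     # cancel display, or ...
        else:
            ui.driving_controls.clickable = False   # ... keep visible but inert
        ui.aim_control.visible = True                # attack controls (fig. 10)
        ui.fire_control.visible = True
        ui.mode_switch_control.visible = True        # exit path back to manual
    else:  # manual driving mode
        ui.driving_controls.visible = True
        ui.driving_controls.clickable = True
        ui.aim_control.visible = False
        ui.fire_control.visible = False
        ui.mode_switch_control.visible = False
```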
In this embodiment, in the automatic driving mode the terminal sets the driving controls to a non-clickable state or stops displaying them, preventing the user from exiting the automatic driving mode through accidental touches; meanwhile, the terminal displays a driving mode switching control on the user interface, so that the user can exit the automatic driving mode by triggering that control.
With reference to the above embodiments, in an illustrative example, the flow of controlling automatic driving of the virtual vehicle is shown in fig. 11.
Step 1101, manually controlling the virtual vehicle.
Step 1102, determining whether the virtual vehicle enters the automatic driving area. If it enters the automatic driving area, step 1103 is performed; if not, the flow returns to step 1101.
Step 1103, displaying the automatic driving prompt information.
Step 1104, determining whether a marking operation on the map is received. If yes, step 1105 is performed; if not, the flow returns to step 1103.
Step 1105, displaying the destination corresponding to the marking operation in the map.
Step 1106, determining whether a destination determination operation is received. If yes, step 1107 is performed; if not, the flow returns to step 1105.
Step 1107, enter autonomous driving mode.
Step 1108, determining whether the automatic driving path is determined. If yes, step 1109 is performed; if not, the flow returns to step 1107.
And step 1109, controlling the virtual vehicle to run according to the road points on the automatic driving path.
Step 1110, determining whether the destination is reached. If yes, step 1111 is performed; if not, the flow returns to step 1109.
Step 1111, controlling the virtual vehicle to stop running.
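Read as code, the flow of fig. 11 is a single loop with two waiting phases. All helper names below are assumptions introduced for this sketch:

```python
def autodrive_flow(terminal, vehicle):
    # Steps 1101-1102: manual driving until the vehicle enters the area.
    while not terminal.in_autodrive_area(vehicle):
        terminal.manual_drive(vehicle)
    terminal.show_prompt("Automatic driving is available")    # step 1103
    mark = terminal.wait_for_mark()                           # steps 1104-1105
    destination = terminal.wait_for_confirmation(mark)        # step 1106
    terminal.enter_autopilot(vehicle)                         # step 1107
    path = terminal.plan_path(vehicle.position, destination)  # step 1108
    for waypoint in path:                                     # steps 1109-1110
        terminal.drive_to(vehicle, waypoint)
    terminal.stop(vehicle)                                    # step 1111
```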
Fig. 12 is a block diagram of an apparatus for driving a vehicle in a virtual environment according to an exemplary embodiment of the present application. The apparatus may be disposed in the first terminal 120 or the second terminal 160 in the implementation environment shown in fig. 3, or in another terminal in that implementation environment. The apparatus includes:
the display module 1201 is configured to display a user interface, where the user interface includes a driving picture and a map, the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
a receiving module 1202, configured to receive a marking operation on the map in response to that the virtual vehicle is located in an autopilot area in the virtual environment, where the marking operation refers to an operation of marking a place in the map;
a control module 1203, configured to switch the virtual vehicle to an automatic driving mode in response to the marking operation, and control the virtual vehicle to automatically travel to a destination.
Optionally, the control module 1203 is configured to:
determining the destination according to a marking place indicated by the marking operation, wherein the destination is positioned in the automatic driving area;
determining an automatic driving path according to the current location of the virtual vehicle and the destination, wherein the automatic driving path is located in the automatic driving area;
and switching the virtual vehicle to the automatic driving mode, and controlling the virtual vehicle to travel to the destination according to the automatic driving path.
Optionally, when controlling the virtual vehicle to travel to the destination according to the automatic driving path, the control module 1203 is configured to:
determining at least two waypoints on the automatic driving path, wherein the waypoints are preset in the automatic driving area;
and controlling the virtual vehicle to travel to the destination according to the waypoint sequence of the at least two waypoints.
Optionally, the automatic driving path includes k waypoints, where k is an integer greater than or equal to 2;
when controlling the virtual vehicle to travel to the destination according to the waypoint sequence of the at least two waypoints, the control module 1203 is configured to:
controlling the virtual vehicle to travel from the current location to the 1st waypoint according to a first traveling direction, wherein the first traveling direction points from the current location to the 1st waypoint;
controlling the virtual vehicle to travel from the nth waypoint to the (n+1)th waypoint according to a second traveling direction, wherein the second traveling direction points from the nth waypoint to the (n+1)th waypoint, and n is an integer greater than or equal to 1 and less than or equal to k-1;
and controlling the virtual vehicle to travel from the kth waypoint to the destination according to a third traveling direction, wherein the third traveling direction points from the kth waypoint to the destination.
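In code, the three traveling directions above reduce to one normalized direction per leg of the path. The sketch below is illustrative only; the 2D coordinates and the function name travel_directions are assumptions introduced here.

```python
import math

def travel_directions(current, waypoints, destination):
    """Yield the normalized traveling direction for each leg of the path:
    current -> waypoint 1 (first direction), waypoint n -> waypoint n+1
    (second directions), waypoint k -> destination (third direction)."""
    points = [current, *waypoints, destination]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0  # guard against coincident points
        yield (dx / length, dy / length)

# Example: the three legs of a path with k = 2 waypoints.
legs = list(travel_directions((0, 0), [(10, 0), (10, 5)], (20, 5)))
```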
Optionally, when determining the destination according to the marked location indicated by the marking operation, the control module 1203 is configured to:
determining the marked location as the destination in response to the marked location indicated by the marking operation being located in the automatic driving area;
and in response to the marked location indicated by the marking operation being located outside the automatic driving area, determining the place in the automatic driving area closest to the marked location as the destination, or displaying marking prompt information, wherein the marking prompt information is used to prompt setting the marked location within the automatic driving area.
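The destination fallback just described can be sketched as follows. Note the assumptions: the closest "place in the automatic driving area" is approximated here by the closest preset waypoint, and all names are hypothetical.

```python
def resolve_destination(mark, in_area, area_waypoints):
    """Return the marked location itself when it lies in the automatic
    driving area; otherwise return the closest preset waypoint of the
    area (a stand-in for the closest place in the area)."""
    if in_area(mark):
        return mark
    return min(
        area_waypoints,
        key=lambda p: (p[0] - mark[0]) ** 2 + (p[1] - mark[1]) ** 2,
    )
```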
Optionally, the receiving module 1202 is configured to:
determining that the virtual vehicle is located in the automatic driving area in response to a collision between a first collision detection box and a second collision detection box, the first collision detection box being the collision detection box corresponding to the virtual vehicle, and the second collision detection box being the collision detection box corresponding to the automatic driving area.
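In engine terms this amounts to an ordinary bounding-box overlap test. A minimal sketch follows, assuming axis-aligned boxes given as (min_x, min_y, min_z, max_x, max_y, max_z); a real game engine would typically deliver this as a collision event rather than a polled check.

```python
def boxes_collide(a, b):
    """Axis-aligned overlap test between two collision detection boxes,
    each given as (min_x, min_y, min_z, max_x, max_y, max_z)."""
    return (a[0] <= b[3] and b[0] <= a[3] and
            a[1] <= b[4] and b[1] <= a[4] and
            a[2] <= b[5] and b[2] <= a[5])

def vehicle_in_autodrive_area(vehicle_box, area_boxes):
    # The vehicle counts as inside the automatic driving area as soon as
    # its collision box overlaps any collision box attached to the area.
    return any(boxes_collide(vehicle_box, box) for box in area_boxes)
```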
Optionally, the apparatus further comprises:
the first switching module is configured to, in response to the virtual vehicle traveling to the destination, switch the virtual vehicle to the manual driving mode and control the virtual vehicle to stop traveling.
Optionally, in the manual driving mode, a driving control is displayed in the user interface, and the driving control is in a clickable state;
the device further comprises:
the setting module is configured to, in the automatic driving mode, set the driving control in the user interface to a non-clickable state, or cancel display of the driving control;
and, in response to the virtual vehicle traveling to the destination, set the driving control to a clickable state, or resume display of the driving control.
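A simple illustration of this control-state handling follows (hypothetical names; whether to disable or to hide the control is a design choice the embodiment leaves open).

```python
from dataclasses import dataclass

@dataclass
class DrivingControl:
    visible: bool = True
    clickable: bool = True

def apply_driving_mode(control, autopilot_on, hide_when_autopilot=False):
    """Disable (or hide) the driving control while the automatic driving
    mode is active, and restore it once the vehicle reaches the destination
    or otherwise returns to the manual driving mode."""
    if autopilot_on:
        if hide_when_autopilot:
            control.visible = False
        else:
            control.clickable = False
    else:
        control.visible = True
        control.clickable = True
```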
Optionally, the apparatus further comprises:
the second switching module is configured to display a driving mode switching control on the user interface in the automatic driving mode;
and, in response to a triggering operation on the driving mode switching control, switch the virtual vehicle to the manual driving mode, and set the driving control to a clickable state, or resume display of the driving control.
Optionally, the automatic driving area includes a preset road in the virtual environment.
In summary, in the embodiments of the present application, when the virtual vehicle travels into the automatic driving area of the virtual environment in the manual driving mode and a marking operation on the map is received, the virtual vehicle is switched to the automatic driving mode according to the marking operation and controlled to automatically travel to the destination, without the user having to control the virtual vehicle manually; this simplifies the flow of controlling the virtual vehicle and reduces the difficulty of doing so for the user.
In this embodiment, when the marked location set by the user lies outside the automatic driving area, the terminal determines as the destination the place within the automatic driving area closest to the marked location, and then determines the automatic driving path from that destination and the current location, thereby avoiding the abnormal driving that would result from the virtual vehicle automatically traveling into a non-automatic-driving area.
In addition, in this embodiment, waypoints are preset in the automatic driving area; after the automatic driving path is determined, the traveling direction of the virtual vehicle is derived from the waypoints on the path, and the virtual vehicle is then controlled to drive automatically along those directions, which reduces both the difficulty and the amount of computation involved in realizing automatic driving.
Meanwhile, in this embodiment, a collision detection box is arranged for the automatic driving area, so that the collision detection box can be used to determine whether the virtual vehicle is located in the automatic driving area, which helps simplify the determination of the virtual vehicle's position.
In this embodiment, in the automatic driving mode the terminal sets the driving control to a non-clickable state, or cancels display of the driving control, so that the user does not exit the automatic driving mode by touching the driving control by mistake; meanwhile, the terminal displays a driving mode switching control on the user interface, so that the user can exit the automatic driving mode by triggering that control.
Referring to fig. 13, a block diagram of a terminal 1300 according to an exemplary embodiment of the present application is shown. The terminal 1300 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III) or an MP4 player (Moving Picture Experts Group Audio Layer IV). Terminal 1300 may also be referred to by other names such as user equipment or portable terminal.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
The memory 1302 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement a method as provided by embodiments of the present application.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, touch display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
The radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1304 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication networks and protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of successive generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The touch display 1305 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. The touch display 1305 also has the capability to collect touch signals on or over its surface; a touch signal may be input to the processor 1301 as a control signal for processing. The touch display 1305 is further used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there is one touch display 1305, providing the front panel of terminal 1300; in other embodiments, there are at least two touch displays 1305, disposed on different surfaces of terminal 1300 or in a folded design; in still other embodiments, the touch display 1305 is a flexible display disposed on a curved or folded surface of terminal 1300. The touch display 1305 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The touch display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, the front camera is used for video calls or self-portraits, and the rear camera is used for shooting pictures or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera and a wide-angle camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash can be a monochrome-temperature flash or a dual-color-temperature flash; the latter is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1307 is used to provide an audio interface between the user and the terminal 1300. The audio circuit 1307 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the signals to the processor 1301 for processing, or to the radio frequency circuit 1304 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1300. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker can be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans but also into sound waves inaudible to humans, for purposes such as distance measurement. In some embodiments, audio circuit 1307 may also include a headphone jack.
The positioning component 1308 is used to locate the current geographic position of the terminal 1300, so as to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the touch display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side bezel of terminal 1300 and/or underlying touch display 1305. When the pressure sensor 1313 is provided on the side frame of the terminal 1300, a user's grip signal on the terminal 1300 can be detected, and left-right hand recognition or shortcut operation can be performed based on the grip signal. When the pressure sensor 1313 is disposed on the lower layer of the touch display 1305, it is possible to control an operability control on the UI interface according to a pressure operation of the user on the touch display 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user to identify the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 can control the display brightness of the touch display screen 1305 according to the intensity of the ambient light collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display 1305 is turned down. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
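As a trivial illustration of this ambient-light-driven adjustment, the mapping can be sketched as below. The function name, range, and scaling are assumptions for illustration only; the embodiment does not specify a formula.

```python
def adjust_brightness(ambient_lux, lo=0.2, hi=1.0, max_lux=500.0):
    """Map ambient light intensity to a display brightness in [lo, hi]:
    brighter surroundings raise the screen brightness, darker ones lower it."""
    ratio = min(max(ambient_lux / max_lux, 0.0), 1.0)
    return lo + ratio * (hi - lo)
```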
Proximity sensor 1316, also known as a distance sensor, is typically disposed on the front face of terminal 1300. Proximity sensor 1316 is used to gather the distance between the user and the front face of terminal 1300. In one embodiment, the processor 1301 controls the touch display 1305 to switch from the bright screen state to the dark screen state when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually decreases, and controls the touch display 1305 to switch from the dark screen state to the bright screen state when the proximity sensor 1316 detects that this distance gradually increases.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The embodiments of the present application also provide a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the method for driving a vehicle in a virtual environment according to any of the above embodiments.
The present application further provides a computer program product which, when run on a computer, causes the computer to execute the method for driving a vehicle in a virtual environment provided by the above method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (13)
1. A method of driving a vehicle in a virtual environment, the method comprising:
displaying a user interface, wherein the user interface comprises a driving picture and a map, the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
receiving a marking operation on the map in response to the virtual vehicle being located in an automatic driving area in the virtual environment, the marking operation being an operation of marking a place in the map;
and in response to the marking operation, switching the virtual vehicle to an automatic driving mode, and controlling the virtual vehicle to automatically travel to a destination.
2. The method of claim 1, wherein the switching the virtual vehicle to an automatic driving mode and controlling the virtual vehicle to automatically travel to a destination in response to the marking operation comprises:
determining the destination according to a marked location indicated by the marking operation, wherein the destination is located in the automatic driving area;
determining an automatic driving path according to the current location of the virtual vehicle and the destination, wherein the automatic driving path is located in the automatic driving area;
and switching the virtual vehicle to the automatic driving mode, and controlling the virtual vehicle to travel to the destination according to the automatic driving path.
3. The method of claim 2, wherein the controlling the virtual vehicle to travel to the destination according to the automatic driving path comprises:
determining at least two waypoints on the automatic driving path, wherein the waypoints are preset in the automatic driving area;
and controlling the virtual vehicle to travel to the destination according to the waypoint sequence of the at least two waypoints.
4. The method of claim 3, wherein the autonomous driving path includes k waypoints, k being an integer greater than or equal to 2;
the controlling the virtual vehicle to travel to the destination according to the waypoint sequence of the at least two waypoints comprises:
controlling the virtual vehicle to travel from the current location to the 1st waypoint according to a first traveling direction, wherein the first traveling direction points from the current location to the 1st waypoint;
controlling the virtual vehicle to travel from the nth waypoint to the (n+1)th waypoint according to a second traveling direction, wherein the second traveling direction points from the nth waypoint to the (n+1)th waypoint, and n is an integer greater than or equal to 1 and less than or equal to k-1;
and controlling the virtual vehicle to travel from the kth waypoint to the destination according to a third traveling direction, wherein the third traveling direction points from the kth waypoint to the destination.
5. The method of claim 2, wherein the determining the destination according to the marked location indicated by the marking operation comprises:
determining the marked location as the destination in response to the marked location indicated by the marking operation being located in the automatic driving area;
and in response to the marked location indicated by the marking operation being located outside the automatic driving area, determining the place in the automatic driving area closest to the marked location as the destination, or displaying marking prompt information, wherein the marking prompt information is used to prompt setting the marked location within the automatic driving area.
6. The method of any of claims 1 to 5, wherein determining that the virtual vehicle is located in the automatic driving area in the virtual environment comprises:
determining that the virtual vehicle is located in the automatic driving area in response to a collision between a first collision detection box and a second collision detection box, the first collision detection box being the collision detection box corresponding to the virtual vehicle, and the second collision detection box being the collision detection box corresponding to the automatic driving area.
7. The method according to any one of claims 1 to 5, wherein after the switching the virtual vehicle to the automatic driving mode and controlling the virtual vehicle to automatically travel to the destination in response to the marking operation, the method further comprises:
and in response to the virtual vehicle traveling to the destination, switching the virtual vehicle to the manual driving mode, and controlling the virtual vehicle to stop traveling.
8. The method according to any one of claims 1 to 5, wherein in the manual driving mode, a driving control is displayed in the user interface, and the driving control is in a clickable state;
the method further comprises the following steps:
in the automatic driving mode, setting a driving control in the user interface to a non-clickable state, or canceling display of the driving control;
and in response to the virtual vehicle traveling to the destination, setting the driving control to a clickable state, or resuming display of the driving control.
9. The method of claim 8, further comprising:
in the automatic driving mode, displaying a driving mode switching control on the user interface;
and in response to a triggering operation on the driving mode switching control, switching the virtual vehicle to the manual driving mode, and setting the driving control to a clickable state, or resuming display of the driving control.
10. The method of any of claims 1 to 5, wherein the automatic driving area comprises a preset road in the virtual environment.
11. An apparatus for driving a vehicle in a virtual environment, the apparatus comprising:
a display module, configured to display a user interface, wherein the user interface comprises a driving picture and a map, the driving picture is a picture of a virtual object driving a virtual vehicle in a virtual environment, and the virtual vehicle is in a manual driving mode;
a receiving module, configured to receive a marking operation on the map in response to the virtual vehicle being located in an automatic driving area in the virtual environment, where the marking operation refers to an operation of marking a place in the map;
and a control module, configured to switch the virtual vehicle to an automatic driving mode in response to the marking operation, and control the virtual vehicle to automatically travel to a destination.
12. A terminal, characterized in that the terminal comprises: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of driving a vehicle in a virtual environment according to any one of claims 1 to 10.
13. A computer readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of driving a vehicle in a virtual environment according to any one of claims 1 to 10.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010080028.6A CN111228804B (en) | 2020-02-04 | 2020-02-04 | Method, device, terminal and storage medium for driving vehicle in virtual environment |
PCT/CN2020/128377 WO2021155694A1 (en) | 2020-02-04 | 2020-11-12 | Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium |
KR1020227008387A KR102680606B1 (en) | 2020-02-04 | 2020-11-12 | Method and apparatus for driving a means of transportation in a virtual environment, and terminal and storage medium |
JP2022520700A JP7374313B2 (en) | 2020-02-04 | 2020-11-12 | Methods, devices, terminals and programs for driving vehicles in virtual environments |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010080028.6A CN111228804B (en) | 2020-02-04 | 2020-02-04 | Method, device, terminal and storage medium for driving vehicle in virtual environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111228804A true CN111228804A (en) | 2020-06-05 |
CN111228804B CN111228804B (en) | 2021-05-14 |
Family
ID=70878167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010080028.6A Active CN111228804B (en) | 2020-02-04 | 2020-02-04 | Method, device, terminal and storage medium for driving vehicle in virtual environment |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP7374313B2 (en) |
KR (1) | KR102680606B1 (en) |
CN (1) | CN111228804B (en) |
WO (1) | WO2021155694A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111760275A (en) * | 2020-07-08 | 2020-10-13 | 网易(杭州)网络有限公司 | Game control method and device and electronic equipment |
CN112156474A (en) * | 2020-09-25 | 2021-01-01 | 努比亚技术有限公司 | Carrier control method, carrier control equipment and computer-readable storage medium |
CN112386912A (en) * | 2021-01-21 | 2021-02-23 | 博智安全科技股份有限公司 | Ground reconnaissance and visibility adjudication method, terminal device and computer-readable storage medium |
WO2021155694A1 (en) * | 2020-02-04 | 2021-08-12 | 腾讯科技(深圳)有限公司 | Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium |
CN114011073A (en) * | 2021-11-05 | 2022-02-08 | 腾讯科技(深圳)有限公司 | Method, device and equipment for controlling vehicle and computer readable storage medium |
CN115105077A (en) * | 2022-06-22 | 2022-09-27 | 中国人民解放军空军特色医学中心 | System for evaluating individual characteristics of flight personnel |
WO2022227934A1 (en) * | 2021-04-26 | 2022-11-03 | 腾讯科技(深圳)有限公司 | Virtual vehicle control method and apparatus, device, medium, and program product |
US20230117019A1 (en) * | 2021-10-19 | 2023-04-20 | Cyngn, Inc. | System and method of same-loop adaptive simulation for autonomous driving |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114344899A (en) * | 2021-10-26 | 2022-04-15 | 腾讯科技(深圳)有限公司 | Coordinate axis display method, device, terminal and medium applied to virtual environment |
CN115346362B (en) * | 2022-06-10 | 2024-04-09 | 斑马网络技术有限公司 | Driving data processing method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101241507A (en) * | 2008-01-17 | 2008-08-13 | 腾讯科技(深圳)有限公司 | Map road-seeking method and system |
US20090051690A1 (en) * | 2003-06-30 | 2009-02-26 | Microsoft Corporation | Motion line switching in a virtual environment |
CN104931037A (en) * | 2014-03-18 | 2015-09-23 | 厦门高德软件有限公司 | Navigation prompting information generation method and device |
CN106730841A (en) * | 2017-01-17 | 2017-05-31 | 网易(杭州)网络有限公司 | A kind of method for searching and device |
WO2018079764A1 (en) * | 2016-10-31 | 2018-05-03 | 学 秋田 | Portable terminal device, network game system, and race game processing method |
CN110559662A (en) * | 2019-09-12 | 2019-12-13 | 腾讯科技(深圳)有限公司 | Visual angle switching method, device, terminal and medium in virtual environment |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8713696B2 (en) * | 2006-01-13 | 2014-04-29 | Demand Media, Inc. | Method and system for dynamic digital rights bundling |
JP5036573B2 (en) | 2008-01-22 | 2012-09-26 | 日本信号株式会社 | Network generation device for total minimum cost route search, generation method, and route search device using this network |
JP5482280B2 (en) * | 2010-02-18 | 2014-05-07 | ソニー株式会社 | Information processing apparatus, electric vehicle, and discharge management method |
JP6311429B2 (en) | 2014-04-18 | 2018-04-18 | 株式会社デンソー | Automatic operation plan display device and program for automatic operation plan display device |
JP6962076B2 (en) | 2017-09-01 | 2021-11-05 | 株式会社デンソー | Vehicle driving control device and its control method |
CN108245888A (en) * | 2018-02-09 | 2018-07-06 | 腾讯科技(深圳)有限公司 | Virtual object control method, device and computer equipment |
CN109011575B (en) * | 2018-07-04 | 2019-07-02 | 苏州玩友时代科技股份有限公司 | A kind of automatic method for searching, device and equipment |
CN110681156B (en) * | 2019-10-10 | 2021-10-29 | 腾讯科技(深圳)有限公司 | Virtual role control method, device, equipment and storage medium in virtual world |
CN111228804B (en) * | 2020-02-04 | 2021-05-14 | 腾讯科技(深圳)有限公司 | Method, device, terminal and storage medium for driving vehicle in virtual environment |
2020
- 2020-02-04 CN CN202010080028.6A patent/CN111228804B/en active Active
- 2020-11-12 JP JP2022520700A patent/JP7374313B2/en active Active
- 2020-11-12 KR KR1020227008387A patent/KR102680606B1/en active IP Right Grant
- 2020-11-12 WO PCT/CN2020/128377 patent/WO2021155694A1/en active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090051690A1 (en) * | 2003-06-30 | 2009-02-26 | Microsoft Corporation | Motion line switching in a virtual environment |
CN101241507A (en) * | 2008-01-17 | 2008-08-13 | 腾讯科技(深圳)有限公司 | Map road-seeking method and system |
CN104931037A (en) * | 2014-03-18 | 2015-09-23 | 厦门高德软件有限公司 | Navigation prompting information generation method and device |
WO2018079764A1 (en) * | 2016-10-31 | 2018-05-03 | 学 秋田 | Portable terminal device, network game system, and race game processing method |
CN106730841A (en) * | 2017-01-17 | 2017-05-31 | 网易(杭州)网络有限公司 | A kind of method for searching and device |
CN110559662A (en) * | 2019-09-12 | 2019-12-13 | 腾讯科技(深圳)有限公司 | Visual angle switching method, device, terminal and medium in virtual environment |
Non-Patent Citations (2)
Title |
---|
小菜鸡说实事: "PUBG major update: first-aid kits can be used while moving, vehicles can drive automatically", https://baijiahao.baidu.com/s?id=1639466239536661969&wfr=spider&for=pc *
游侠网 (Ali213): "How to use autopilot in Far Cry 5: an analysis of Far Cry 5 autopilot usage tips", https://gl.ali213.net/html/2018-5/235173.html *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021155694A1 (en) * | 2020-02-04 | 2021-08-12 | 腾讯科技(深圳)有限公司 | Method and apparatus for driving traffic tool in virtual environment, and terminal and storage medium |
CN111760275A (en) * | 2020-07-08 | 2020-10-13 | 网易(杭州)网络有限公司 | Game control method and device and electronic equipment |
CN112156474A (en) * | 2020-09-25 | 2021-01-01 | 努比亚技术有限公司 | Carrier control method, carrier control equipment and computer-readable storage medium |
CN112386912A (en) * | 2021-01-21 | 2021-02-23 | 博智安全科技股份有限公司 | Ground reconnaissance and visibility adjudication method, terminal device and computer-readable storage medium |
CN112386912B (en) * | 2021-01-21 | 2021-04-23 | 博智安全科技股份有限公司 | Ground reconnaissance and visibility adjudication method, terminal device and computer-readable storage medium |
WO2022227934A1 (en) * | 2021-04-26 | 2022-11-03 | 腾讯科技(深圳)有限公司 | Virtual vehicle control method and apparatus, device, medium, and program product |
US20230117019A1 (en) * | 2021-10-19 | 2023-04-20 | Cyngn, Inc. | System and method of same-loop adaptive simulation for autonomous driving |
US11760368B2 (en) * | 2021-10-19 | 2023-09-19 | Cyngn, Inc. | System and method of same-loop adaptive simulation for autonomous driving |
CN114011073A (en) * | 2021-11-05 | 2022-02-08 | 腾讯科技(深圳)有限公司 | Method, device and equipment for controlling vehicle and computer readable storage medium |
CN114011073B (en) * | 2021-11-05 | 2023-07-14 | 腾讯科技(深圳)有限公司 | Method, apparatus, device and computer readable storage medium for controlling carrier |
CN115105077A (en) * | 2022-06-22 | 2022-09-27 | 中国人民解放军空军特色医学中心 | System for evaluating individual characteristics of flight personnel |
Also Published As
Publication number | Publication date |
---|---|
KR20220046651A (en) | 2022-04-14 |
CN111228804B (en) | 2021-05-14 |
KR102680606B1 (en) | 2024-07-03 |
WO2021155694A1 (en) | 2021-08-12 |
JP2022551112A (en) | 2022-12-07 |
JP7374313B2 (en) | 2023-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111228804B (en) | Method, device, terminal and storage medium for driving vehicle in virtual environment | |
CN111265869B (en) | Virtual object detection method, device, terminal and storage medium | |
CN110115838B (en) | Method, device, equipment and storage medium for generating mark information in virtual environment | |
CN110694261B (en) | Method, terminal and storage medium for controlling virtual object to attack | |
CN110448891B (en) | Method, device and storage medium for controlling virtual object to operate remote virtual prop | |
CN111589142B (en) | Virtual object control method, device, equipment and medium | |
CN111408133B (en) | Interactive property display method, device, terminal and storage medium | |
CN110665230B (en) | Virtual role control method, device, equipment and medium in virtual world | |
CN110917619B (en) | Interactive property control method, device, terminal and storage medium | |
CN110755841B (en) | Method, device and equipment for switching props in virtual environment and readable storage medium | |
CN110507994B (en) | Method, device, equipment and storage medium for controlling flight of virtual aircraft | |
CN110613938B (en) | Method, terminal and storage medium for controlling virtual object to use virtual prop | |
CN112121422B (en) | Interface display method, device, equipment and storage medium | |
CN110694273A (en) | Method, device, terminal and storage medium for controlling virtual object to use prop | |
CN111035918A (en) | Reconnaissance interface display method and device based on virtual environment and readable storage medium | |
CN108786110B (en) | Method, device and storage medium for displaying sighting telescope in virtual environment | |
CN110681156B (en) | Virtual role control method, device, equipment and storage medium in virtual world | |
CN111338534A (en) | Virtual object game method, device, equipment and medium | |
CN111672106B (en) | Virtual scene display method and device, computer equipment and storage medium | |
CN111013137B (en) | Movement control method, device, equipment and storage medium in virtual scene | |
CN114130023A (en) | Virtual object switching method, device, equipment, medium and program product | |
CN110812841B (en) | Method, device, equipment and medium for judging virtual surface in virtual world | |
CN113041619A (en) | Control method, device, equipment and medium for virtual vehicle | |
CN112755517A (en) | Virtual object control method, device, terminal and storage medium | |
US20220184506A1 (en) | Method and apparatus for driving vehicle in virtual environment, terminal, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40023651; Country of ref document: HK |
GR01 | Patent grant | ||