WO2022227958A1 - Display method, apparatus, device and storage medium for virtual vehicle - Google Patents

Display method, apparatus, device and storage medium for virtual vehicle (虚拟载具的显示方法、装置、设备以及存储介质)

Info

Publication number
WO2022227958A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
target
parts
vehicle
display
Prior art date
Application number
PCT/CN2022/082663
Other languages
English (en)
French (fr)
Inventor
黄晓权
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2022227958A1
Priority to US17/987,302 (published as US20230072503A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/57Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/57Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/63Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/69Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/837Shooting of targets
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308Details of the user interface
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076Shooting
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality

Definitions

  • the present application relates to the field of computer technology, and in particular, to a display method, apparatus, device, and storage medium for a virtual vehicle.
  • Embodiments of the present application provide a display method, apparatus, device, and storage medium for a virtual vehicle, which can improve the efficiency of human-computer interaction.
  • the technical solution is as follows:
  • a method for displaying a virtual vehicle, comprising:
  • displaying a parts display area in a virtual scene in response to a part display instruction, where the parts display area is used to display the virtual parts possessed by a controlled virtual object;
  • displaying a synthesis control in the virtual scene when a plurality of virtual parts displayed in the parts display area meet target conditions; and
  • in response to a triggering operation on the synthesis control, displaying a first target vehicle in the virtual scene, where the first target vehicle is a virtual vehicle synthesized from the plurality of virtual parts.
  • a display device for a virtual vehicle comprising:
  • an area display module used for displaying the parts display area in the virtual scene in response to the part display instruction, where the parts display area is used to display the virtual parts possessed by the controlled virtual object;
  • a control display module, configured to display a synthesis control in the virtual scene when a plurality of virtual parts displayed in the parts display area meet the target conditions;
  • a vehicle display module, configured to display a first target vehicle in the virtual scene in response to a triggering operation on the synthesis control, where the first target vehicle is a virtual vehicle synthesized from the plurality of virtual parts.
  • a computer device, comprising one or more processors and one or more memories, wherein the one or more memories store at least one computer program, and the computer program is loaded and executed by the one or more processors to implement the display method of the virtual vehicle.
  • a computer-readable storage medium is provided, and at least one computer program is stored in the computer-readable storage medium, and the computer program is loaded and executed by a processor to implement the display method of the virtual vehicle.
  • a computer program product or computer program, comprising program code stored in a computer-readable storage medium; a processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the above display method of the virtual vehicle.
  • the technical solutions provided by the embodiments of the present application can intuitively display the virtual parts already owned by the controlled virtual object by displaying the parts display area in the virtual scene. When the virtual parts already owned by the controlled virtual object meet the target conditions, the synthesis control is displayed, so that by triggering the synthesis control the multiple virtual parts can be synthesized into a virtual vehicle and the terminal can display the virtual vehicle in the virtual scene. Since the display of virtual parts is intuitive and efficient, users can view virtual parts more efficiently; and since a virtual vehicle can be synthesized simply by clicking the synthesis control, the operation of synthesizing a virtual vehicle is simple and efficient, that is, the efficiency of human-computer interaction is high.
  • FIG. 1 is a schematic diagram of an implementation environment of a method for displaying a virtual vehicle provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of a method for displaying a virtual vehicle provided by an embodiment of the present application
  • FIG. 5 is a flowchart of a method for displaying a virtual vehicle provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 11 is a flowchart of a method for displaying a virtual vehicle provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 15 is a flowchart of a method for displaying a virtual vehicle provided by an embodiment of the present application.
  • FIG. 16 is a flowchart of a method for displaying a virtual vehicle provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an interface provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a display device for a virtual vehicle provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • the term "at least one” refers to one or more, and the meaning of "plurality” refers to two or more.
  • a plurality of face images refers to two or more face images.
  • Virtual scene is the virtual scene displayed (or provided) when the application is running on the terminal.
  • the virtual scene is a simulated environment of the real world, or a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment.
  • the virtual scene is any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the embodiment of the present application does not limit the dimension of the virtual scene.
  • the virtual scene includes sky, land, ocean, etc.
  • the land includes environmental elements such as desert and city, and the user can control the virtual object to move in the virtual scene.
  • Virtual object refers to the movable object in the virtual scene.
  • the movable objects are virtual characters, virtual animals, cartoon characters, etc., such as characters, animals, plants, oil barrels, walls, stones, etc. displayed in the virtual scene.
  • the virtual object is a virtual avatar representing the user in the virtual scene.
  • the virtual scene can include multiple virtual objects, and each virtual object has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
  • the virtual object is a user character controlled by an operation on the client, an artificial intelligence (AI) set in the virtual scene battle through training, or a non-player character (NPC) set in the virtual scene.
  • the virtual object is a virtual character competing in a virtual scene.
  • the number of virtual objects participating in the interaction in the virtual scene is preset, or dynamically determined according to the number of clients participating in the interaction.
  • users can control virtual objects to freely fall, glide or open a parachute in the sky of the virtual scene, and run, jump, crawl, bend forward, etc. on the land.
  • Virtual objects swim, float or dive in the ocean.
  • users can also control virtual objects to move in the virtual scene on a virtual vehicle.
  • the virtual vehicle is a virtual car, a virtual aircraft, a virtual yacht, etc.
  • the foregoing scenario is used as an example for illustration here, which is not specifically limited in this embodiment of the present application.
  • Users can also control the interaction between virtual objects and other virtual objects through interactive props.
  • the interactive props include throwing interactive props such as grenades and cluster grenades, and shooting interactive props such as pistols and rifles; this application does not specifically limit the types of interactive props.
  • virtual vehicles are often configured by planners and set at different positions in the virtual scene, and the user can control the virtual object to drive the virtual vehicle by controlling the virtual object to approach the virtual vehicle.
  • the user cannot decide by himself which virtual vehicle to use, and can only drive whichever virtual vehicle the controlled virtual object happens to encounter in the virtual scene, resulting in low efficiency of human-computer interaction.
  • FIG. 1 is a schematic diagram of an implementation environment of a virtual vehicle display method provided by an embodiment of the present application.
  • the implementation environment includes: a first terminal 120 , a second terminal 140 , and a server 160 .
  • the first terminal 120 has an application program that supports virtual scene display installed and running.
  • the application is any one of a first-person shooter (FPS) game, a third-person shooter game, a virtual reality application, a three-dimensional map program, or a multiplayer shootout-type survival game.
  • the first terminal 120 is a terminal used by the first user.
  • the first user uses the first terminal 120 to operate the controlled virtual object located in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • the controlled virtual object is a first virtual character, such as a simulated character or an anime character.
  • the first terminal 120 and the second terminal 140 are connected to the server 160 through a wireless network or a wired network.
  • the second terminal 140 has an application program supporting virtual scene display installed and running.
  • the application is any one of an FPS, a third-person shooter, a virtual reality application, a three-dimensional map program, or a multiplayer shootout-type survival game.
  • the second terminal 140 is a terminal used by the second user.
  • the second user uses the second terminal 140 to operate another virtual object located in the virtual scene to perform activities, including but not limited to at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • the virtual object controlled by the second terminal 140 is a second virtual character, such as a simulated character or an anime character.
  • the virtual object controlled by the first terminal 120 and the virtual object controlled by the second terminal 140 are in the same virtual scene; in this case, the virtual object controlled by the first terminal 120 can interact with the virtual object controlled by the second terminal 140 in the virtual scene.
  • the virtual object controlled by the first terminal 120 and the virtual object controlled by the second terminal 140 are in a hostile relationship; for example, they belong to different teams. Virtual objects in a hostile relationship can interact with each other on land by shooting at each other.
  • the applications installed on the first terminal 120 and the second terminal 140 are the same, or the applications installed on the two terminals are the same type of applications on different operating system platforms.
  • the first terminal 120 generally refers to one of the multiple terminals
  • the second terminal 140 generally refers to one of the multiple terminals. In this embodiment, only the first terminal 120 and the second terminal 140 are used as examples for illustration.
  • the device types of the first terminal 120 and the second terminal 140 are the same or different, and the device types include at least one of a smart phone, a tablet computer, a laptop computer and a desktop computer.
  • the first terminal 120 and the second terminal 140 are smart phones, or other handheld portable game devices but are not limited thereto.
  • a terminal is used to refer to the first terminal or the second terminal.
  • the server 160 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms, which is not limited in this embodiment of the present application.
  • the virtual scene displayed by the computer equipment in the present application is first introduced.
  • game designers design the way the computer device displays the virtual scene with reference to the way humans observe the real world.
  • the controlled virtual object 201 can observe the virtual scene in the area 202, and the picture obtained by observing the area 202 from the controlled virtual object 201's angle is the virtual scene displayed by the computer device.
  • the user can adjust the area observed by the controlled virtual object 201 by adjusting the position and orientation of the controlled virtual object 201.
  • the virtual scene displayed by the computer device also displays controls for controlling the controlled virtual object to perform different actions.
  • the virtual scene 301 displayed by the computer device displays a virtual joystick 302, a posture adjustment control 303, a shooting control 304 and a prop switching control 305, wherein the virtual joystick 302 is used to control the moving direction of the controlled virtual object.
  • the posture adjustment control 303 is used to adjust the posture of the controlled virtual object, for example, to control the virtual object to perform actions such as squatting or crawling.
  • the shooting control 304 is used to control the interactive props held by the controlled virtual object to fire virtual ammunition.
  • the prop switching control 305 is used to switch the target prop.
  • the user can control the controlled virtual object to throw the target prop through the shooting control 304 .
  • Reference numeral 306 denotes a minimap; the user can observe the positions of teammates and enemies in the virtual scene through the minimap 306.
  • In the following method embodiments, a terminal is used as the execution subject as an example; the terminal here is the first terminal 120 or the second terminal 140 in the above implementation environment.
  • the technical solutions provided in the present application can be executed through the interaction between the terminal and the server, and the embodiment of the present application does not limit the type of the execution subject.
  • FIG. 4 is a flowchart of a method for displaying a virtual vehicle provided by an embodiment of the present application. Referring to FIG. 4 , the method includes:
  • In response to the part display instruction, the terminal displays a parts display area in the virtual scene, and the parts display area is used to display the virtual parts owned by the controlled virtual object.
  • the virtual parts are the parts from which the user synthesizes the virtual vehicle, and the virtual vehicle includes multiple types, such as a virtual tank, a virtual car, a virtual motorcycle, and a virtual yacht. If the virtual vehicle is a virtual tank, the virtual parts are the parts used to synthesize the virtual tank.
  • When the plurality of virtual parts displayed in the parts display area meet the target conditions, the terminal displays the synthesis control in the virtual scene.
  • the synthesis control is a button displayed on the screen, and the user can trigger the button to make the terminal perform the corresponding steps.
  • In response to the triggering operation on the synthesis control, the terminal displays a first target vehicle in the virtual scene, where the first target vehicle is a virtual vehicle synthesized from the plurality of virtual parts.
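  • As a minimal illustrative sketch of the three steps above (all names are hypothetical and not the claimed implementation), the terminal-side flow can be organized roughly as follows:

```python
# Minimal sketch of the FIG. 4 flow; names are hypothetical placeholders
# that only illustrate the three described steps.

def on_part_display_instruction(scene, controlled_object):
    # Step 1: in response to the part display instruction, display the
    # parts display area with the virtual parts the object already owns.
    scene.show_parts_area(controlled_object.parts)
    # Step 2: when the displayed parts meet the target conditions,
    # display the synthesis control in the virtual scene.
    if meets_target_conditions(controlled_object.parts):
        scene.show_synthesis_control()

def on_synthesis_control_triggered(scene, controlled_object):
    # Step 3: in response to a trigger on the synthesis control,
    # synthesize the parts into the first target vehicle and display it.
    vehicle = scene.synthesize_vehicle(controlled_object.parts)
    scene.display_vehicle(vehicle)

def meets_target_conditions(parts):
    # Detailed in a later embodiment: the owned part types cover every
    # required part type (example types from the virtual tank case).
    required = {"chassis", "engine", "armor", "barrel", "secondary_weapon"}
    return {p.part_type for p in parts} >= required
```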
  • Through the technical solution provided by this embodiment of the present application, by displaying the parts display area in the virtual scene, the virtual parts already owned by the controlled virtual object can be displayed intuitively. Further, when the multiple virtual parts in the parts display area, that is, the virtual parts already owned by the controlled virtual object, meet the target conditions, a synthesis control is displayed, so that by triggering the synthesis control the multiple virtual parts can be synthesized into a virtual vehicle and the terminal can display the virtual vehicle in the virtual scene. Since the display of virtual parts is intuitive and efficient, users can view virtual parts more efficiently; and since a virtual vehicle can be synthesized simply by clicking the synthesis control, the operation of synthesizing a virtual vehicle is simple and efficient, that is, the efficiency of human-computer interaction is high.
  • FIG. 5 is a flowchart of a method for displaying a virtual vehicle provided by an embodiment of the present application. Referring to FIG. 5 , the method includes:
  • the terminal controls a controlled virtual object to acquire a virtual part, and the controlled virtual object is a virtual object controlled by the terminal.
  • the virtual parts correspond to different parts of the virtual vehicle.
  • the virtual vehicle is a virtual tank
  • the virtual parts correspond to the chassis, engine, armor, gun barrel, and secondary weapons of the virtual tank, respectively.
  • In some embodiments, a virtual vending machine is displayed in the virtual scene, and the virtual vending machine is used to provide virtual parts. When the distance between the controlled virtual object and the virtual vending machine is less than or equal to a first distance threshold, the terminal displays a part selection interface in the virtual scene, and a plurality of candidate virtual parts are displayed on the part selection interface.
  • In response to a selection operation on any virtual part displayed in the part selection interface, the terminal determines the virtual part as a virtual part owned by the controlled virtual object.
  • the first distance threshold is set by the technician according to the actual situation, for example, it is set to 30 or 50, which is not limited in this embodiment of the present application.
  • the terminal can display a part selection interface in the virtual scene, and the user can select virtual parts for the controlled virtual object in the part selection interface.
  • the first part describes the manner in which the terminal displays a part selection interface when the distance between the controlled virtual object and the virtual vending machine is less than or equal to the first distance threshold.
  • In some embodiments, a plurality of invisible collision detection boxes are arranged around the virtual vending machine; the collision detection boxes do not block virtual objects moving in the virtual scene, and the farthest distance between each collision detection box and the virtual vending machine is the first distance threshold.
  • When the controlled virtual object comes into contact with any collision detection box, the terminal determines that the distance between the controlled virtual object and the virtual vending machine is less than or equal to the first distance threshold, and displays a part selection interface in the virtual scene.
  • the way in which the terminal determines that the controlled virtual object is in contact with the collision detection box is to determine whether there is an overlap between the model of the controlled virtual object and the collision detection box. In the case of overlapping parts, the terminal determines that the controlled virtual object is in contact with the collision detection box.
  • the terminal can divide the virtual scene into multiple invisible grids, and when the controlled virtual object continuously moves in the virtual scene, it can span different grids.
  • When the controlled virtual object enters a grid near the virtual vending machine, the terminal can determine the distance between the controlled virtual object and the virtual vending machine in real time. When the distance is less than or equal to the first distance threshold, the terminal displays a part selection interface in the virtual scene.
  • the terminal can determine the distance between the controlled virtual object and the virtual vending machine according to the coordinates of the controlled virtual object in the virtual scene and the coordinates of the virtual vending machine in the virtual scene. In this way, the terminal does not need to determine the distance between the controlled virtual object and the virtual vending machine in real time, and only needs to start the detection when the controlled virtual object enters a specific grid, which reduces the consumption of computing resources of the terminal.
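  • The grid-based optimization described above can be sketched roughly as follows (a minimal illustration with hypothetical names; the cell size and threshold values are assumptions):

```python
import math

CELL_SIZE = 10.0                 # assumed edge length of an invisible grid cell
FIRST_DISTANCE_THRESHOLD = 30.0  # example value from the embodiment

def cell_of(position):
    """Map a scene coordinate (x, z) to an invisible grid cell."""
    x, z = position
    return (int(x // CELL_SIZE), int(z // CELL_SIZE))

def cells_near(center, radius):
    """All grid cells within `radius` of a point, e.g. around a virtual vending machine."""
    cx, cz = cell_of(center)
    span = int(math.ceil(radius / CELL_SIZE))
    return {(cx + dx, cz + dz)
            for dx in range(-span, span + 1)
            for dz in range(-span, span + 1)}

def should_check_distance(object_pos, vending_machine_pos):
    # Real-time distance checks start only when the controlled virtual
    # object enters a grid cell near the virtual vending machine.
    return cell_of(object_pos) in cells_near(vending_machine_pos,
                                             FIRST_DISTANCE_THRESHOLD)

def within_first_threshold(object_pos, vending_machine_pos):
    dx = object_pos[0] - vending_machine_pos[0]
    dz = object_pos[1] - vending_machine_pos[1]
    return math.hypot(dx, dz) <= FIRST_DISTANCE_THRESHOLD
```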
  • The second part describes the manner in which, in response to a selection operation on any virtual part, the terminal determines the virtual part as a virtual part owned by the controlled virtual object.
  • In some embodiments, in response to a selection operation on any virtual part displayed in the part selection interface, the terminal sends a part addition request to the server, where the part addition request carries the identifier of the selected virtual part and the identifier of the controlled virtual object. When receiving the part addition request, the server obtains the identifier of the virtual part and the identifier of the controlled virtual object from the part addition request, and establishes a binding relationship between the identifier of the virtual part and the identifier of the controlled virtual object. In other words, the server determines the selected virtual part as a virtual part owned by the controlled virtual object; this process is called adding a virtual part to the controlled virtual object.
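  • A minimal sketch of this exchange is given below; the message fields and in-memory storage are assumptions for illustration, not the actual protocol:

```python
# Terminal side: send a part addition request carrying the identifier of the
# selected virtual part and the identifier of the controlled virtual object.
def send_part_add_request(network, part_id, controlled_object_id):
    network.send({
        "type": "part_add",
        "part_id": part_id,
        "object_id": controlled_object_id,
    })

# Server side: establish a binding relationship between the virtual part
# identifier and the controlled virtual object identifier.
owned_parts = {}  # object_id -> set of bound part_ids

def handle_part_add_request(request):
    part_id = request["part_id"]
    object_id = request["object_id"]
    owned_parts.setdefault(object_id, set()).add(part_id)
```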
  • For example, referring to FIG. 6, a virtual vending machine 601 is displayed in the virtual scene, and when the distance between the controlled virtual object 602 and the virtual vending machine 601 is less than or equal to the first distance threshold, the terminal displays a part selection interface 701 in the virtual scene, as shown in FIG. 7.
  • a plurality of virtual parts to be selected are displayed on the part selection interface 701 .
  • In response to a selection operation on the virtual part 702, the terminal determines the selected virtual part 702 as a virtual part owned by the controlled virtual object 602.
  • In some embodiments, in response to a selection operation on any virtual part among the plurality of candidate virtual parts displayed in the part selection interface, the selected virtual part is used to replace the virtual part of the same part type already owned by the controlled virtual object. In other words, the controlled virtual object can have only one virtual part under each part type. If the controlled virtual object already has a virtual part of a certain part type, when the user selects a virtual part of the same part type for the controlled virtual object in the part selection interface, the virtual part originally owned by the controlled virtual object is overridden by the newly selected virtual part of the same part type.
  • different virtual parts under the same part type have different attributes, and the user can replace some virtual parts owned by the controlled virtual object through the selection operation in the part selection interface, so that the finally synthesized virtual vehicle has specific attributes.
  • the attribute is used to represent the performance value of the synthesized virtual vehicle, such as the speed of the virtual vehicle, the steering difficulty of the virtual vehicle and other values, and these values are also the attributes of the virtual vehicle.
  • the terminal can display the replaced virtual part in the virtual scene, and other users can control the virtual object to pick up the virtual part.
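  • A sketch of this same-part-type replacement, assuming a hypothetical inventory keyed by part type (one part per type); dropping the replaced part into the scene follows the preceding paragraph:

```python
def select_part(inventory, new_part, scene, object_position):
    """Replace the owned part of the same part type with the newly selected one.

    inventory: dict mapping part_type -> part, so at most one part per type.
    Returns the replaced part, if any.
    """
    replaced = inventory.get(new_part.part_type)
    inventory[new_part.part_type] = new_part
    if replaced is not None:
        # The replaced virtual part is displayed in the virtual scene so
        # that other virtual objects can pick it up.
        scene.drop_part(replaced, object_position)
    return replaced
```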
  • In some embodiments, after the terminal determines any virtual part as a virtual part possessed by the controlled virtual object, in response to a selection operation on another virtual part among the plurality of candidate virtual parts, the terminal displays second prompt information, where the second prompt information is used to prompt that the virtual part cannot be selected.
  • In other words, at one virtual vending machine, the user can select only one virtual part for the controlled virtual object. If one virtual vending machine could provide all the virtual parts, the user who found that virtual vending machine first could synthesize the virtual vehicle first, which would give that user an excessive confrontation advantage over other users and make the game unbalanced. By limiting the number of virtual parts provided by each virtual vending machine, the balance of the game can be improved: users need to find multiple virtual vending machines in the virtual scene to collect virtual parts.
  • In some embodiments, when the health value of any virtual object meets the target health value condition, the terminal displays a plurality of virtual parts owned by the virtual object at a target drop position, where the target drop position is the location of the virtual object in the virtual scene.
  • When the distance between the controlled virtual object and the target drop position is less than or equal to a second distance threshold, the terminal determines a plurality of virtual parts of the first type as virtual parts possessed by the controlled virtual object.
  • the first type of virtual part refers to a virtual part corresponding to a part type that is not yet owned by the controlled virtual object among the plurality of virtual parts owned by the virtual object.
  • the health value conforming to the target health value condition means that the health value is 0 or the health value is less than or equal to the health value threshold. In some embodiments, if a virtual object has a health value of 0, the state of the virtual object is said to be defeated or killed.
  • In other words, when the user controls the controlled virtual object to approach, in the virtual scene, any virtual object whose health value meets the target health value condition, the terminal can control the controlled virtual object to automatically pick up the virtual parts dropped by that virtual object.
  • In addition, the terminal can control the controlled virtual object to pick up only the virtual parts corresponding to the part types it does not yet own, so as to ensure that the controlled virtual object has only one virtual part under each part type.
  • the first part describes how the terminal displays a plurality of virtual parts owned by the virtual object at the target drop position.
  • the terminal when the health value of any virtual object in the virtual scene is 0, that is, when the status of the virtual object is defeated, the terminal displays on the position where the virtual object is defeated that the virtual object has of multiple virtual parts. Wherein, since the virtual object cannot continue to move in the virtual scene after being defeated, the position where the virtual object is defeated is also the target drop position. In some embodiments, the health value is also called life value or blood volume, which is not limited in this embodiment of the present application.
  • the terminal when the health value of any virtual object in the virtual scene is greater than zero and less than or equal to the health value threshold, the terminal can perform any of the following:
  • the terminal displays the plurality of virtual parts owned by the virtual object at the position where the health value of the virtual object became less than or equal to the health value threshold. This position is the target drop position, and the virtual parts do not move as the virtual object moves.
  • alternatively, the terminal displays the plurality of virtual parts owned by the virtual object around the virtual object, and the position where the virtual object is located is the target drop position.
  • In this case, the virtual parts can move as the virtual object moves.
  • The second part describes how the terminal determines the distance between the controlled virtual object and the target drop position.
  • In some embodiments, the terminal sets a plurality of invisible collision detection boxes around the target drop position; the collision detection boxes do not block virtual objects moving in the virtual scene, and the farthest distance between each collision detection box and the target drop position is the second distance threshold.
  • When the controlled virtual object comes into contact with any collision detection box, the terminal determines that the distance between the controlled virtual object and the target drop position is less than or equal to the second distance threshold.
  • the manner in which the terminal determines that the controlled virtual object is in contact with the collision detection box is to determine whether there is an overlap between the model of the controlled virtual object and the collision detection box. In the case where the model of the controlled virtual object and the collision detection box overlap, the terminal determines that the controlled virtual object is in contact with the collision detection box.
  • the terminal divides the virtual scene into multiple invisible grids, and when the controlled virtual object continuously moves in the virtual scene, it can span different grids.
  • When the controlled virtual object enters a grid near the target drop position, the terminal can determine the distance between the controlled virtual object and the target drop position in real time.
  • the terminal can determine the distance between the controlled virtual object and the target drop position according to the coordinates of the controlled virtual object in the virtual scene and the coordinates of the target drop position in the virtual scene. In this way, the terminal does not need to determine the distance between the controlled virtual object and the target drop position in real time, and only needs to start the detection when the controlled virtual object enters a specific grid, which reduces the consumption of computing resources of the terminal.
  • the third part describes the manner in which the terminal determines a plurality of virtual parts of the first type as virtual parts possessed by the controlled virtual object.
  • the terminal determines a plurality of virtual parts of the first type from a plurality of virtual parts possessed by the virtual object.
  • the terminal sends a part addition request to the server, and the part addition request carries the identifiers of a plurality of virtual parts of the first type and the identifiers of the controlled virtual objects.
  • The server obtains the identifiers of the plurality of virtual parts of the first type and the identifier of the controlled virtual object from the part addition request, and establishes a binding relationship between the identifiers of the plurality of virtual parts of the first type and the identifier of the controlled virtual object. In other words, the plurality of virtual parts of the first type are determined as virtual parts owned by the controlled virtual object; this process is called adding virtual parts to the controlled virtual object.
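  • A sketch of the first-type filtering and batched addition described above (hypothetical names and message fields):

```python
def first_type_parts(dropped_parts, inventory):
    """Keep only dropped parts whose part type the controlled virtual object
    does not own yet (the 'first type' of virtual parts)."""
    owned_types = set(inventory.keys())  # inventory: part_type -> part
    return [p for p in dropped_parts if p.part_type not in owned_types]

def auto_pick_up(network, dropped_parts, inventory, controlled_object_id):
    parts = first_type_parts(dropped_parts, inventory)
    if parts:
        # One part addition request carrying the identifiers of all
        # first-type virtual parts and of the controlled virtual object.
        network.send({
            "type": "part_add",
            "part_ids": [p.part_id for p in parts],
            "object_id": controlled_object_id,
        })
    return parts
```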
  • In some embodiments, when the health value of any virtual object meets the target health value condition, the terminal displays a plurality of virtual parts owned by the virtual object at a target drop position, where the target drop position is the location of the virtual object in the virtual scene.
  • When the distance between the controlled virtual object and the target drop position is less than or equal to the second distance threshold, the terminal displays a part picking interface in the virtual scene, and a plurality of virtual parts of the second type are displayed on the part picking interface.
  • the multiple virtual parts of the second type are virtual parts corresponding to the part type already owned by the controlled virtual object among the multiple virtual parts owned by any virtual object.
  • In response to a selection operation on any displayed virtual part of the second type, the terminal uses the selected virtual part to replace the virtual part of the same part type owned by the controlled virtual object.
  • In this way, when the controlled virtual object is close to the target drop position, the user can replace a virtual part of a certain part type owned by the controlled virtual object through the part picking interface displayed by the terminal. The replacement method is simple and convenient, and the efficiency of human-computer interaction is higher.
  • In response to the parts display instruction, the terminal displays a parts display area in the virtual scene, and the parts display area is used to display the virtual parts owned by the controlled virtual object.
  • For example, referring to FIG. 8, in response to the part display instruction, the terminal displays a parts display area 801 in the virtual scene, and the parts display area displays the virtual parts owned by the controlled virtual object.
  • the part presentation instruction is triggered by any of the following:
  • Mode 1: in response to a click operation on the virtual part display control displayed in the virtual scene, the terminal triggers the part display instruction.
  • the parts display control is also the backpack display control. After the user clicks the backpack display control, the terminal can not only display the parts display area, but also display the virtual backpack interface of the controlled virtual object. The virtual backpack interface displays the virtual props owned by the controlled virtual object, such as virtual firearms, virtual ammunition, and virtual bulletproof vests.
  • the parts display area is an area in the virtual backpack interface, which is not limited in this embodiment of the present application.
  • Mode 2: when the distance between the controlled virtual object and the virtual vending machine is less than or equal to the first distance threshold, the terminal triggers the part display instruction. In other words, when the controlled virtual object approaches the virtual vending machine in the virtual scene, the terminal can directly trigger the parts display instruction.
  • a plurality of virtual part display grids are displayed in the part display area, and each virtual part display grid is used to display virtual parts of one part type.
  • the terminal can display the virtual part in the corresponding virtual part display grid according to the part type of the acquired virtual part.
  • a plurality of virtual parts display grids 802 are displayed in the parts display area 801 , and virtual parts of corresponding part types are displayed in each virtual part display grid.
  • each grid is used to display a virtual part.
  • In this way, the terminal can use different grids to mark the part types of the virtual parts owned by the controlled virtual object, so that the user can more intuitively confirm which part types the controlled virtual object already has. This improves the display efficiency of the virtual parts, and the efficiency of human-computer interaction is higher.
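  • A sketch of keying the display grids by part type (the grid order and type labels are assumptions for the virtual tank case):

```python
# One display grid per part type; each grid shows at most one virtual part.
PART_TYPE_ORDER = ["chassis", "engine", "armor", "barrel", "secondary_weapon"]

def build_parts_display_area(inventory):
    """inventory: dict mapping part_type -> part (missing if not owned).

    Returns (part_type, part_or_None) pairs in display order, so each owned
    virtual part lands in the grid of its own part type."""
    return [(part_type, inventory.get(part_type)) for part_type in PART_TYPE_ORDER]
```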
  • When the plurality of virtual parts displayed in the parts display area meet the target conditions, the terminal displays the synthesis control in the virtual scene.
  • In some embodiments, in response to the plurality of virtual parts displayed in the parts display area meeting the target conditions, the terminal converts the plurality of virtual parts into a first target prop. In response to obtaining the first target prop through conversion, the terminal displays a synthesis control in the virtual scene.
  • the first part describes how the terminal converts multiple virtual parts into a first target prop when multiple virtual parts displayed in the parts display area meet the target conditions.
  • That the multiple virtual parts meet the target conditions means at least one of the following: the quantity of the multiple virtual parts meets the target quantity condition, and the part types corresponding to the multiple virtual parts meet the target part type condition.
  • The quantity meeting the target quantity condition means that the quantity of the multiple virtual parts is greater than or equal to the target quantity threshold; the part types meeting the target part type condition means that the part types of the multiple virtual parts match the preset multiple part types.
  • the preset part types include five types of chassis, engine, armor, gun barrel, and secondary weapon. Multiple virtual parts correspond to these five part types, and then multiple virtual parts also meet the target part type conditions.
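  • A sketch of this check, using the five part types and a quantity threshold as example values (whether one or both conditions are required follows the "at least one of" wording above):

```python
REQUIRED_PART_TYPES = {"chassis", "engine", "armor", "barrel", "secondary_weapon"}
TARGET_QUANTITY_THRESHOLD = 5  # assumed equal to the number of required types

def quantity_condition(parts):
    # Target quantity condition: at least the target number of virtual parts.
    return len(parts) >= TARGET_QUANTITY_THRESHOLD

def part_type_condition(parts):
    # Target part type condition: the owned part types cover the preset types.
    return {p.part_type for p in parts} >= REQUIRED_PART_TYPES

def meets_target_conditions(parts):
    # Per the embodiment, meeting at least one of the two conditions suffices.
    return quantity_condition(parts) or part_type_condition(parts)
```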
  • In some embodiments, when the multiple virtual parts displayed in the parts display area meet the target conditions, the terminal converts the multiple virtual parts into a virtual vehicle blueprint, and the virtual vehicle blueprint is the first target prop.
  • the terminal can display the virtual vehicle blueprint in the parts display area, and the user can determine the virtual vehicle blueprint possessed by the controlled virtual object by viewing the parts display area.
  • In some embodiments, when the terminal converts the multiple virtual parts into a virtual vehicle blueprint, it can also cancel the display of the multiple virtual parts in the parts display area; in this way, the effect that the multiple virtual parts are converted into a virtual vehicle blueprint is presented.
  • different types of first target props correspond to different types of virtual vehicles, and the types of the first target props are determined by a plurality of virtual parts before conversion.
  • the types of virtual vehicles include three major categories: light tanks, medium tanks, and heavy tanks, each of which includes multiple subcategories.
  • For example, the light tank category includes subcategories such as light tank type 1, light tank type 2, and light tank type 3. If the virtual parts selected by the user all correspond to light tanks, the first target prop obtained through conversion also corresponds to a light tank.
  • For example, referring to FIG. 9, in response to the plurality of virtual parts displayed in the parts display area meeting the target conditions, the terminal converts the plurality of virtual parts into a first target prop 901.
  • the second part describes the manner in which the terminal displays the synthesis control in the virtual scene when the first target prop is obtained through transformation.
  • For example, referring to FIG. 10, in response to obtaining the first target prop through conversion, the terminal displays a synthesis control 1001 in the virtual scene.
  • the user can control the terminal to display the virtual vehicle in the virtual scene by clicking the synthesis control 1001.
  • a virtual vending machine is displayed in the virtual scene.
  • the virtual vending machine is also called a tank parts shopping machine.
  • the user can select the desired virtual part through the displayed part selection interface.
  • the terminal puts the virtual part selected by the user into the virtual backpack of the controlled virtual object, and the controlled virtual object also acquires the virtual part.
  • the terminal converts the multiple virtual parts owned by the controlled virtual object into a virtual vehicle blueprint. If the virtual vehicle is a virtual tank, the virtual vehicle blueprint is also called a virtual tank blueprint.
  • In response to the triggering operation on the synthesis control, the terminal displays a first target vehicle in the virtual scene, where the first target vehicle is a virtual vehicle synthesized from the plurality of virtual parts.
  • the virtual vehicle includes various types, for example, the virtual vehicle includes a virtual motorcycle, a virtual car, a virtual yacht, and a virtual tank, etc.
  • In this embodiment, the virtual vehicle being a virtual tank is used as an example for description.
  • In some embodiments, in response to the triggering operation on the synthesis control, the terminal determines a target display position of the first target vehicle in the virtual scene. In response to the target display position meeting the target display condition, the terminal displays the first target vehicle at the target display position.
  • the first part describes the manner in which the terminal determines the target display position of the first target vehicle in the virtual scene in response to the triggering operation on the synthesis control.
  • the trigger operation includes a drag operation, a click operation, a press operation, and the like.
  • In some embodiments, in response to a drag operation on the synthesis control, the terminal determines the end position of the drag operation as the target display position of the first target vehicle in the virtual scene.
  • In this way, the user can determine the target display position of the virtual vehicle by dragging the synthesis control, and the degree of freedom in determining the target display position is relatively high.
  • In some embodiments, when the duration of a pressing operation on the synthesis control meets the target duration condition, the terminal sets the synthesis control to a draggable state. In response to a drag operation on the synthesis control, when the drag operation ends, the position of the synthesis control is determined as the target display position of the first target vehicle.
  • That the duration of the pressing operation meets the target duration condition means that the duration of the pressing operation is greater than or equal to a duration threshold; the duration threshold is set by the technician according to the actual situation, for example, 0.3 seconds or 0.5 seconds, which is not limited in this embodiment of the present application.
  • In some embodiments, in response to a click operation on the synthesis control, the terminal determines the position in the virtual scene that is the target distance in front of the controlled virtual object as the target display position.
  • the target distance is set by technical personnel according to actual conditions, which is not limited in this embodiment of the present application.
  • In this way, the terminal can automatically determine the target display position. Since the target display position does not need to be determined by the user through operations, the method for determining the target display position is simple and efficient, and the efficiency of human-computer interaction is higher.
  • In some embodiments, when the duration of the pressing operation on the synthesis control meets the target duration condition, the terminal displays the model of the first target vehicle in the virtual scene.
  • the terminal determines the position where the drag operation ends as the target display position.
  • the user can preview the target display position in real time when determining the target display position, thereby improving the efficiency of determining the target display position.
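  • The two ways of determining the target display position can be sketched as follows (a minimal illustration; the vector math, names, and the target distance value are assumptions):

```python
import math

TARGET_DISTANCE = 5.0  # assumed distance in front of the controlled virtual object

def position_from_drag(drag_end_scene_position):
    # Drag operation: the end position of the drag operation is the
    # target display position of the first target vehicle.
    return drag_end_scene_position

def position_from_click(object_position, facing_angle_rad):
    # Click operation: the point at the target distance directly in
    # front of the controlled virtual object (x/z plane).
    x, z = object_position
    return (x + TARGET_DISTANCE * math.cos(facing_angle_rad),
            z + TARGET_DISTANCE * math.sin(facing_angle_rad))
```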
  • the second part describes the manner in which the terminal displays the first target vehicle on the target display position when the target display position meets the target display condition.
  • That the target display position meets the target display condition means at least one of the following: the area of the target display position is greater than or equal to the area occupied by the first target vehicle, and no virtual building exists above the target display position.
  • the area of the target display position is greater than or equal to the occupied area of the first target vehicle to ensure that the target display position can accommodate the first target vehicle.
  • the fact that there is no virtual building above the target display position is to ensure that the virtual vehicle can be displayed normally in the virtual scene.
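  • A sketch of this target display condition check, assuming hypothetical scene queries for free area and overhead buildings:

```python
def meets_target_display_condition(scene, target_position, vehicle_footprint_area):
    # Condition 1: the free area at the target display position must be at
    # least the area occupied by the first target vehicle.
    area_ok = scene.free_area_at(target_position) >= vehicle_footprint_area
    # Condition 2: no virtual building may exist above the target display
    # position, so the vehicle can fall to it from the sky.
    clear_above = not scene.has_building_above(target_position)
    return area_ok and clear_above
```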
  • In some embodiments, when the target display position meets the target display condition, the terminal can control the first target vehicle to fall from the sky of the virtual scene to the target display position. For example, referring to FIG. 12, the terminal displays a virtual vehicle 1201 in the virtual scene.
  • In some embodiments, the terminal controls the first target vehicle to fall from the sky of the virtual scene to the target display position at a target movement speed, where the target movement speed is associated with the type of the virtual vehicle.
  • the terminal can determine the target moving speed according to the type of the virtual tank. For example, in order to simulate a real scene, technicians set the following settings through the terminal: target movement speed corresponding to light tanks > target movement speed corresponding to medium tanks > target movement speed corresponding to heavy tanks.
  • the target moving speed corresponding to each type of virtual tank is set by the technician according to the actual situation, which is not limited in the embodiment of the present application.
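  • A sketch of the type-dependent target movement speed (the numeric values are placeholders; the embodiment only requires light tank > medium tank > heavy tank, with the actual values set by the technician):

```python
# Placeholder speeds honoring light > medium > heavy from the embodiment.
TARGET_MOVEMENT_SPEED = {
    "light_tank": 30.0,
    "medium_tank": 20.0,
    "heavy_tank": 12.0,
}

def falling_speed(vehicle_type):
    # The first target vehicle falls from the sky at the speed associated
    # with its virtual vehicle type.
    return TARGET_MOVEMENT_SPEED[vehicle_type]
```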
  • In some embodiments, before controlling the first target vehicle to fall from the sky of the virtual scene to the target display position, the terminal can also display a virtual transport aircraft in the virtual scene. In response to the virtual transport aircraft flying above the target display position, the terminal controls the virtual transport aircraft to drop the first target vehicle into the virtual scene, and the first target vehicle falls from the air of the virtual scene to the target display position. In some embodiments, during the process of the first target vehicle falling from the air of the virtual scene to the target display position, the terminal can also display a virtual parachute connected to the first target vehicle above the first target vehicle, so as to make the falling process of the first target vehicle more realistic.
  • In some embodiments, before controlling the first target vehicle to fall from the sky of the virtual scene to the target display position, the terminal can also display virtual smoke at the target display position, where the virtual smoke is used to indicate that the first target vehicle is about to fall to the target display position. For example, referring to FIG. 13, the terminal displays virtual smoke 1301 at the target display position.
  • In this way, by displaying virtual smoke before controlling the first target vehicle to fall to the target display position, the terminal can remind the user that the first target vehicle is about to fall, so that the user can intuitively determine, according to the virtual smoke, the position where the first target vehicle will land.
  • In some embodiments, when displaying the virtual smoke, the terminal can also control the color of the virtual smoke.
  • the color of the virtual smoke is set to red or yellow, etc., which is not limited in this embodiment of the present application.
  • In some embodiments, the terminal can also adjust the color of the virtual smoke according to the falling progress of the first target vehicle. For example, when the first target vehicle has just been dropped, the terminal sets the color of the virtual smoke to green. When the first target vehicle has fallen halfway, the terminal adjusts the color of the virtual smoke to yellow. When the first target vehicle is about to fall to the target display position, the terminal adjusts the color of the virtual smoke to red.
  • In this way, the falling progress of the first target vehicle is indicated by adjusting the color of the virtual smoke, so that the user can intuitively know the falling progress of the first target vehicle by observing the change in the color of the virtual smoke.
  • the method is intuitive and efficient, which improves the efficiency of human-computer interaction.
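  • One way to realize the color progression described above is to key the smoke tint off the fraction of the fall already completed, as in the minimal sketch below. The progress thresholds and RGB values are illustrative assumptions.

```python
# Minimal sketch: picking the smoke colour from the vehicle's falling progress.
# Thresholds (0.5, 0.9) and colour values are illustrative assumptions.

def smoke_color(progress: float) -> tuple[int, int, int]:
    """progress is 0.0 when the drop starts and 1.0 on touchdown."""
    if progress < 0.5:
        return (0, 200, 0)      # green: the vehicle has just started falling
    if progress < 0.9:
        return (230, 200, 0)    # yellow: roughly halfway down
    return (220, 30, 30)        # red: about to reach the target position

assert smoke_color(0.1) == (0, 200, 0)
assert smoke_color(0.6) == (230, 200, 0)
assert smoke_color(0.95) == (220, 30, 30)
```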
  • While the terminal controls the first target vehicle to fall, either of the following can also be performed:
  • When the first target vehicle contacts any virtual vehicle during the fall, the terminal sets the state of that virtual vehicle to destroyed.
  • Setting the state of a virtual vehicle to destroyed means adjusting the health value of that virtual vehicle to 0.
  • The health value of a virtual vehicle is also referred to as the life value, blood volume, or wear degree of the virtual vehicle, which is not limited in the embodiments of this application. If the state of a virtual vehicle is set to destroyed, that virtual vehicle can no longer be used.
  • When the first target vehicle contacts any virtual object during the fall, the terminal sets the state of that virtual object to defeated.
  • Setting the state of a virtual object to defeated means adjusting the health value of that virtual object to 0.
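  • A compact way to express the two outcomes above is a single collision handler that zeroes the health of whatever the falling vehicle touches. This is a minimal sketch; the `Entity` dataclass and its fields are assumptions for illustration.

```python
# Minimal sketch: handling contact between the falling vehicle and scene entities.
from dataclasses import dataclass

@dataclass
class Entity:
    kind: str          # "vehicle" or "object"
    health: int
    state: str = "active"

def on_fall_contact(entity: Entity) -> None:
    """Zero the health of whatever the falling target vehicle touches."""
    entity.health = 0
    entity.state = "destroyed" if entity.kind == "vehicle" else "defeated"

tank = Entity(kind="vehicle", health=100)
soldier = Entity(kind="object", health=100)
on_fall_contact(tank)
on_fall_contact(soldier)
print(tank.state, soldier.state)   # destroyed defeated
```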
  • When the target display position does not meet the target display condition, the terminal displays first prompt information in the virtual scene, where the first prompt information is used to prompt that the target display position does not meet the target display condition.
  • For example, in response to the target display position not meeting the target display condition, the terminal displays a prompt graphic in a target color in the virtual scene, where the prompt graphic represents the outline of the first target vehicle.
  • The target color is set by technicians according to the actual situation, for example, to red or yellow, which is not limited in the embodiments of this application.
  • For example, referring to FIG. 14, the terminal displays a prompt graphic 1401 in the virtual scene, where 1402 is a virtual building at the target display position, and the prompt graphic 1401 represents the outline of the first target vehicle.
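  • As an illustration of the rejection feedback, the sketch below selects an outline color based on whether the candidate position is valid and hands it to whatever drawing primitive the client uses. The color constants and the `draw_fn` callback are assumptions.

```python
# Minimal sketch: showing the vehicle outline in a warning colour when the
# chosen position fails the display condition. Colour values are illustrative.

VALID_COLOR = (255, 255, 255)   # neutral outline
TARGET_COLOR = (255, 0, 0)      # warning outline, e.g. red

def outline_color(position_valid: bool) -> tuple[int, int, int]:
    return VALID_COLOR if position_valid else TARGET_COLOR

def draw_outline(draw_fn, footprint, position_valid: bool) -> None:
    """draw_fn is a stand-in for the client's polygon-drawing primitive."""
    draw_fn(footprint, color=outline_color(position_valid))

draw_outline(lambda poly, color: print(color),
             footprint=[(0, 0), (4, 0), (4, 8), (0, 8)],
             position_valid=False)   # prints the warning colour
```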
  • After the terminal displays the first target vehicle in the virtual scene, the user can control the controlled virtual object to drive the first target vehicle to move in the virtual scene or to fight other virtual objects.
  • In some embodiments, when the distance between the controlled virtual object and the first target vehicle is less than or equal to a third distance threshold, the terminal displays a vehicle ride control in the virtual scene.
  • In response to an operation on the vehicle ride control, the terminal controls the controlled virtual object to enter the first target vehicle, and the user can then control the first target vehicle to move.
  • While controlling the first target vehicle to move in the virtual scene, the user can also control the virtual weapon of the first target vehicle to fire, thereby causing damage to other virtual objects or virtual vehicles.
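  • The following minimal sketch illustrates the boarding flow under stated assumptions: the ride control is offered only within a distance threshold, and boarding links the controlled object to the vehicle. The threshold value and the dictionary entity shape are assumptions.

```python
# Minimal sketch: showing a ride control when the controlled object is close
# enough to the summoned vehicle, then boarding it. Threshold is assumed.
import math

THIRD_DISTANCE_THRESHOLD = 5.0   # illustrative value

def should_show_ride_control(player_pos, vehicle_pos) -> bool:
    return math.dist(player_pos, vehicle_pos) <= THIRD_DISTANCE_THRESHOLD

def board(player: dict, vehicle: dict) -> None:
    """Mark the controlled object as the vehicle's driver."""
    player["riding"] = vehicle["id"]
    vehicle["driver"] = player["id"]

player = {"id": "p1", "pos": (1.0, 2.0)}
vehicle = {"id": "tank1", "pos": (3.0, 4.0)}
if should_show_ride_control(player["pos"], vehicle["pos"]):
    board(player, vehicle)
print(player.get("riding"))   # tank1
```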
  • Referring to FIG. 15, when the multiple virtual parts owned by the controlled virtual object are synthesized into one virtual vehicle blueprint, the terminal displays the synthesis control in the virtual scene.
  • In response to a trigger operation on the synthesis control, the terminal determines the target display position of the first target vehicle.
  • The terminal controls the first target vehicle to fall from the sky of the virtual scene to the target display position.
  • By displaying the parts display area in the virtual scene, the virtual parts already owned by the controlled virtual object can be displayed intuitively. Further, when the multiple virtual parts in the parts display area meet the target condition, that is, when the virtual parts owned by the controlled virtual object meet the target condition, a synthesis control is displayed, so that by triggering the synthesis control, the multiple virtual parts can be synthesized into a virtual vehicle, and the terminal can then display the virtual vehicle in the virtual scene. Since the display of virtual parts is intuitive and efficient, it improves the efficiency with which users view virtual parts; and since the virtual vehicle can be synthesized simply by clicking the synthesis control, the operation of synthesizing a virtual vehicle is simple and efficient, that is, the efficiency of human-computer interaction is high.
  • the embodiment of the present application also provides another method for displaying a virtual vehicle.
  • the user does not need to control the controlled virtual object to collect virtual parts one by one, but directly controls the controlled virtual object to pick up virtual props, and can perform operations related to virtual vehicle display.
  • the method includes:
  • The second target prop is a virtual prop dropped in the virtual scene.
  • In some embodiments, the second target prop is displayed by the terminal in the virtual scene after the duration of the virtual battle in the virtual scene reaches or exceeds a battle duration threshold.
  • Alternatively, the second target prop is a virtual prop dropped by any defeated virtual object in the virtual scene, or a virtual prop discarded by any virtual object in the virtual scene, which is not limited in the embodiments of this application.
  • When the distance between the controlled virtual object and the second target prop is less than or equal to a fourth distance threshold, the terminal displays a prop pick-up control in the virtual scene. In response to an operation on the prop pick-up control, the terminal controls the controlled virtual object to pick up the second target prop. When the controlled virtual object has picked up the second target prop, the terminal displays the synthesis control in the virtual scene.
  • different types of second target props correspond to different types of virtual vehicles, and the user can control the terminal to display different types of virtual vehicles by controlling the controlled virtual object to pick up different types of second target props.
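  • A minimal sketch of this flow is given below: the prop-to-vehicle mapping and the decision of which controls to show are illustrated with hypothetical prop names, vehicle names, and a hypothetical distance threshold; none of these values come from the embodiment.

```python
# Minimal sketch: mapping picked-up props to vehicle types and deciding which
# controls to show. Threshold, prop names and vehicle names are assumptions.
import math

FOURTH_DISTANCE_THRESHOLD = 3.0

PROP_TO_VEHICLE = {
    "light_tank_token": "light_tank",
    "heavy_tank_token": "heavy_tank",
}

def visible_controls(player_pos, prop_pos, picked_up: bool) -> list[str]:
    controls = []
    if not picked_up and math.dist(player_pos, prop_pos) <= FOURTH_DISTANCE_THRESHOLD:
        controls.append("prop_pickup")
    if picked_up:
        controls.append("synthesis")
    return controls

print(visible_controls((0, 0), (1, 1), picked_up=False))   # ['prop_pickup']
print(visible_controls((0, 0), (1, 1), picked_up=True))    # ['synthesis']
print(PROP_TO_VEHICLE["heavy_tank_token"])                 # heavy_tank
```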
  • The method by which the terminal displays the synthesis control in the virtual scene belongs to the same inventive concept as step 503 above; for the implementation process, refer to the description of step 503, which is not repeated here.
  • In some embodiments, before the terminal displays the synthesis control in the virtual scene, the method further includes:
  • When the controlled virtual object owns any virtual part, the terminal discards that virtual part in the virtual scene. That is, after the controlled virtual object picks up the second target prop, the terminal controls the controlled virtual object to discard the virtual parts it owns.
  • In some embodiments, after the synthesis control is displayed in the virtual scene in response to the pick-up operation on the second target prop, the method further includes:
  • In response to a pick-up operation on any virtual part, the second target prop is discarded in the virtual scene. That is, after the controlled virtual object picks up the second target prop, if the controlled virtual object then picks up a virtual part, the terminal controls the controlled virtual object to discard the second target prop it owns.
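  • Taken together, the two rules above keep at most one of the two (owned parts or the second target prop) in the inventory at any time. The sketch below illustrates this mutual exclusion; the inventory structure and item names are assumptions.

```python
# Minimal sketch: mutual exclusion between owned parts and the second target prop.

def pick_up_prop(inventory: dict, prop: str, scene_drops: list) -> None:
    """Picking up the prop drops any owned parts back into the scene."""
    scene_drops.extend(inventory["parts"])
    inventory["parts"].clear()
    inventory["prop"] = prop

def pick_up_part(inventory: dict, part: str, scene_drops: list) -> None:
    """Picking up a part drops the prop, if one is held."""
    if inventory.get("prop"):
        scene_drops.append(inventory["prop"])
        inventory["prop"] = None
    inventory["parts"].append(part)

inv = {"parts": ["engine", "armor"], "prop": None}
drops: list = []
pick_up_prop(inv, "tank_token", drops)   # engine and armor are dropped
pick_up_part(inv, "barrel", drops)       # tank_token is dropped
print(inv)     # {'parts': ['barrel'], 'prop': None}
print(drops)   # ['engine', 'armor', 'tank_token']
```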
  • step 1602 and the above-mentioned step 504 belong to the same inventive concept, and the implementation process refers to the description of the above-mentioned step 504, which is not repeated here.
  • After the terminal displays the second target vehicle in the virtual scene, the user can also control the controlled virtual object to acquire another second target prop in the virtual scene and, by consuming that prop, control the terminal to display another second target vehicle. In some embodiments, the two second target vehicles are virtual vehicles of different types. That is, the user can summon two or more virtual vehicles in the virtual scene through the controlled virtual object. Referring to FIG. 17, a virtual vehicle 1701 and a virtual vehicle 1702 summoned by the controlled virtual object are displayed in the virtual scene.
  • By displaying the parts display area in the virtual scene, the virtual parts already owned by the controlled virtual object can be displayed intuitively. Further, when the multiple virtual parts in the parts display area meet the target condition, that is, when the virtual parts owned by the controlled virtual object meet the target condition, a synthesis control is displayed, so that by triggering the synthesis control, the multiple virtual parts can be synthesized into a virtual vehicle, and the terminal can then display the virtual vehicle in the virtual scene. Since the display of virtual parts is intuitive and efficient, it improves the efficiency with which users view virtual parts; and since the virtual vehicle can be synthesized simply by clicking the synthesis control, the operation of synthesizing a virtual vehicle is simple and efficient, that is, the efficiency of human-computer interaction is high.
  • FIG. 18 is a schematic structural diagram of a display device for a virtual vehicle provided by an embodiment of the application.
  • the display device includes: an area display module 1801 , a control display module 1802 , and a vehicle display module 1803 .
  • the area display module 1801 is used to display the parts display area in the virtual scene in response to the part display instruction, and the parts display area is used to display the virtual parts possessed by the controlled virtual object;
  • a control display module 1802, configured to display a synthesis control in the virtual scene when the multiple virtual parts displayed in the parts display area meet the target condition;
  • a vehicle display module 1803, configured to display a first target vehicle in the virtual scene in response to a trigger operation on the synthesis control, where the first target vehicle is a virtual vehicle synthesized from the plurality of virtual parts.
  • In some embodiments, the vehicle display module 1803 is configured to determine a target display position of the first target vehicle in the virtual scene in response to a trigger operation on the synthesis control, and to display the first target vehicle at the target display position when the target display position meets the target display condition.
  • the apparatus further includes:
  • a first prompt module, configured to display first prompt information in the virtual scene when the target display position does not meet the target display condition, where the first prompt information is used to prompt that the target display position does not meet the target display condition.
  • In some embodiments, the first prompt module is configured to display a prompt graphic in a target color in the virtual scene when the target display position does not meet the target display condition, where the prompt graphic is used to represent the outline of the first target vehicle.
  • In some embodiments, the vehicle display module 1803 is configured to, in response to a drag operation on the synthesis control, determine the end position of the drag operation as the target display position of the first target vehicle in the virtual scene.
  • In some embodiments, the vehicle display module 1803 is configured to control the first target vehicle to fall from the sky of the virtual scene to the target display position when the target display position meets the target display condition.
  • In some embodiments, the vehicle display module 1803 is configured to control the first target vehicle to fall from the sky of the virtual scene to the target display position at a target movement speed, where the target movement speed is associated with the type of the virtual vehicle.
  • the apparatus further includes:
  • a smoke display module, configured to display virtual smoke at the target display position, where the virtual smoke indicates that the first target vehicle is about to fall to the target display position.
  • In some embodiments, the apparatus further includes a contact detection module, configured to perform either of the following: when the first target vehicle contacts any virtual vehicle during the fall, setting the state of that virtual vehicle to destroyed; when the first target vehicle contacts any virtual object during the fall, setting the state of that virtual object to defeated.
  • In some embodiments, the target display condition refers to at least one of the following:
  • the area of the target display position is greater than or equal to the occupied area of the first target vehicle; and there is no virtual building above the target display position.
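  • As a minimal sketch of the two conditions just listed, the helper below accepts a candidate position only when its free area is at least the vehicle footprint and nothing overhangs it. The scene-query helpers are stand-ins for whatever the engine provides, not actual APIs of the embodiment.

```python
# Minimal sketch: checking the two display conditions for a candidate position.

def position_area(position) -> float:
    """Stand-in: free ground area available at the candidate position."""
    return position["free_area"]

def has_building_above(position) -> bool:
    """Stand-in: whether any virtual building overhangs the position."""
    return position["covered"]

def meets_display_condition(position, vehicle_footprint: float) -> bool:
    return (position_area(position) >= vehicle_footprint
            and not has_building_above(position))

spot = {"free_area": 40.0, "covered": False}
print(meets_display_condition(spot, vehicle_footprint=25.0))   # True
```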
  • In some embodiments, the control display module 1802 is configured to convert the plurality of virtual parts into one first target prop when the plurality of virtual parts displayed in the parts display area meet the target condition, and to display the synthesis control in the virtual scene when the first target prop is obtained through the conversion.
  • In some embodiments, a virtual vending machine is displayed in the virtual scene and is used to provide virtual parts, and the apparatus further includes:
  • a part determination module, configured to display a part selection interface in the virtual scene when the distance between the controlled virtual object and the virtual vending machine is less than or equal to a first distance threshold, where multiple virtual parts to be selected are displayed on the part selection interface; and, in response to a selection operation on any one of the multiple virtual parts to be selected, to determine the selected virtual part as a virtual part owned by the controlled virtual object.
  • In some embodiments, the area display module 1801 is configured to, in response to a selection operation on any one of the multiple virtual parts to be selected, replace the virtual part of the same type owned by the controlled virtual object with the selected virtual part.
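  • The replacement rule above amounts to keeping at most one part per part type. The sketch below models the inventory as a mapping from part type to part, so selecting a part overwrites the previously owned one; part types and names are hypothetical.

```python
# Minimal sketch: a per-type inventory in which selecting a part replaces any
# part of the same type already owned. Part types and names are assumptions.

def select_part(owned: dict, part_type: str, part_name: str):
    """Store the chosen part under its type; return the part it replaced, if any."""
    replaced = owned.get(part_type)
    owned[part_type] = part_name
    return replaced

owned_parts = {"engine": "engine_mk1"}
dropped = select_part(owned_parts, "engine", "engine_mk2")
print(owned_parts)   # {'engine': 'engine_mk2'}
print(dropped)       # engine_mk1  (could be dropped back into the scene)
```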
  • the apparatus further includes:
  • the second prompt module is configured to display second prompt information in response to a selection operation on other virtual parts in the plurality of virtual parts to be selected, where the second prompt information is used to prompt that the virtual part cannot be selected.
  • the apparatus further includes:
  • In some embodiments, the apparatus further includes a part determination module, configured to: when the health value of any virtual object meets a target health value condition, display the plurality of virtual parts owned by that virtual object at a target drop position, where the target drop position is the position of that virtual object in the virtual scene; and, when the distance between the controlled virtual object and the target drop position is less than or equal to a second distance threshold, determine a plurality of virtual parts of a first type as virtual parts owned by the controlled virtual object, where a virtual part of the first type is, among the plurality of virtual parts owned by that virtual object, a virtual part corresponding to a part type that the controlled virtual object does not yet own.
  • In some embodiments, the part determination module is further configured to: when the distance between the controlled virtual object and the target drop position is less than or equal to the second distance threshold, display a part pick-up interface in the virtual scene, on which a plurality of virtual parts of a second type are displayed, where a virtual part of the second type is, among the plurality of virtual parts owned by that virtual object, a virtual part corresponding to a part type that the controlled virtual object already owns; and, in response to a selection operation on the part pick-up interface, replace the virtual part of the same type owned by the controlled virtual object with the selected virtual part.
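  • The sketch below illustrates, under stated assumptions, how a defeated object's dropped parts could be split into first-type parts (types the controlled object lacks, added directly) and second-type parts (types already owned, offered on the pick-up interface). The part types and names are hypothetical.

```python
# Minimal sketch: splitting a defeated object's dropped parts into "first type"
# parts (picked up automatically) and "second type" parts (offered on a UI).

def split_dropped_parts(owned: dict, dropped: dict):
    first_type = {t: n for t, n in dropped.items() if t not in owned}
    second_type = {t: n for t, n in dropped.items() if t in owned}
    return first_type, second_type

owned = {"chassis": "chassis_a", "engine": "engine_a"}
dropped = {"engine": "engine_b", "armor": "armor_b", "barrel": "barrel_b"}
auto_picked, offered = split_dropped_parts(owned, dropped)
owned.update(auto_picked)        # first-type parts are added directly
print(auto_picked)               # {'armor': 'armor_b', 'barrel': 'barrel_b'}
print(offered)                   # {'engine': 'engine_b'} shown on the interface
```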
  • the apparatus further includes:
  • a prop pick-up module configured to display the synthesis control in the virtual scene in response to a pick-up operation on the second target prop
  • the vehicle display module 1803 is further configured to display a second target vehicle in the virtual scene in response to a trigger operation on the synthesis control, where the second target vehicle is the virtual vehicle corresponding to the second target prop.
  • the apparatus further includes:
  • a discarding module is configured to discard any virtual part in the virtual scene when the controlled virtual object possesses any virtual part.
  • the apparatus further includes:
  • the discarding module is used for discarding the second target prop in the virtual scene in response to the picking operation of any virtual part.
  • When the display apparatus for a virtual vehicle provided in the above embodiments displays a virtual vehicle, the division into the above functional modules is only used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above.
  • The display apparatus for a virtual vehicle provided in the above embodiments and the embodiments of the display method for a virtual vehicle belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
  • By displaying the parts display area in the virtual scene, the virtual parts already owned by the controlled virtual object can be displayed intuitively. Further, when the multiple virtual parts in the parts display area meet the target condition, that is, when the virtual parts owned by the controlled virtual object meet the target condition, a synthesis control is displayed, so that by triggering the synthesis control, the multiple virtual parts can be synthesized into a virtual vehicle, and the terminal can then display the virtual vehicle in the virtual scene. Since the display of virtual parts is intuitive and efficient, it improves the efficiency with which users view virtual parts; and since the virtual vehicle can be synthesized simply by clicking the synthesis control, the operation of synthesizing a virtual vehicle is simple and efficient, that is, the efficiency of human-computer interaction is high.
  • An embodiment of the present application provides a computer device for executing the above method.
  • the computer device can be implemented as a terminal or a server.
  • the structure of the terminal is first introduced below:
  • FIG. 19 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • the terminal 1900 includes: one or more processors 1901 and one or more memories 1902 .
  • the processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 1901 can be implemented in at least one of the following hardware forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
  • the processor 1901 may also include a main processor and a coprocessor.
  • The main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • The processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display.
  • In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor, which is used to handle computing operations related to machine learning.
  • Memory 1902 may include one or more computer-readable storage media, which may be non-transitory. Memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1902 is used to store at least one computer program, and the at least one computer program is to be executed by the processor 1901 to implement the display method for a virtual vehicle provided by the method embodiments in this application.
  • the terminal 1900 may optionally further include: a display screen 1905 and a power supply 1909 .
  • the display screen 1905 is used to display UI (User Interface, user interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to acquire touch signals on or above its surface.
  • the touch signal can be input to the processor 1901 as a control signal for processing.
  • the display screen 1905 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • the power supply 1909 is used to power various components in the terminal 1900.
  • the power source 1909 may be alternating current, direct current, primary batteries, or rechargeable batteries.
  • Those skilled in the art can understand that the structure shown in FIG. 19 does not constitute a limitation on the terminal 1900, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • the above computer equipment can also be implemented as a server, and the structure of the server is introduced below:
  • FIG. 20 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • The server 2000 may vary greatly due to different configurations or performance, and includes, for example, one or more processors (Central Processing Units, CPU) 2001 and one or more memories 2002, where at least one computer program is stored in the one or more memories 2002, and the at least one computer program is loaded and executed by the one or more processors 2001 to implement the methods provided by the foregoing method embodiments.
  • the server 2000 may also have components such as wired or wireless network interfaces, keyboards, and input/output interfaces for input and output, and the server 2000 may also include other components for implementing device functions, which will not be repeated here.
  • In an exemplary embodiment, a computer-readable storage medium, such as a memory including a computer program, is also provided.
  • The computer program can be executed by a processor to complete the display method for a virtual vehicle in the foregoing embodiments.
  • the computer-readable storage medium may be Read-Only Memory (ROM), Random Access Memory (RAM), Compact Disc Read-Only Memory (CD-ROM), Tape, floppy disk, and optical data storage devices, etc.
  • In an exemplary embodiment, a computer program product or computer program is also provided, including program code stored in a computer-readable storage medium. A processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the above display method for a virtual vehicle.
  • The computer programs involved in the embodiments of this application may be deployed and executed on one computer device, on multiple computer devices located at one site, or on multiple computer devices distributed across multiple sites and interconnected through a communication network; multiple computer devices distributed across multiple sites and interconnected through a communication network can form a blockchain system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A display method, apparatus, device, and storage medium for a virtual vehicle, relating to the field of computer technology. The method includes: in response to a part display instruction, displaying (401) a parts display area in a virtual scene; when multiple virtual parts displayed in the parts display area meet a target condition, displaying (402) a synthesis control in the virtual scene; and, in response to a trigger operation on the synthesis control, displaying (403) a first target vehicle in the virtual scene.

Description

虚拟载具的显示方法、装置、设备以及存储介质
本申请要求于2021年04月25日提交的申请号为202110450247.3、发明名称为“虚拟载具的显示方法、装置、设备以及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别涉及一种虚拟载具的显示方法、装置、设备以及存储介质。
背景技术
随着多媒体技术的发展以及终端功能的多样化,在终端上能够进行的游戏种类越来越多。射击类游戏是一种比较盛行的游戏,在游戏过程中,用户除了能够控制虚拟对象使用各种各样的虚拟枪械来攻击其他队伍的虚拟对象之外,还能够控制虚拟对象驾驶虚拟载具在虚拟场景中进行移动。
发明内容
本申请实施例提供了一种虚拟载具的显示方法、装置、设备以及存储介质,可以提升人机交互的效率。所述技术方案如下:
一方面,提供了一种虚拟载具的显示方法,所述方法包括:
响应于零件展示指令,在虚拟场景中显示零件展示区域,所述零件展示区域用于展示被控虚拟对象拥有的虚拟零件;
在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,在所述虚拟场景中显示合成控件;
响应于对所述合成控件的触发操作,在所述虚拟场景中显示第一目标载具,所述第一目标载具为由所述多个虚拟零件合成的虚拟载具。
一方面,提供了一种虚拟载具的显示装置,所述装置包括:
区域显示模块,用于响应于零件展示指令,在虚拟场景中显示零件展示区域,所述零件展示区域用于展示被控虚拟对象拥有的虚拟零件;
控件显示模块,用于在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,在所述虚拟场景中显示合成控件;
载具显示模块,用于响应于对所述合成控件的触发操作,在所述虚拟场景中显示第一目标载具,所述第一目标载具为由所述多个虚拟零件合成的虚拟载具。
一方面,提供了一种计算机设备,所述计算机设备包括一个或多个处理器和一个或多个存储器,所述一个或多个存储器中存储有至少一条计算机程序,所述计算机程序由所述一个或多个处理器加载并执行以实现所述虚拟载具的显示方法。
一方面,提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条计算机程序,所述计算机程序由处理器加载并执行以实现所述虚拟载具的显示方法。
一方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括程序代码,该程序代码存储在计算机可读存储介质中,计算机设备的处理器从计算机可读 存储介质读取该程序代码,处理器执行该程序代码,使得该计算机设备执行上述虚拟载具的显示方法。
本申请实施例提供的技术方案,通过在虚拟场景中显示零件展示区域,能够直观的展示被控虚拟对象已拥有的虚拟零件,进一步的,通过在零件展示区域中的多个虚拟零件符合目标条件,也即被控虚拟对象已拥有的虚拟零件符合目标条件的情况下,显示合成控件,使得通过触发该合成控件,能够将该多个虚拟零件合成为虚拟载具,从而终端能够在虚拟场景中显示该虚拟载具。由于虚拟零件的展示方式直观且高效,能够提高用户查看虚拟零件的效率,并且由于只需要点击合成控件即可实现虚拟载具的合成,使得合成虚拟载具的操作方式简单且高效,也即人机交互的效率较高。
附图说明
图1是本申请实施例提供的一种虚拟载具的显示方法的实施环境的示意图;
图2是本申请实施例提供的一种界面示意图;
图3是本申请实施例提供的一种界面示意图;
图4是本申请实施例提供的一种虚拟载具的显示方法的流程图;
图5是本申请实施例提供的一种虚拟载具的显示方法的流程图;
图6是本申请实施例提供的一种界面示意图;
图7是本申请实施例提供的一种界面示意图;
图8是本申请实施例提供的一种界面示意图;
图9是本申请实施例提供的一种界面示意图;
图10是本申请实施例提供的一种界面示意图;
图11是本申请实施例提供的一种虚拟载具的显示方法的流程图;
图12是本申请实施例提供的一种界面示意图;
图13是本申请实施例提供的一种界面示意图;
图14是本申请实施例提供的一种界面示意图;
图15是本申请实施例提供的一种虚拟载具的显示方法的流程图;
图16是本申请实施例提供的一种虚拟载具的显示方法的流程图;
图17是本申请实施例提供的一种界面示意图;
图18是本申请实施例提供的一种虚拟载具的显示装置的结构示意图;
图19是本申请实施例提供的一种终端的结构示意图;
图20是本申请实施例提供的一种服务器的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
本申请中术语“第一”“第二”等字样用于对作用和功能基本相同的相同项或相似项进行区分,应理解,“第一”、“第二”、“第n”之间不具有逻辑或时序上的依赖关系,也不对数量和执行顺序进行限定。
本申请中术语“至少一个”是指一个或多个,“多个”的含义是指两个或两个以上,例如,多个人脸图像是指两个或两个以上的人脸图像。
虚拟场景:是应用程序在终端上运行时显示(或提供)的虚拟场景。该虚拟场景是对真实世界的仿真环境,或者是半仿真半虚构的虚拟环境,或者是纯虚构的虚拟环境。虚拟场景是二维虚拟场景、2.5维虚拟场景或者三维虚拟场景中的任意一种,本申请实施例对虚拟场景的维度不加以限定。例如,虚拟场景包括天空、陆地、海洋等,该陆地包括沙漠、城市等环境元素,用户能够控制虚拟对象在该虚拟场景中进行移动。
虚拟对象:是指在虚拟场景中的可活动对象。该可活动对象是虚拟人物、虚拟动物、动漫人物等,比如:在虚拟场景中显示的人物、动物、植物、油桶、墙壁、石块等。该虚拟对象是该虚拟场景中的一个虚拟的用于代表用户的虚拟形象。虚拟场景中能够包括多个虚拟对象,每个虚拟对象在虚拟场景中具有自身的形状和体积,占据虚拟场景中的一部分空间。
在一些实施例中,该虚拟对象是通过客户端上的操作进行控制的用户角色,或者是通过训练设置在虚拟场景对战中的人工智能(Artificial Intelligence,AI),或者是设置在虚拟场景中的非用户角色(Non-Player Character,NPC)。在一些实施例中,该虚拟对象是在虚拟场景中进行竞技的虚拟人物。在一些实施例中,该虚拟场景中参与互动的虚拟对象的数量是预先设置的,或者是根据加入互动的客户端的数量动态确定的。
以射击类游戏为例,用户能够控制虚拟对象在该虚拟场景的天空中自由下落、滑翔或者打开降落伞进行下落等,在陆地上中跑动、跳动、爬行、弯腰前行等,也能够控制虚拟对象在海洋中游泳、漂浮或者下潜等,当然,用户也能够控制虚拟对象乘坐虚拟载具在该虚拟场景中进行移动,例如,该虚拟载具是虚拟汽车、虚拟飞行器、虚拟游艇等,在此以上述场景进行举例说明,本申请实施例对此不作具体限定。用户也能够控制虚拟对象通过互动道具与其他虚拟对象进行战斗等方式的互动,例如,该互动道具是手雷、集束雷、粘性手雷(简称“粘雷”)等投掷类互动道具,或者是机枪、手枪、步枪等射击类互动道具,本申请对互动道具的类型不作具体限定。
相关技术中,虚拟载具往往是由策划人员配置好,设置在虚拟场景中的不同位置,用户控制虚拟对象接近虚拟载具就能够控制虚拟对象驾驶虚拟载具。在这种情况下,用户无法自行决定想要使用的虚拟载具,只能控制虚拟对象驾驶在虚拟场景中遇到的虚拟载具,导致人机交互的效率较低。
图1是本申请实施例提供的一种虚拟载具显示方法的实施环境示意图,参见图1,该实施环境包括:第一终端120、第二终端140和服务器160。
第一终端120安装和运行有支持虚拟场景显示的应用程序。在一些实施例中,该应用程序是第一人称射击游戏(First-Person Shooting Game,FPS)、第三人称射击游戏、虚拟现实应用程序、三维地图程序或者多人枪战类生存游戏中的任意一种。第一终端120是第一用户使用的终端,第一用户使用第一终端120操作位于虚拟场景中的被控虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、拾取、射击、攻击、投掷中的至少一种。示意性的,被控虚拟对象是第一虚拟人物,比如仿真人物角色或动漫人物角色。
第一终端120以及第二终端140通过无线网络或有线网络与服务器160相连。
第二终端140安装和运行有支持虚拟场景显示的应用程序。在一些实施例中,该应用程序是FPS、第三人称射击游戏、虚拟现实应用程序、三维地图程序或者多人枪战类生存游戏中的任意一种。第二终端140是第二用户使用的终端,第二用户使用第二终端140操作位于虚拟场景中的另一个虚拟对象进行活动,该活动包括但不限于:调整身体姿态、爬行、步行、奔跑、骑行、跳跃、驾驶、拾取、射击、攻击、投掷中的至少一种。示意性的,第二终端140控制的虚拟对象是第二虚拟人物,比如仿真人物角色或动漫人物角色。
在一些实施例中,第一终端120控制的虚拟对象和第二终端140控制的虚拟对象处于同一虚拟场景中,此时第一终端120控制的虚拟对象能够在虚拟场景中与第二终端140控制的虚拟对象进行互动。在一些实施例中,第一终端120控制的虚拟对象与第二终端140控制的虚拟对象为敌对关系,例如,第一终端120控制的虚拟对象与第二终端140控制的虚拟对象属于不同的队伍和组织,敌对关系的虚拟对象之间,能够在陆地上以互相射击的方式进行对战方式的互动。
在一些实施例中,第一终端120和第二终端140上安装的应用程序是相同的,或两个终 端上安装的应用程序是不同操作系统平台的同一类型应用程序。其中,第一终端120泛指多个终端中的一个,第二终端140泛指多个终端中的一个,本实施例仅以第一终端120和第二终端140来举例说明。第一终端120和第二终端140的设备类型相同或不同,该设备类型包括:智能手机、平板电脑、膝上型便携计算机和台式计算机中的至少一种。例如,第一终端120和第二终端140是智能手机,或者其他手持便携式游戏设备但并不局限于此。本申请实施例提供的技术方案既能够应用在第一终端120上,也能够应用在第二终端140上,本申请实施例对此不做限定。为了更加清楚和简要,在下述说明过程中,采用终端来代指第一终端或者第二终端。
在一些实施例中,服务器160是独立的物理服务器,或者是多个物理服务器构成的服务器集群或者分布式系统,或者是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、分发网络(Content Delivery Network,CDN)、以及大数据和人工智能平台等基础云计算服务的云服务器,本申请实施例对服务器的数量和设备类型不加以限定。
为了更加清楚的对本申请实施例提供的技术方案进行说明,首先对本申请中计算机设备显示的虚拟场景进行介绍,参见图2,为了使得射击类游戏更加真实,游戏设计人员会参照人类观察现实世界的方式,来对计算机设备显示的虚拟场景的方式进行设计。被控虚拟对象201能够观察到区域202中的虚拟场景,以被控虚拟对象201的角度观察区域202得到的画面也即是计算机设备显示的虚拟场景。用户能够通过调整被控虚拟对象201的朝向,来调整被控虚拟对象201观察虚拟场景的位置。
以终端为智能手机为例,计算机设备显示的虚拟场景中还显示有用于控制被控虚拟对象执行不同动作的控件。参见图3,计算机设备显示的虚拟场景301上显示有虚拟摇杆302、姿态调整控件303、射击控件304以及道具切换控件305,其中,虚拟摇杆302用于控制被控虚拟对象的移动方向。姿态调整控件303用于调整被控虚拟对象的姿态,比如控制虚拟对象执行下蹲或者匍匐等动作。射击控件304用于控制被控虚拟对象持有的互动道具发射虚拟弹药。道具切换控件305用于切换目标道具,在本申请实施例中,用户能够通过射击控件304来控制被控虚拟对象投掷目标道具。306为小地图,用户能够通过小地图306观察队友和敌人在虚拟场景中的位置。
需要注意的是,在下述对本申请提供的技术方案进行说明的过程中,是以终端作为执行主体为例进行的,这里的终端是上述实施环境中的第一终端120或第二终端140。在其他可能的实施方式中,能够通过终端与服务器之间的交互来执行本申请提供的技术方案,本申请实施例对于执行主体的类型不做限定。
图4是本申请实施例提供的一种虚拟载具的显示方法的流程图,参见图4,方法包括:
401、响应于零件展示指令,终端在虚拟场景中显示零件展示区域,零件展示区域用于展示被控虚拟对象拥有的虚拟零件。
其中,虚拟零件也即是用户合成虚拟载具的零件,虚拟载具包括多个类型,比如虚拟坦克、虚拟汽车、虚拟摩托车以及虚拟游艇等。若虚拟载具为虚拟坦克,那么虚拟零件也即是用于合成虚拟坦克的零件。
402、在零件展示区域中展示的多个虚拟零件符合目标条件的情况下,终端在虚拟场景中显示合成控件。
其中,合成控件为屏幕上显示的一个按钮,用户能够通过对该按钮的触发操作来控制终端执行对应的步骤。
403、响应于对合成控件的触发操作,终端在虚拟场景中显示第一目标载具,第一目标载具为由多个虚拟零件合成的虚拟载具。
通过在虚拟场景中显示零件展示区域,能够直观的展示被控虚拟对象已拥有的虚拟零件, 进一步的,通过在零件展示区域中的多个虚拟零件符合目标条件,也即被控虚拟对象已拥有的虚拟零件符合目标条件的情况下,显示合成控件,使得通过触发该合成控件,能够将该多个虚拟零件合成为虚拟载具,从而终端能够在虚拟场景中显示该虚拟载具。由于虚拟零件的展示方式直观且高效,能够提高用户查看虚拟零件的效率,并且由于只需要点击合成控件即可实现虚拟载具的合成,使得合成虚拟载具的操作方式简单且高效,也即人机交互的效率较高。
图5是本申请实施例提供的一种虚拟载具的显示方法的流程图,参见图5,方法包括:
501、终端控制被控虚拟对象获取虚拟零件,被控虚拟对象为终端控制的虚拟对象。
其中,虚拟零件对应于虚拟载具的不同部分。比如,若虚拟载具为虚拟坦克,那么虚拟零件分别对应于虚拟坦克的底盘、发动机、装甲、炮管以及副武器。
在一些实施例中,虚拟场景中显示有虚拟售卖机,虚拟售卖机用于提供虚拟零件,在被控虚拟对象与虚拟售卖机之间的距离小于或等于第一距离阈值,终端在虚拟场景中显示零件选择界面,零件选择界面上显示有多个待选择的虚拟零件。响应于对多个待选择的虚拟零件中任一虚拟零件的选择操作,终端将该虚拟零件确定为被控虚拟对象拥有的虚拟零件。其中,第一距离阈值由技术人员根据实际情况进行设置,比如设置为30或50等,本申请实施例对此不做限定。
也即是,在游戏过程中,当用户发现虚拟场景中的虚拟售卖机时,能够控制被控虚拟对象朝着虚拟售卖机进行移动。当被控虚拟对象靠近虚拟售卖机时,终端能够在虚拟场景中显示零件选择界面,用户能够在零件选择界面中为被控虚拟对象选择虚拟零件。
为了对上述实施方式进行更加清楚的说明,下面将分为两个部分对上述实施方式进行说明。
第一部分、对在被控虚拟对象与虚拟售卖机之间的距离小于或等于第一距离阈值的情况下,终端显示零件选择界面的方式进行说明。
在一些实施例中,虚拟售卖机的周围设置有多个不可见的碰撞检测盒子,碰撞检测盒子不会对在虚拟场景中移动的虚拟对象造成阻挡,每个碰撞检测盒子与虚拟售卖机之间的最远距离均为第一距离阈值。在被控虚拟对象与任一碰撞检测盒子接触的情况下,终端确定被控虚拟对象与虚拟售卖机之间的距离小于或等于第一距离阈值,在虚拟场景中显示零件选择界面。在一些实施例中,终端确定被控虚拟对象与碰撞检测盒子接触的方式,是确定被控虚拟对象的模型与碰撞检测盒子之间是否有重合部分,在被控虚拟对象的模型与碰撞检测盒子之间有重合部分的情况下,终端确定被控虚拟对象与碰撞检测盒子接触。
在一些实施例中,终端能够将虚拟场景划分为多个不可见网格,被控虚拟对象在虚拟场景中进行持续移动时,能够跨越不同的网格。在被控虚拟对象进入虚拟售卖机所在的网格的情况下,终端能够实时确定被控虚拟对象与虚拟售卖机之间的距离。在被控虚拟对象与虚拟售卖机之间的距离小于或等于第一距离阈值的情况下,终端在虚拟场景中显示零件选择界面。在一些实施例中,终端能够根据被控虚拟对象在虚拟场景中的坐标,以及虚拟售卖机在虚拟场景中的坐标来确定被控虚拟对象与虚拟售卖机之间的距离。在这种方式下,终端无需实时确定被控虚拟对象与虚拟售卖机之间的距离,只需在被控虚拟对象进入特定网格时再开始检测,减少了终端运算资源的消耗。
第二部分、对响应于对多个待选择的虚拟零件中任一虚拟零件的选择操作,终端将该虚拟零件确定为被控虚拟对象拥有的虚拟零件的方式进行说明。
在一些实施例中,响应于对零件选择界面中显示的任一虚拟零件的选择操作,终端向服务器发送零件添加请求,该零件添加请求中携带有该被选中的虚拟零件的标识以及被控虚拟对象的标识。在接收到该零件添加请求的情况下,服务器能够从该零件添加请求中获取该虚拟零件的标识以及该被控虚拟对象的标识,建立该虚拟零件的标识与该被控虚拟对象的标识 之间的绑定关系。换而言之,服务器能够将该被选中的虚拟零件确定为被控虚拟对象拥有的虚拟零件,这个过程叫做为被控虚拟对象添加虚拟零件。
下面将结合附图对上述实施方式进行说明。
参见图6和图7,虚拟场景中显示有虚拟售卖机601,在被控虚拟对象602与虚拟售卖机601之间的距离小于或等于第一距离阈值的情况下,终端在虚拟场景中显示零件选择界面701,零件选择界面701上显示有多个待选择的虚拟零件。响应于对多个待选择的虚拟零件中任一虚拟零件702的选择操作,终端将被选中的虚拟零件702确定为被控虚拟对象602拥有的虚拟零件。
在上述实施方式的基础上,在一些实施例中,响应于对零件选择界面中显示的多个待选择的虚拟零件中任一虚拟零件的选择操作,采用被选中的虚拟零件替换被控虚拟对象拥有的相同零件类型的虚拟零件。
换而言之,对于多个零件类型的虚拟零件来说,被控虚拟对象只能拥有每个零件类型下的一个虚拟零件。若被控虚拟对象已经拥有某个零件类型的虚拟零件,那么当用户在零件选择界面中为被控虚拟对象选择同一零件类型的虚拟零件时,被控虚拟对象原本拥有的虚拟零件会被相同零件类型的虚拟零件替代。在一些实施例中,同一零件类型下的不同虚拟零件具有不同的属性,用户能够通过在零件选择界面中的选择操作,对被控虚拟对象拥有的一些虚拟零件进行替换,从而使得最终合成的虚拟载具拥有特定的属性。其中,属性用于表示合成的虚拟载具的性能数值,比如虚拟载具的速度,虚拟载具的转向难度等数值,这些数值也即是虚拟载具的属性。在一些实施例中,终端能够将被替换的虚拟零件显示在虚拟场景中,其他用户能够控制虚拟对象拾取该虚拟零件。
在上述实施方式的基础上,在一些实施例中,终端将任一虚拟零件确定为被控虚拟对象拥有的虚拟零件之后,响应于对多个待选择的虚拟零件中其他虚拟零件的选择操作,终端显示第二提示信息,第二提示信息用于提示无法选择虚拟零件。换而言之,对于一个虚拟售卖机来说,用户只能为被控虚拟对象选择一个虚拟零件。若一个虚拟售卖机能够提供全部的虚拟零件,那么先找到虚拟售卖机的用户就能够率先合成虚拟载具,合成虚拟载具的用户相较于其他用户来说,也就具有过大对抗优势,会导致游戏的不平衡。通过限制虚拟售卖机提供虚拟零件的数量,能够提高游戏的平衡性。用户需要在虚拟场景中寻找多个虚拟售卖机才能够集齐虚拟零件。
在一些实施例中,在任一虚拟对象的健康值符合目标健康值条件的情况下,终端在目标掉落位置上显示该虚拟对象拥有的多个虚拟零件,目标掉落位置为虚拟场景中,该虚拟对象所在的位置。在被控虚拟对象与目标掉落位置之间的距离小于或等于第二距离阈值的情况下,终端将多个第一类型的虚拟零件确定为被控虚拟对象拥有的虚拟零件。其中,第一类型的虚拟零件是指该虚拟对象拥有的多个虚拟零件中,被控虚拟对象尚未拥有的零件类型对应的虚拟零件。
其中,健康值符合目标健康值条件是指,健康值为0或者健康值小于或等于健康值阈值。在一些实施例中,若一个虚拟对象的健康值为0,那么该虚拟对象的状态被称为被击败或被击杀。
在这种实施方式下,当用户控制被控虚拟对象靠近虚拟场景中任一健康值符合目标健康条件的虚拟对象时,终端能够控制被控虚拟对象自动拾取该虚拟对象掉落的虚拟零件。在控制被控虚拟对象拾取该虚拟对象掉落的虚拟零件时,终端能够控制被控虚拟对象只拾取尚未拥有的零件类型对应的虚拟零件,保证被控虚拟对象只拥有每个零件类型下的一个虚拟零件。
为了对上述实施方式进行更加清楚的说明,下面将分为三个部分对上述实施方式进行说明。
第一部分、对终端在目标掉落位置上显示该虚拟对象拥有的多个虚拟零件的方式进行说明。
在一些实施例中,在虚拟场景中任一虚拟对象的健康值为0,也即是该虚拟对象的状态为被击败的情况下,终端在该虚拟对象被击败的位置上显示该虚拟对象拥有的多个虚拟零件。其中,由于该虚拟对象被击败之后无法在虚拟场景中继续移动,该虚拟对象被击败的位置也即是目标掉落位置。在一些实施例中,健康值也被称为生命值或者血量,本申请实施例对此不做限定。
在一些实施例中,在虚拟场景中任一虚拟对象的健康值大于零且小于或等于健康值阈值的情况下,终端能够执行下述任一项:
1、终端将该虚拟对象拥有的多个虚拟零件显示在该虚拟对象的健康值小于或等于健康值阈值时所在的位置,该位置也即是目标显示位置,虚拟零件不会随着该虚拟对象的移动而移动。
2、终端将该虚拟对象拥有的多个虚拟零件显示在该虚拟对象周围,该虚拟对象所在的位置也即是目标显示位置。换而言之,虚拟零件能够随着该虚拟对象的移动而移动。
第二部分、对终端确定被控虚拟对象与目标掉落位置之间的距离的方式进行说明。
在一些实施例中,终端在目标掉落位置的周围设置多个不可见的碰撞检测盒子,碰撞检测盒子不会对在虚拟场景中移动的虚拟对象造成阻挡,每个碰撞检测盒子与目标掉落位置之间的最远距离均为第二距离阈值。在被控虚拟对象与任一碰撞检测盒子接触的情况下,终端确定被控虚拟对象与目标掉落位置之间的距离小于或等于第二距离阈值。在一些实施例中,终端确定被控虚拟对象与碰撞检测盒子接触的方式,是确定被控虚拟对象的模型与碰撞检测盒子之间是否有重合部分。在被控虚拟对象的模型与碰撞检测盒子之间有重合部分的情况下,终端确定被控虚拟对象与碰撞检测盒子接触。
在一些实施例中,终端将虚拟场景划分为多个不可见网格,被控虚拟对象在虚拟场景中进行持续移动时,能够跨越不同的网格。在被控虚拟对象进入目标显示位置所在的网格的情况下,终端能够实时确定被控虚拟对象与目标显示位置之间的距离。在一些实施例中,终端能够根据被控虚拟对象在虚拟场景中的坐标,以及目标显示位置在虚拟场景中的坐标来确定被控虚拟对象与目标显示位置之间的距离。在这种方式下,终端无需实时确定被控虚拟对象与目标显示位置之间的距离,只需在被控虚拟对象进入特定网格时再开始检测,减少了终端运算资源的消耗。
第三部分、对终端将多个第一类型的虚拟零件确定为被控虚拟对象拥有的虚拟零件的方式进行说明。
在一些实施例中,在被控虚拟对象与目标掉落位置周围的任一碰撞检测盒子接触,终端从该虚拟对象拥有的多个虚拟零件中,确定出多个第一类型的虚拟零件。终端向服务器发送零件添加请求,零件添加请求中携带有多个第一类型的虚拟零件的标识以及被控虚拟对象的标识。在接收到该零件添加请求的情况下,服务器从该零件添加请求中获取多个第一类型的虚拟零件的标识以及该被控虚拟对象的标识,建立多个第一类型的虚拟零件的标识与该被控虚拟对象的标识的绑定关系。换而言之,将多个第一类型的虚拟零件确定为被控虚拟对象拥有的虚拟零件,这个过程叫做为被控虚拟对象添加虚拟零件。
在一些实施例中,在任一虚拟对象的健康值符合目标健康值条件的情况下,终端在目标掉落位置上显示该虚拟对象拥有的多个虚拟零件,目标掉落位置为虚拟场景中,该虚拟对象所在的位置。在被控虚拟对象与目标掉落位置之间的距离小于或等于第二距离阈值的情况下,终端在虚拟场景中显示零件拾取界面,零件拾取界面上显示有多个第二类型的虚拟零件,多个第二类型的虚拟零件为任一虚拟对象所拥有的多个虚拟零件中,被控虚拟对象已经拥有的零件类型对应的虚拟零件。响应于在零件拾取界面上的选择操作,终端采用被选中的虚拟零件替换被控虚拟对象拥有的相同零件类型的虚拟零件。
在这种实施方式下,当被控虚拟对象接近目标掉落位置时,使得用户能够通过终端显示的零件拾取界面,来替换被控虚拟对象拥有的某个零件类型的虚拟零件,替换方式简单便捷, 人机交互的效率较高。
502、响应于零件展示指令,终端在虚拟场景中显示零件展示区域,零件展示区域用于展示被控虚拟对象拥有的虚拟零件。
在一些实施例中,参见图8,响应于零件展示指令,终端在虚拟场景中显示零件展示区域801,零件展示区域中显示有被控虚拟对象拥有的虚拟零件。
在一些实施例中,零件展示指令由下述任一种方式触发:
方式1、响应于对虚拟场景中显示的虚拟零件展示控件的点击操作,终端触发该零件展示指令。在一些实施例中,零件展示控件也即是背包展示控件,用户点击背包展示控件之后,终端除了能够显示零件展示区域,还能够显示被控虚拟对象的虚拟背包界面,虚拟背包界面中显示有被控虚拟对象拥有的虚拟道具,比如包括虚拟枪械、虚拟弹药以及虚拟防弹衣等虚拟道具。在一些实施例中,零件展示区域为虚拟背包界面中的一个区域,本申请实施例对此不做限定。
方式2、在被控虚拟对象与虚拟售卖机之间的距离小于或等于第一距离阈值的情况下,终端触发该零件展示指令。换而言之,当被控虚拟对象接近虚拟场景中的虚拟售卖机时,终端能够直接触发该零件展示指令。
在一些实施例中,零件展示区域中显示有多个虚拟零件展示格子,每个虚拟零件展示格子用于展示一种零件类型的虚拟零件。被控虚拟对象获取虚拟零件之后,终端能够根据获取的虚拟零件的零件类型,将虚拟零件展示在对应的虚拟零件展示格子中。参见图8,零件展示区域801中显示有多个虚拟零件展示格子802,每个虚拟零件展示格子中显示有对应零件类型的虚拟零件。在一些实施例中,每个格子用于展示一个虚拟零件。终端除了能够通过零件展示区域来展示被控虚拟对象拥有的虚拟零件,还能够通过不同的格子来标注被控虚拟对象拥有虚拟零件的零件类型,使得用户能够更直观的确定被控虚拟对象已拥有的虚拟零件的零件类型,从而能够提高虚拟零件的展示效率,人机交互的效率较高。
503、在零件展示区域中展示的多个虚拟零件符合目标条件的情况下,终端在虚拟场景中显示合成控件。
在一些实施例中,响应于零件展示区域中展示的多个虚拟零件符合目标条件,终端将多个虚拟零件转化为一个第一目标道具。响应于转化得到该第一目标道具,终端在虚拟场景中显示合成控件。
为了对上述实施方式进行更加清楚的说明,下面将分为两个部分对上述实施方式进行说明。
第一部分、对在零件展示区域中展示的多个虚拟零件符合目标条件的情况下,终端将多个虚拟零件转化为一个第一目标道具的方式进行说明。
在一些实施例中,多个虚拟零件符合目标条件是指多个虚拟零件的数量符合目标数量条件,以及多个虚拟零件对应的零件类型符合目标零件类型条件中的至少一项。其中,数量符合目标数量条件是指,多个虚拟零件的数量大于或等于目标数量阈值;零件类型符合目标零件类型条件是指,多个虚拟零件的零件类型与预设的多个零件类型相匹配。比如,预设零件类型包括底盘、发动机、装甲、炮管以及副武器五种,多个虚拟零件分别对应于这五种零件类型,那么多个虚拟零件也就符合目标零件类型条件。
在一些实施例中,在零件展示区域中展示的多个虚拟零件符合目标条件的情况下,终端将多个虚拟零件转化为一个虚拟载具蓝图,虚拟载具蓝图也即是第一目标道具。在一些实施例中,终端能够将该虚拟载具蓝图显示在零件展示区域内,用户能够通过查看零件展示区域来确定被控虚拟对象具有的虚拟载具蓝图。在一些实施例中,终端将多个虚拟零件转化为一个虚拟载具蓝图时,还能够取消多个虚拟零件在零件展示区域中的显示,通过这样的方式来体现多个虚拟零件转化为一个虚拟载具蓝图的效果。
在一些实施例中,不同类型的第一目标道具对应于不同类型的虚拟载具,第一目标道具 的类型由转化前的多个虚拟零件决定。举例来说,虚拟载具的类型包括轻型坦克、中型坦克以及重型坦克三个大类,每个大类下包括多个小类,比如大类轻型坦克包括轻型坦克1型、轻型坦克2型以及轻型坦克3型等小类。若用户选择的虚拟零件均对应于轻型坦克,那么转化得到的第一目标道具也就对应于轻型坦克。
举例来说,参见图9,响应于零件展示区域中展示的多个虚拟零件符合目标条件,终端将多个虚拟零件转化为一个第一目标道具901。
第二部分、对在转化得到该第一目标道具的情况下,终端在虚拟场景中显示合成控件的方式进行说明。
在一些实施例中,参见图10,在转化得到该第一目标道具的情况下,终端在虚拟场景中显示合成控件1001。用户能够通过点击在合成控件1001控制终端在虚拟场景中显示虚拟载具。
下面将结合图11,对上述步骤501-503进行进一步说明。
参见图11,虚拟场景中显示有虚拟售卖机,若虚拟载具为虚拟坦克,那么虚拟售卖机也被称为坦克零件购物机。用户控制被控虚拟对象接近坦克零件购物机之后,能够通过显示的零件选择界面来选择想要的虚拟零件。终端将用户选择的虚拟零件放入被控虚拟对象的虚拟背包中,被控虚拟对象也就获取了该虚拟零件。在被控虚拟对象拥有的虚拟零件的数量和类型中的至少一项符合目标条件的情况下,终端将被控虚拟对象拥有的多个虚拟零件转化为一个虚拟载具蓝图,若虚拟载具为虚拟坦克,那么虚拟载具蓝图也被称为虚拟坦克蓝图。
504、响应于对合成控件的触发操作,终端在虚拟场景中显示第一目标载具,第一目标载具为由多个虚拟零件合成的虚拟载具。
其中,虚拟载具包括多种类型,比如虚拟载具包括虚拟摩托车、虚拟汽车、虚拟游艇以及虚拟坦克等,在下述说明过程中,以虚拟载具为虚拟坦克为例进行说明。
在一些实施例中,响应于对合成控件的触发操作,终端确定第一目标载具在虚拟场景中的目标显示位置。响应于目标显示位置符合目标显示条件,终端在目标显示位置上显示第一目标载具。
为了对上述实施方式进行更加清楚的说明,下面将分为两个部分对上述实施方式进行说明。
第一部分、对响应于对合成控件的触发操作,终端确定第一目标载具在虚拟场景中的目标显示位置的方式进行说明。其中,该触发操作包括拖动操作、点击操作、按压操作等。
在一些实施例中,响应于对合成控件的拖动操作,终端将拖动操作的结束位置确定为第一目标载具在虚拟场景中的目标显示位置。
在这种实施方式下,用户能够通过对合成控件的拖动操作,来决定虚拟载具的目标显示位置,确定目标显示位置的自由度较高。
在一些实施例中,在对合成控件的按压操作的时长符合目标时长条件的情况下,终端将合成控件的状态设置为可拖动状态。响应于对该合成控件的拖动操作,将拖动操作结束时,该合成控件的位置确定为第一目标载具的目标显示位置。其中,按压操作的时长符合目标时长条件是指,按压操作的时长大于或等于时长阈值,时长阈值由技术人员根据实际情况进行设置,比如设置为0.3秒或0.5秒等,本申请实施例对此不做限定。
在一些实施例中,响应于对合成控件的点击操作,终端将虚拟场景中,被控虚拟对象前方目标距离的位置确定为目标显示位置。其中,目标距离由技术人员根据实际情况进行设置,本申请实施例对此不做限定。
在这种实施方式下,当用户点击合成控件之后,终端能够自动确定目标显示位置,由于该目标显示位置不需要用户通过操作来确定,使得该目标显示位置的确定方式简单且高效,人机交互的效率较高。
在一些实施例中,在对合成控件的按压操作的时长符合目标时长条件的情况下,终端在 虚拟场景中显示第一目标载具的模型。响应于对第一目标载具的模型的拖动操作,终端将拖动操作结束的位置确定为目标显示位置。
在这种实施方式下,用户能够在确定目标显示位置时实时对目标显示位置进行预览,从而可以提高确定目标显示位置的效率。
第二部分、对在目标显示位置符合目标显示条件的情况下,终端在目标显示位置上显示第一目标载具的方式进行说明。
在一些实施例中,目标显示位置符合目标显示条件是指目标显示位置的面积大于或等于第一目标载具的占用面积,目标显示位置的上方不存在任一虚拟建筑物中的至少一项。其中,目标显示位置的面积大于或等于第一目标载具的占用面积是为了保证目标显示位置能够容乃第一目标载具。目标显示位置的上方不存在任一虚拟建筑物是为了保证虚拟载具能够正常显示在虚拟场景中。
在一些实施例中,在目标显示位置符合目标显示条件的情况下,终端能够控制第一目标载具从虚拟场景的天空下落至目标显示位置。比如,参见图12,终端能够虚拟场景中显示虚拟载具1201。
在一些实施例中,在目标显示位置符合目标显示条件的情况下,终端控制第一目标载具按照目标移动速度,从虚拟场景的天空下落至目标显示位置,目标移动速度与虚拟载具的类型相关联。
例如,虚拟载具为虚拟坦克,虚拟坦克包括轻型坦克、中型坦克和重型坦克,那么终端能够根据虚拟坦克的类型来确定目标移动速度。比如,为了模拟真实的场景,技术人员通过终端进行如下设置:轻型坦克对应的目标移动速度>中型坦克对应的目标移动速度>重型坦克对应的目标移动速度。其中,每种类型的虚拟坦克对应的目标移动速度由技术人员根据实际情况进行设置,本申请实施例对此不做限定。
在一些实施例中,终端控制第一目标载具从虚拟场景的天空下落至目标显示位置之前,终端还能够在虚拟场景中显示虚拟运输机。响应于虚拟运输机飞行至目标显示位置的上方,控制虚拟运输机向虚拟场景中投放第一目标载具,第一目标载具从虚拟场景的空中下落至目标显示位置。在一些实施例中,在第一目标载具从虚拟场景的空中下落至目标显示位置的过程中,终端还能够在第一目标载具的上方显示与第一目标载具相连的虚拟降落伞,以使得第一目标载具的下落过程更加真实。
在一些实施例中,终端控制第一目标载具从虚拟场景的天空下落至目标显示位置之前,还能够在目标显示位置上显示虚拟烟雾,虚拟烟雾用于提醒第一目标载具将要下落至目标显示位置。比如,参见图13,终端在目标显示位置上显示虚拟烟雾1301。
在这种实施方式下,终端能够在控制第一目标载具下落至目标显示位置之前,通过显示虚拟烟雾的方式来提醒用户第一目标载具即将落下,使得用户能够根据虚拟烟雾来直观的确定第一虚拟载具即将到达的目标显示位置,从而能够控制虚拟对象远离该目标显示位置,以避免第一目标载具对虚拟对象造伤害,提醒的方式直观且高效,人机交互效率较高。
在一些实施例中,终端在显示虚拟烟雾时,还能够控制虚拟烟雾的颜色。比如将虚拟烟雾的颜色设置为红色或黄色等,本申请实施例对此不做限定。在一些实施例中,终端还能够根据第一目标载具的下落进度,调整虚拟烟雾的颜色。比如,当第一目标载具刚刚下落时,终端将虚拟烟雾的颜色设置为绿色。当第一目标载具下落至一半时,终端将虚拟烟雾的颜色调整为黄色。当第一目标载具即将下落至目标显示位置时,终端将虚拟烟雾的颜色调整为红色。在这种实施方式下,通过调整虚拟烟雾的颜色来提示第一目标载具的下落速度,使得用户能够通过观察虚拟烟雾颜色的变化来直观的得知第一目标载具的下落进度,提醒的方式直观且高效,提高了人机交互的效率。
在一些实施例中,在终端控制第一目标载具下落的过程中,还能够执行下述任一项:
在第一目标载具在下落过程中与任一虚拟载具接触的情况下,终端将该虚拟载具的状态 设置为被摧毁。其中,将虚拟载具的状态设置为被摧毁是指,将虚拟载具的健康值调整为0。在一些实施例中,虚拟载具的健康值也被称为虚拟载具的生命值、血量以及磨损度等,本申请实施例对此不做限定。若一个虚拟载具的状态被设置为被摧毁,那么该虚拟载具也就无法继续使用。
在第一目标载具在下落过程中与任一虚拟对象接触的情况下,终端将该虚拟对象的状态设置为被击败。其中,将该虚拟对象的状态设置为被击败是指,将该虚拟对象的健康值调整为0。
在上述实施方式的基础上,在一些实施例中,在目标显示位置不符合目标显示条件的情况下,终端在虚拟场景中显示第一提示信息,第一提示信息用于提示目标显示位置不符合目标显示条件。
举例来说,响应于目标显示位置不符合目标显示条件,终端在虚拟场景中,以目标颜色显示提示图形,提示图形用于表示第一目标载具的轮廓。在一些实施例中,目标颜色由技术人员根据实际情况进行设置,比如设置为红色或者黄色等,本申请实施例对此不做限定。比如,参见图14,终端在虚拟场景中显示提示图形1401,1402为目标显示位置上的虚拟建筑物,该提示图形1401也就能够表示第一目标载具的轮廓。
在一些实施例中,终端在虚拟场景中显示第一目标载具之后,用户能够控制被控虚拟对象驾驶该第一目标载具在虚拟场景中进行移动或者与其他虚拟对象进行对战等。在一些实施例中,在被控虚拟对象与第一目标载具之间的距离小于或等于第三距离阈值的情况下,终端在虚拟场景上显示载具乘坐控件。响应于对载具乘坐控件的操作,终端控制被控虚拟对象进入第一目标载具,用户也就能够控制第一目标载具进行移动。在用户控制第一目标载具在虚拟场景中进行移动时,用户也能够控制第一目标载具的虚拟武器进行开火,从而对其他虚拟对象或虚拟载具造成伤害。
上述所有可选技术方案,能够采用任意结合形成本申请的可选实施例,在此不再一一赘述。
下面将结合图15,对上述步骤503和504进行说明。
参见图15,在将被控虚拟对象拥有的多个虚拟零件合成为一个虚拟载具蓝图的情况下,终端在虚拟场景中显示合成控件。响应于对合成控件的触发操作,终端确定第一目标载具的目标显示位置。终端控制第一目标载具从虚拟场景的天空下落至目标显示位置。
通过在虚拟场景中显示零件展示区域,能够直观的展示被控虚拟对象已拥有的虚拟零件,进一步的,通过在零件展示区域中的多个虚拟零件符合目标条件,也即被控虚拟对象已拥有的虚拟零件符合目标条件的情况下,显示合成控件,使得通过触发该合成控件,能够将该多个虚拟零件合成为虚拟载具,从而终端能够在虚拟场景中显示该虚拟载具。由于虚拟零件的展示方式直观且高效,能够提高用户查看虚拟零件的效率,并且由于只需要点击合成控件即可实现虚拟载具的合成,使得合成虚拟载具的操作方式简单且高效,也即人机交互的效率较高。
除了上述步骤501-504之外,本申请实施例还提供了另一种虚拟载具的显示方法。与上述步骤501-504不同的是,在下述步骤中,用户无需控制被控虚拟对象逐个收集虚拟零件,直接控制被控虚拟对象拾取虚拟道具,就能够进行虚拟载具显示的相关操作。参见图16,方法包括:
1601、响应于对第二目标道具的拾取操作,在虚拟场景中显示合成控件。
其中,第二目标道具为掉落在虚拟场景中的虚拟道具,在一些实施例中,第二目标道具为在虚拟场景中进行的虚拟对战的时长大于或等于对战时长阈值之后,终端显示在虚拟场景中的,或者,第二目标道具为虚拟场景中任一被击败的虚拟对象掉落的虚拟道具,或者,第二目标道具为虚拟场景中,任一虚拟对象丢弃的虚拟道具等,本申请实施例对此不做限定。
在一些实施例中,在被控虚拟对象与第二目标道具之间的距离小于或等于第四距离阈值的情况下,终端在虚拟场景上显示道具拾取控件。响应于对道具拾取控件的操作,终端控制被控虚拟对象拾取该第二目标道具。在被控虚拟对象拾取该第二目标道具的情况下,终端在虚拟场景中显示合成控件。在一些实施例中,不同类型的第二目标道具对应于不同类型的虚拟载具,用户能够通过控制被控虚拟对象拾取不同类型的第二目标道具,来控制终端显示不同类型的虚拟载具。
需要说明的是,终端在虚拟场景中显示合成控件的方法与上述步骤503属于同一发明构思,实现过程参见上述步骤503的描述,在此不再赘述。
在一些实施例中,终端在虚拟场景中显示合成控件之前,方法还包括:
在被控虚拟对象拥有任一虚拟零件的情况下,终端将该虚拟零件丢弃在虚拟场景中。也即是,当被控虚拟对象拾取第二目标道具之后,终端控制被控虚拟对象丢弃拥有的虚拟零件。
在一些实施例中,响应于对第二目标道具的拾取操作,在虚拟场景中显示合成控件之后,方法还包括:
响应于对任一虚拟零件的拾取操作,将第二目标道具丢弃在虚拟场景中。也即是,当被控虚拟对象拾取第二目标道具之后,若被控虚拟对象再次拾取虚拟零件,终端控制被控虚拟对象丢弃拥有的第二目标道具。
1602、响应于对合成控件的触发操作,在虚拟场景中显示第二目标载具,第二目标载具为第二目标道具对应的虚拟载具。
需要说明的是,该步骤1602与上述步骤504属于同一发明构思,实现过程参见上述步骤504的描述,在此不再赘述。
在一些实施例中,终端在虚拟场景中显示第二目标载具之后,用户还能够控制被控虚拟对象在虚拟场景中获取另一个第二目标道具,并通过消耗另一个第二目标道具来控制终端显示另一个第二目标载具,在一些实施例中,两个第二目标载具为不同类型的虚拟载具。也即是,用户能够通过被控虚拟对象在虚拟场景中召唤两个或更多的虚拟载具。参见图17,虚拟场景中显示有被控虚拟对象召唤的虚拟载具1701和虚拟载具1702。
上述所有可选技术方案,可以采用任意结合形成本申请的可选实施例,在此不再一一赘述。
通过在虚拟场景中显示零件展示区域,能够直观的展示被控虚拟对象已拥有的虚拟零件,进一步的,通过在零件展示区域中的多个虚拟零件符合目标条件,也即被控虚拟对象已拥有的虚拟零件符合目标条件的情况下,显示合成控件,使得通过触发该合成控件,能够将该多个虚拟零件合成为虚拟载具,从而终端能够在虚拟场景中显示该虚拟载具。由于虚拟零件的展示方式直观且高效,能够提高用户查看虚拟零件的效率,并且由于只需要点击合成控件即可实现虚拟载具的合成,使得合成虚拟载具的操作方式简单且高效,也即人机交互的效率较高。
图18申请实施例提供的一种虚拟载具的显示装置的结构示意图,参见图18置包括:区域显示模块1801、控件显示模块1802以及载具显示模块1803。
区域显示模块1801,用于响应于零件展示指令,在虚拟场景中显示零件展示区域,所述零件展示区域用于展示被控虚拟对象拥有的虚拟零件;
控件显示模块1802,用于在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,在所述虚拟场景中显示合成控件;
载具显示模块1803,用于响应于对所述合成控件的触发操作,在所述虚拟场景中显示第一目标载具,所述第一目标载具为由所述多个虚拟零件合成的虚拟载具。
在一些实施例中,所述载具显示模块1803,用于响应于对所述合成控件的触发操作,确定所述第一目标载具在所述虚拟场景中的目标显示位置;在所述目标显示位置符合目标显示 条件的情况下,在所述目标显示位置上显示所述第一目标载具。
在一些实施例中,所述装置还包括:
第一提示模块,用于在所述目标显示位置不符合所述目标显示条件的情况下,在所述虚拟场景中显示第一提示信息,所述第一提示信息用于提示所述目标显示位置不符合所述目标显示条件。
在一些实施例中,所述第一提示模块,用于在所述目标显示位置不符合所述目标显示条件的情况下,在所述虚拟场景中,以目标颜色显示提示图形,所述提示图形用于表示所述第一目标载具的轮廓。
在一些实施例中,所述载具显示模块1803,用于响应于对所述合成控件的拖动操作,将所述拖动操作的结束位置确定为所述第一目标载具在所述虚拟场景中的目标显示位置。
在一些实施例中,所述载具显示模块1803,用于在所述目标显示位置符合所述目标显示条件的情况下,控制所述第一目标载具从所述虚拟场景的天空下落至所述目标显示位置。
在一些实施例中,所述载具显示模块1803,用于控制所述第一目标载具按照目标移动速度,从所述虚拟场景的天空下落至所述目标显示位置,所述目标移动速度与所述虚拟载具的类型相关联。
在一些实施例中,所述装置还包括:
烟雾显示模块,用于在所述目标显示位置上显示虚拟烟雾,所述虚拟烟雾用于提醒所述第一目标载具将要下落至所述目标显示位置。
在一些实施例中,所述装置还包括接触检测模块,用于执行下述任一项:
在所述第一目标载具在下落过程中与任一虚拟载具接触的情况下,将所述任一虚拟载具的状态设置为被摧毁;
在所述第一目标载具在下落过程中与任一虚拟对象接触的情况下,将所述任一虚拟对象的状态设置为被击败。
在一些实施例中,所述目标显示条件是指下述至少一项:
所述目标显示位置的面积大于或等于所述第一目标载具的占用面积;
所述目标显示位置的上方不存在任一虚拟建筑物。
在一些实施例中,所述控件显示模块1802,用于在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,将所述多个虚拟零件转化为一个第一目标道具;
在转化得到所述第一目标道具的情况下,在所述虚拟场景中显示所述合成控件。
在一些实施例中,所述虚拟场景中显示有虚拟售卖机,所述虚拟售卖机用于提供虚拟零件,所述装置还包括:
零件确定模块,用于在所述被控虚拟对象与所述虚拟售卖机之间的距离小于或等于第一距离阈值的情况下,在所述虚拟场景中显示零件选择界面,所述零件选择界面上显示有多个待选择的虚拟零件;响应于对所述多个待选择的虚拟零件中任一虚拟零件的选择操作,将被选中的虚拟零件确定为所述被控虚拟对象拥有的虚拟零件。
在一些实施例中,所述区域显示模块1801,用于响应于对所述多个待选择的虚拟零件中任一虚拟零件的选择操作,采用被选中的虚拟零件替换所述被控虚拟对象拥有的相同类型的虚拟零件。
在一些实施例中,所述装置还包括:
第二提示模块,用于响应于对所述多个待选择的虚拟零件中其他虚拟零件的选择操作,显示第二提示信息,所述第二提示信息用于提示无法选择虚拟零件。
在一些实施例中,所述装置还包括:
零件确定模块,用于在任一虚拟对象的健康值符合目标健康值条件的情况下,在目标掉落位置上显示所述任一虚拟对象拥有的多个虚拟零件,所述目标掉落位置为所述虚拟场景中,所述任一虚拟对象所在的位置;在所述被控虚拟对象与所述目标掉落位置之间的距离小于或 等于第二距离阈值的情况下,将多个第一类型的虚拟零件确定为所述被控虚拟对象拥有的虚拟零件,所述第一类型的虚拟零件是指所述任一虚拟对象拥有的多个虚拟零件中,所述被控虚拟对象尚未拥有的零件类型对应的虚拟零件。
在一种可能的实施方式中,所述零件确定模块,还用于在所述被控虚拟对象与所述目标掉落位置之间的距离小于或等于所述第二距离阈值的情况下,在所述虚拟场景中显示零件拾取界面,所述零件拾取界面上显示有多个第二类型的虚拟零件,所述多个第二类型的虚拟零件为所述任一虚拟对象所拥有的多个虚拟零件中,所述被控虚拟对象已经拥有的零件类型对应的虚拟零件;响应于在所述零件拾取界面上的选择操作,采用被选中的虚拟零件替换所述被控虚拟对象拥有的相同类型的虚拟零件。
在一些实施例中,所述装置还包括:
道具拾取模块,用于响应于对第二目标道具的拾取操作,在所述虚拟场景中显示所述合成控件;
所述载具显示模块1803,还用于响应于对所述合成控件的触发操作,在虚拟场景中显示第二目标载具,所述第二目标载具为所述第二目标道具对应的虚拟载具。
在一些实施例中,所述装置还包括:
丢弃模块,用于在所述被控虚拟对象拥有任一虚拟零件的情况下,将所述任一虚拟零件丢弃在所述虚拟场景中。
在一些实施例中,所述装置还包括:
丢弃模块,用于响应于对任一虚拟零件的拾取操作,将所述第二目标道具丢弃在所述虚拟场景中。
需要说明的是:上述实施例提供的虚拟载具的显示的装置在显示虚拟载具时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将计算机设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的虚拟载具的显示装置与虚拟载具的显示方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
通过在虚拟场景中显示零件展示区域,能够直观的展示被控虚拟对象已拥有的虚拟零件,进一步的,通过在零件展示区域中的多个虚拟零件符合目标条件,也即被控虚拟对象已拥有的虚拟零件符合目标条件的情况下,显示合成控件,使得通过触发该合成控件,能够将该多个虚拟零件合成为虚拟载具,从而终端能够在虚拟场景中显示该虚拟载具。由于虚拟零件的展示方式直观且高效,能够提高用户查看虚拟零件的效率,并且由于只需要点击合成控件即可实现虚拟载具的合成,使得合成虚拟载具的操作方式简单且高效,也即人机交互的效率较高。
本申请实施例提供了一种计算机设备,用于执行上述方法,该计算机设备可以实现为终端或者服务器,下面先对终端的结构进行介绍:
图19是本申请实施例提供的一种终端的结构示意图。
通常,终端1900包括有:一个或多个处理器1901和一个或多个存储器1902。
处理器1901可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1901可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1901也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1901可以在集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1901还可以包括AI(Artificial  Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1902可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1902还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1902中的非暂态的计算机可读存储介质用于存储至少一个计算机程序,该至少一个计算机程序用于被处理器1901所执行以实现本申请中方法实施例提供的虚拟载具的显示方法。
在一些实施例中,终端1900还可选包括有:显示屏1905和电源1909。
显示屏1905用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏1905是触摸显示屏时,显示屏1905还具有采集在显示屏1905的表面或表面上方的触摸信号的能力。该触摸信号可以作为控制信号输入至处理器1901进行处理。此时,显示屏1905还可以用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。
电源1909用于为终端1900中的各个组件进行供电。电源1909可以是交流电、直流电、一次性电池或可充电电池。
本领域技术人员可以理解,图19中示出的结构并不构成对终端1900的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
上述计算机设备还可以实现为服务器,下面对服务器的结构进行介绍:
图20是本申请实施例提供的一种服务器的结构示意图,该服务器2000可因配置或性能不同而产生比较大的差异,如包括一个或多个处理器(Central Processing Units,CPU)2001和一个或多个的存储器2002,其中,所述一个或多个存储器2002中存储有至少一条计算机程序,所述至少一条计算机程序由所述一个或多个处理器2001加载并执行以实现上述各个方法实施例提供的方法。当然,该服务器2000还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该服务器2000还可以包括其他用于实现设备功能的部件,在此不做赘述。
在示例性实施例中,还提供了一种计算机可读存储介质,例如包括计算机程序的存储器,上述计算机程序可由处理器执行以完成上述实施例中的虚拟载具的显示方法。例如,该计算机可读存储介质可以是只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、磁带、软盘和光数据存储设备等。
在示例性实施例中,还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括程序代码,该程序代码存储在计算机可读存储介质中,计算机设备的处理器从计算机可读存储介质读取该程序代码,处理器执行该程序代码,使得该计算机设备执行上述虚拟载具的显示方法。
在一些实施例中,本申请实施例所涉及的计算机程序可被部署在一个计算机设备上执行,或者在位于一个地点的多个计算机设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算机设备上执行,分布在多个地点且通过通信网络互连的多个计算机设备可以组成区块链系统。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,该程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
上述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (22)

  1. 一种虚拟载具的显示方法,由计算机设备执行,所述方法包括:
    响应于零件展示指令,在虚拟场景中显示零件展示区域,所述零件展示区域用于展示被控虚拟对象拥有的虚拟零件;
    在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,在所述虚拟场景中显示合成控件;
    响应于对所述合成控件的触发操作,在所述虚拟场景中显示第一目标载具,所述第一目标载具为由所述多个虚拟零件合成的虚拟载具。
  2. 根据权利要求1所述的方法,所述响应于对所述合成控件的触发操作,在所述虚拟场景中显示第一目标载具,包括:
    响应于对所述合成控件的触发操作,确定所述第一目标载具在所述虚拟场景中的目标显示位置;
    在所述目标显示位置符合目标显示条件的情况下,在所述目标显示位置上显示所述第一目标载具。
  3. 根据权利要求2所述的方法,所述方法还包括:
    在所述目标显示位置不符合所述目标显示条件的情况下,在所述虚拟场景中显示第一提示信息,所述第一提示信息用于提示所述目标显示位置不符合所述目标显示条件。
  4. 根据权利要求3所述的方法,所述在所述目标显示位置不符合所述目标显示条件的情况下,在所述虚拟场景中显示第一提示信息,包括:
    在所述目标显示位置不符合所述目标显示条件的情况下,在所述虚拟场景中,以目标颜色显示提示图形,所述提示图形用于表示所述第一目标载具的轮廓。
  5. 根据权利要求2所述的方法,所述响应于对所述合成控件的触发操作,确定所述第一目标载具在所述虚拟场景中的目标显示位置,包括:
    响应于对所述合成控件的拖动操作,将所述拖动操作的结束位置确定为所述第一目标载具在所述虚拟场景中的目标显示位置。
  6. 根据权利要求2所述的方法,所述在所述目标显示位置符合目标显示条件的情况下,在所述目标显示位置上显示所述第一目标载具,包括:
    在所述目标显示位置符合所述目标显示条件的情况下,控制所述第一目标载具从所述虚拟场景的天空下落至所述目标显示位置。
  7. 根据权利要求6所述的方法,所述控制所述第一目标载具从所述虚拟场景的天空下落至所述目标显示位置,包括:
    控制所述第一目标载具按照目标移动速度,从所述虚拟场景的天空下落至所述目标显示位置,所述目标移动速度与所述虚拟载具的类型相关联。
  8. 根据权利要求6所述的方法,所述控制所述第一目标载具从所述虚拟场景的天空下落至所述目标显示位置之前,所述方法还包括:
    在所述目标显示位置上显示虚拟烟雾,所述虚拟烟雾用于提醒所述第一目标载具将要下 落至所述目标显示位置。
  9. 根据权利要求6所述的方法,所述方法还包括下述任一项:
    在所述第一目标载具在下落过程中与任一虚拟载具接触的情况下,将所述任一虚拟载具的状态设置为被摧毁;
    在所述第一目标载具在下落过程中与任一虚拟对象接触的情况下,将所述任一虚拟对象的状态设置为被击败。
  10. 根据权利要求2-9任一项所述的方法,所述目标显示条件是指下述至少一项:
    所述目标显示位置的面积大于或等于所述第一目标载具的占用面积;
    所述目标显示位置的上方不存在任一虚拟建筑物。
  11. 根据权利要求1所述的方法,所述在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,在所述虚拟场景中显示合成控件,包括:
    在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,将所述多个虚拟零件转化为一个第一目标道具;
    在转化得到所述第一目标道具的情况下,在所述虚拟场景中显示所述合成控件。
  12. 根据权利要求1所述的方法,所述虚拟场景中显示有虚拟售卖机,所述虚拟售卖机用于提供虚拟零件,所述响应于零件展示指令,在虚拟场景中显示零件展示区域之前,所述方法还包括:
    在所述被控虚拟对象与所述虚拟售卖机之间的距离小于或等于第一距离阈值的情况下,在所述虚拟场景中显示零件选择界面,所述零件选择界面上显示有多个待选择的虚拟零件;
    响应于对所述多个待选择的虚拟零件中任一虚拟零件的选择操作,将被选中的虚拟零件确定为所述被控虚拟对象拥有的虚拟零件。
  13. 根据权利要求12所述的方法,所述响应于对所述多个待选择的虚拟零件中任一虚拟零件的选择操作,将被选中的虚拟零件确定为所述被控虚拟对象拥有的虚拟零件,包括:
    响应于对所述多个待选择的虚拟零件中任一虚拟零件的选择操作,采用被选中的虚拟零件替换所述被控虚拟对象拥有的相同类型的虚拟零件。
  14. 根据权利要求12所述的方法,所述将被选中的虚拟零件确定为所述被控虚拟对象拥有的虚拟零件之后,所述方法还包括:
    响应于对所述多个待选择的虚拟零件中其他虚拟零件的选择操作,显示第二提示信息,所述第二提示信息用于提示无法选择虚拟零件。
  15. 根据权利要求1所述的方法,所述响应于零件展示指令,在虚拟场景中显示零件展示区域之前,所述方法还包括:
    在任一虚拟对象的健康值符合目标健康值条件的情况下,在目标掉落位置上显示所述任一虚拟对象拥有的多个虚拟零件,所述目标掉落位置为所述虚拟场景中,所述任一虚拟对象所在的位置;
    在所述被控虚拟对象与所述目标掉落位置之间的距离小于或等于第二距离阈值的情况下,将多个第一类型的虚拟零件确定为所述被控虚拟对象拥有的虚拟零件,所述第一类型的虚拟零件是指所述任一虚拟对象拥有的多个虚拟零件中,所述被控虚拟对象尚未拥有的零件类型对应的虚拟零件。
  16. 根据权利要求15所述的方法,所述方法还包括:
    在所述被控虚拟对象与所述目标掉落位置之间的距离小于或等于所述第二距离阈值的情况下,在所述虚拟场景中显示零件拾取界面,所述零件拾取界面上显示有多个第二类型的虚拟零件,所述多个第二类型的虚拟零件为所述任一虚拟对象所拥有的多个虚拟零件中,所述被控虚拟对象已经拥有的零件类型对应的虚拟零件;
    响应于在所述零件拾取界面上的选择操作,采用被选中的虚拟零件替换所述被控虚拟对象拥有的相同类型的虚拟零件。
  17. 根据权利要求1所述的方法,所述方法还包括:
    响应于对第二目标道具的拾取操作,在所述虚拟场景中显示所述合成控件;
    响应于对所述合成控件的触发操作,在虚拟场景中显示第二目标载具,所述第二目标载具为所述第二目标道具对应的虚拟载具。
  18. 根据权利要求17所述的方法,所述在所述虚拟场景中显示所述合成控件之前,所述方法还包括:
    在所述被控虚拟对象拥有任一虚拟零件的情况下,将所述任一虚拟零件丢弃在所述虚拟场景中。
  19. 根据权利要求17所述的方法,所述响应于对第二目标道具的拾取操作,在所述虚拟场景中显示所述合成控件之后,所述方法还包括:
    响应于对任一虚拟零件的拾取操作,将所述第二目标道具丢弃在所述虚拟场景中。
  20. 一种虚拟载具的显示装置,所述装置包括:
    区域显示模块,用于响应于零件展示指令,在虚拟场景中显示零件展示区域,所述零件展示区域用于展示被控虚拟对象拥有的虚拟零件;
    控件显示模块,用于在所述零件展示区域中展示的多个虚拟零件符合目标条件的情况下,在所述虚拟场景中显示合成控件;
    载具显示模块,用于响应于对所述合成控件的触发操作,在所述虚拟场景中显示第一目标载具,所述第一目标载具为由所述多个虚拟零件合成的虚拟载具。
  21. 一种计算机设备,所述计算机设备包括一个或多个处理器和一个或多个存储器,所述一个或多个存储器中存储有至少一条计算机程序,所述计算机程序由所述一个或多个处理器加载并执行以实现如权利要求1至权利要求19任一项所述的虚拟载具的显示方法。
  22. 一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条计算机程序,所述计算机程序由处理器加载并执行以实现如权利要求1至权利要求19任一项所述的虚拟载具的显示方法。
PCT/CN2022/082663 2021-04-25 2022-03-24 虚拟载具的显示方法、装置、设备以及存储介质 WO2022227958A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/987,302 US20230072503A1 (en) 2021-04-25 2022-11-15 Display method and apparatus for virtual vehicle, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110450247.3A CN113144597B (zh) 2021-04-25 2021-04-25 虚拟载具的显示方法、装置、设备以及存储介质
CN202110450247.3 2021-04-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/987,302 Continuation US20230072503A1 (en) 2021-04-25 2022-11-15 Display method and apparatus for virtual vehicle, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022227958A1 true WO2022227958A1 (zh) 2022-11-03

Family

ID=76870489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/082663 WO2022227958A1 (zh) 2021-04-25 2022-03-24 虚拟载具的显示方法、装置、设备以及存储介质

Country Status (3)

Country Link
US (1) US20230072503A1 (zh)
CN (1) CN113144597B (zh)
WO (1) WO2022227958A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113144597B (zh) * 2021-04-25 2023-03-17 腾讯科技(深圳)有限公司 虚拟载具的显示方法、装置、设备以及存储介质
CN113694517A (zh) * 2021-08-11 2021-11-26 网易(杭州)网络有限公司 信息显示控制方法、装置和电子设备
CN114558323A (zh) * 2022-01-29 2022-05-31 腾讯科技(深圳)有限公司 道具合成方法和装置、存储介质及电子设备
CN114461328B (zh) * 2022-02-10 2023-07-25 网易(杭州)网络有限公司 虚拟物品布设方法、装置及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018017626A2 (en) * 2016-07-18 2018-01-25 Patrick Baudisch System and method for editing 3d models
CN110681158A (zh) * 2019-10-14 2020-01-14 北京代码乾坤科技有限公司 虚拟载具的处理方法、存储介质、处理器及电子装置
CN111603766A (zh) * 2020-06-29 2020-09-01 上海完美时空软件有限公司 虚拟载具的控制方法及装置、存储介质、电子装置
CN111672101A (zh) * 2020-05-29 2020-09-18 腾讯科技(深圳)有限公司 虚拟场景中的虚拟道具获取方法、装置、设备及存储介质
CN112090083A (zh) * 2020-10-12 2020-12-18 腾讯科技(深圳)有限公司 一种虚拟道具的生成方法以及相关装置
CN112121433A (zh) * 2020-09-30 2020-12-25 腾讯科技(深圳)有限公司 虚拟道具的处理方法、装置、设备及计算机可读存储介质
CN113144597A (zh) * 2021-04-25 2021-07-23 腾讯科技(深圳)有限公司 虚拟载具的显示方法、装置、设备以及存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018017626A2 (en) * 2016-07-18 2018-01-25 Patrick Baudisch System and method for editing 3d models
CN110681158A (zh) * 2019-10-14 2020-01-14 北京代码乾坤科技有限公司 虚拟载具的处理方法、存储介质、处理器及电子装置
CN111672101A (zh) * 2020-05-29 2020-09-18 腾讯科技(深圳)有限公司 虚拟场景中的虚拟道具获取方法、装置、设备及存储介质
CN111603766A (zh) * 2020-06-29 2020-09-01 上海完美时空软件有限公司 虚拟载具的控制方法及装置、存储介质、电子装置
CN112121433A (zh) * 2020-09-30 2020-12-25 腾讯科技(深圳)有限公司 虚拟道具的处理方法、装置、设备及计算机可读存储介质
CN112090083A (zh) * 2020-10-12 2020-12-18 腾讯科技(深圳)有限公司 一种虚拟道具的生成方法以及相关装置
CN113144597A (zh) * 2021-04-25 2021-07-23 腾讯科技(深圳)有限公司 虚拟载具的显示方法、装置、设备以及存储介质

Also Published As

Publication number Publication date
CN113144597A (zh) 2021-07-23
CN113144597B (zh) 2023-03-17
US20230072503A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
WO2022227958A1 (zh) 虚拟载具的显示方法、装置、设备以及存储介质
JP7476235B2 (ja) 仮想オブジェクトの制御方法、装置、デバイス及びコンピュータプログラム
WO2021184806A1 (zh) 互动道具显示方法、装置、终端及存储介质
WO2021043069A1 (zh) 虚拟对象的受击提示方法、装置、终端及存储介质
CN113181650A (zh) 虚拟场景中召唤对象的控制方法、装置、设备及存储介质
US20230040737A1 (en) Method and apparatus for interaction processing of virtual item, electronic device, and readable storage medium
CN110507990B (zh) 基于虚拟飞行器的互动方法、装置、终端及存储介质
WO2022227936A1 (zh) 虚拟场景的显示方法、虚拟场景的处理方法、装置及设备
CN113181649B (zh) 虚拟场景中召唤对象的控制方法、装置、设备及存储介质
WO2022042435A1 (zh) 虚拟环境画面的显示方法、装置、设备及存储介质
CN112057857B (zh) 互动道具处理方法、装置、终端及存储介质
CN113633964B (zh) 虚拟技能的控制方法、装置、设备及计算机可读存储介质
WO2022017111A1 (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
WO2023142617A1 (zh) 基于虚拟场景的射线显示方法、装置、设备以及存储介质
JP2024512582A (ja) 仮想アイテムの表示方法、装置、電子機器及びコンピュータプログラム
CN111921190B (zh) 虚拟对象的道具装备方法、装置、终端及存储介质
WO2022095672A1 (zh) 画面显示方法、装置、设备以及存储介质
CN113713383B (zh) 投掷道具控制方法、装置、计算机设备及存储介质
CN112121433B (zh) 虚拟道具的处理方法、装置、设备及计算机可读存储介质
WO2024093941A1 (zh) 虚拟场景中控制虚拟对象的方法、装置、设备及产品
WO2024098628A1 (zh) 游戏交互方法、装置、终端设备及计算机可读存储介质
US20220212107A1 (en) Method and Apparatus for Displaying Interactive Item, Terminal, and Storage Medium
CN116549972A (zh) 虚拟资源处理方法、装置、计算机设备及存储介质
CN117065347A (zh) 虚拟物资拾取方法、装置、计算机设备和存储介质
CN116712733A (zh) 虚拟角色的控制方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794434

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794434

Country of ref document: EP

Kind code of ref document: A1